International Association for Cryptologic Research


CryptoDB

Itai Dinur

Publications

Year / Venue / Title
2021 TOSC
2021 EUROCRYPT
Cryptanalytic Applications of the Polynomial Method for Solving Multivariate Equation Systems over GF(2)
Itai Dinur
At SODA 2017, Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity, a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of $n$. In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{(1 - 1/(2.7d))n}$ for any $d \geq 2$. We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function. Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
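For a quick sense of when these bounds beat brute force, the sketch below evaluates the abstract's bound $n^2 \cdot 2^{(1 - 1/(2.7d))n}$ next to a rough exhaustive-search cost of $n^2 \cdot 2^n$ bit operations. The per-candidate evaluation cost and the choices of $n$ and $d$ are illustrative assumptions, not values taken from the paper.

```python
# Compare the abstract's complexity bound for the polynomial-method solver with
# a rough exhaustive-search estimate. Parameter choices are illustrative only.
import math

def polynomial_method_log2(n: int, d: int) -> float:
    """log2 of the abstract's bound: n^2 * 2^((1 - 1/(2.7*d)) * n) bit operations."""
    return 2 * math.log2(n) + (1 - 1 / (2.7 * d)) * n

def exhaustive_search_log2(n: int) -> float:
    """log2 of a heuristic exhaustive-search cost: 2^n candidates at ~n^2 bit ops each."""
    return 2 * math.log2(n) + n

for n in (64, 128, 256):          # sizes chosen for illustration only
    for d in (2, 4):
        print(f"n={n:3d} d={d}: poly-method ~2^{polynomial_method_log2(n, d):6.1f}, "
              f"exhaustive ~2^{exhaustive_search_log2(n):6.1f}")
```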
2021 CRYPTO
MPC-Friendly Symmetric Cryptography from Alternating Moduli: Candidates, Protocols, and Applications
We study new candidates for symmetric cryptographic primitives that leverage alternation between linear functions over $\mathbb{Z}_2$ and $\mathbb{Z}_3$ to support fast protocols for secure multiparty computation (MPC). This continues the study of weak pseudorandom functions of this kind initiated by Boneh et al. (TCC 2018) and Cheon et al. (PKC 2021). We make the following contributions. (Candidates). We propose new designs of symmetric primitives based on alternating moduli. These include candidate one-way functions, pseudorandom generators, and weak pseudorandom functions. We propose concrete parameters based on cryptanalysis. (Protocols). We provide a unified approach for securely evaluating modulus-alternating primitives in different MPC models. For the original candidate of Boneh et al., our protocols obtain at least 2x improvement in all performance measures. We report efficiency benchmarks of an optimized implementation. (Applications). We showcase the usefulness of our candidates for a variety of applications. This includes short ``Picnic-style'' signature schemes, as well as protocols for oblivious pseudorandom functions, hierarchical key derivation, and distributed key generation for function secret sharing.
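A toy illustration of the alternating-moduli idea: a map that applies a random linear layer over $\mathbb{Z}_2$ followed by a random linear layer over $\mathbb{Z}_3$. The dimensions, key sampling and output encoding below are illustrative assumptions; the paper's concrete one-way function, PRG and weak-PRF candidates are specified differently.

```python
# Didactic sketch of alternation between a mod-2 and a mod-3 linear layer.
# Matrix shapes and the output encoding are illustrative choices, not the
# candidates proposed in the paper.
import secrets

def rand_matrix(rows, cols, modulus):
    return [[secrets.randbelow(modulus) for _ in range(cols)] for _ in range(rows)]

def matvec(matrix, vec, modulus):
    return [sum(a * b for a, b in zip(row, vec)) % modulus for row in matrix]

n, m, t = 128, 128, 81            # illustrative dimensions
A = rand_matrix(m, n, 2)          # secret linear layer over Z_2
B = rand_matrix(t, m, 3)          # secret linear layer over Z_3

def alternating_moduli_map(x_bits):
    """Input: n bits. Output: t symbols in Z_3, via a Z_2 layer then a Z_3 layer."""
    u = matvec(A, x_bits, 2)      # mod-2 layer produces a 0/1 vector
    return matvec(B, u, 3)        # mod-3 layer applied to that vector

x = [secrets.randbelow(2) for _ in range(n)]
print(alternating_moduli_map(x)[:8])
```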
2020 TOSC
2020 EUROCRYPT
Tight Time-Space Lower Bounds for Finding Multiple Collision Pairs and Their Applications
Itai Dinur
We consider a \emph{collision search problem} (CSP), where given a parameter $C$, the goal is to find $C$ collision pairs in a random function $f:[N] \rightarrow [N]$ (where $[N] = \{0,1,\ldots,N-1\}$) using $S$ bits of memory. Algorithms for CSP have numerous cryptanalytic applications such as space-efficient attacks on double and triple encryption. The best known algorithm for CSP is \emph{parallel collision search} (PCS), published by van Oorschot and Wiener, which achieves the time-space tradeoff $T^2 \cdot S = \tilde{O}(C^2 \cdot N)$. In this paper, we prove that any algorithm for CSP satisfies $T^2 \cdot S = \tilde{\Omega}(C^2 \cdot N)$, hence the best known time-space tradeoff is optimal (up to poly-logarithmic factors in $N$). On the other hand, we give strong evidence that proving similar unconditional time-space tradeoff lower bounds on CSP applications (such as breaking double and triple encryption) may be very difficult, and would imply a breakthrough in complexity theory. Hence, we propose a new restricted model of computation and prove that under this model, the best known time-space tradeoff attack on double encryption is optimal.
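For a sense of scale, the tradeoff curve $T^2 \cdot S = \tilde{O}(C^2 \cdot N)$ can be solved for $T$ directly. The snippet below does exactly that, dropping the hidden poly-logarithmic factors; the parameter choices are illustrative only.

```python
# Solve the PCS tradeoff T^2 * S = C^2 * N for T (poly-log factors dropped).
def pcs_time_log2(log_n: float, log_c: float, log_s: float) -> float:
    """log2 of T from T^2 * S = C^2 * N, all quantities given as log2."""
    return (2 * log_c + log_n - log_s) / 2

# Illustrative example: C = 2^40 collision pairs in a function on N = 2^128
# points, with S = 2^48 bits of memory.
print(pcs_time_log2(log_n=128, log_c=40, log_s=48))   # 80.0, i.e. T ~ 2^80
```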
2020 EUROCRYPT
On the Streaming Indistinguishability of a Random Permutation and a Random Function
Itai Dinur
An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma which allowed proving that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
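The $Q \cdot S/N$ advantage of the known attacks mentioned above comes from a simple memory-bounded strategy: remember a few stream elements and check whether any of them reappears, which can only happen when sampling is done with replacement. Below is a minimal simulation sketch with purely illustrative parameters.

```python
# Memory-bounded streaming distinguisher: store the first s elements and guess
# "random function" iff one of them repeats later in the stream.
import random

def looks_like_function(stream, s):
    stored = set()
    for x in stream:
        if x in stored:
            return True          # a repeat is impossible for a permutation
        if len(stored) < s:
            stored.add(x)
    return False

# Illustrative sizes only: domain N, stream length Q, memory for s elements.
N, Q, s = 2**16, 2**10, 2**6
perm_stream = random.sample(range(N), Q)               # without replacement
func_stream = [random.randrange(N) for _ in range(Q)]  # with replacement
print(looks_like_function(perm_stream, s), looks_like_function(func_stream, s))
```

On the permutation stream the test never fires; on the function stream it fires with probability on the order of $s \cdot Q/N$, which is the shape of advantage the abstract attributes to known attacks (up to the translation between stored elements and bits of memory).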
2020 CRYPTO
Out of Oddity -- New Cryptanalytic Techniques against Symmetric Primitives Optimized for Integrity Proof Systems
The security and performance of many integrity proof systems like SNARKs, STARKs and Bulletproofs highly depend on the underlying hash function. For this reason several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
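The primitives targeted here are arithmetization-friendly: they work natively over a large field of odd characteristic and use low-degree power maps in place of bit-oriented S-boxes. The sketch below shows a MiMC-style keyed iteration of that flavour over the prime $p = 2^{64} - 2^{32} + 1$ with the power map $x^7$ (a bijection since $\gcd(7, p-1) = 1$); it is a didactic stand-in with made-up constants, not the GMiMC or HadesMiMC specification.

```python
# Toy MiMC-style keyed iteration over a prime field of odd characteristic.
# NOT GMiMC or HadesMiMC (those are Feistel/SPN designs with their own key
# schedules and constants); this only illustrates the "power map over F_p"
# design style the abstract refers to.
P = 2**64 - 2**32 + 1                                   # prime with gcd(7, P-1) = 1
ROUND_CONSTANTS = [(i * i + 1) % P for i in range(20)]  # made-up constants

def toy_power_map_iteration(x: int, key: int) -> int:
    for c in ROUND_CONSTANTS:
        x = pow((x + key + c) % P, 7, P)                # add key/constant, apply x^7
    return (x + key) % P

print(toy_power_map_iteration(12345, 678910))
```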
2019 EUROCRYPT
Linear Equivalence of Block Ciphers with Partial Non-Linear Layers: Application to LowMC
LowMC is a block cipher family designed in 2015 by Albrecht et al. It is optimized for practical instantiations of multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. LowMC is used in the Picnic signature scheme, submitted to NIST's post-quantum standardization project, and is a substantial building block in other novel post-quantum cryptosystems. Many LowMC instances use a relatively recent design strategy (initiated by Gérard et al. at CHES 2013) of applying the non-linear layer to only a part of the state in each round, where the shortage of non-linear operations is partially compensated by heavy linear algebra. Since the high linear algebra complexity has been a bottleneck in several applications, one of the open questions raised by the designers was to reduce it, without introducing additional non-linear operations (or compromising security). In this paper, we consider LowMC instances with block size $n$, partial non-linear layers of size $s \le n$ and $r$ encryption rounds. We redesign LowMC's linear components in a way that preserves its specification, yet improves LowMC's performance in essentially every aspect. Most of our optimizations are applicable to all SP-networks with partial non-linear layers and shed new light on this relatively new design methodology. Our main result shows that when $s < n$, each LowMC instance belongs to a large class of equivalent instances that differ in their linear layers. We then select a representative instance from this class for which encryption (and decryption) can be implemented much more efficiently than for an arbitrary instance. This yields a new encryption algorithm that is equivalent to the standard one, but reduces the evaluation time and storage of the linear layers from $r \cdot n^2$ bits to about $r \cdot n^2 - (r-1)(n-s)^2$. Additionally, we reduce the size of LowMC's round keys and constants and optimize its key schedule and instance generation algorithms. All of these optimizations give substantial improvements for small $s$ and a reasonable choice of $r$. Finally, we formalize the notion of linear equivalence of block ciphers and prove the optimality of some of our results. Comprehensive benchmarking of our optimizations in various LowMC applications (such as Picnic) reveals improvements by factors that typically range between 2x and 40x in runtime and memory consumption.
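The headline reduction, from $r \cdot n^2$ bits of linear-layer material to about $r \cdot n^2 - (r-1)(n-s)^2$, is easy to evaluate for concrete parameters. The snippet below does so for a couple of illustrative $(n, s, r)$ triples; they are not taken from the official LowMC parameter sets.

```python
# Evaluate the abstract's linear-layer cost formula for illustrative parameters.
def lowmc_linear_layer_bits(n: int, s: int, r: int):
    original = r * n * n
    optimized = r * n * n - (r - 1) * (n - s) * (n - s)
    return original, optimized

for n, s, r in [(256, 30, 20), (128, 3, 100)]:   # illustrative, not official instances
    orig, opt = lowmc_linear_layer_bits(n, s, r)
    print(f"n={n}, s={s}, r={r}: {orig} -> {opt} bits ({orig / opt:.1f}x smaller)")
```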
2019 EUROCRYPT
Multi-target Attacks on the Picnic Signature Scheme and Related Protocols
Itai Dinur, Niv Nadler
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST's post-quantum standardization project. Among all submissions to NIST's project, Picnic is one of the most innovative, making use of recent progress in the construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to $S$ signatures, produced by a single user or by several users, our attack can (information theoretically) recover the $\kappa$-bit signing key of a user in complexity of about $2^{\kappa - 7}/S$. This is faster than Picnic's claimed $2^{\kappa}$ security against classical (non-quantum) attacks by a factor of $2^7 \cdot S$ (as each signature contains about $2^7$ attack targets). Whereas in most multi-target attacks, the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information-theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $\kappa = 128$ has time complexity of $T = 2^{77}$ for $S = 2^{64}$. Alternatively, we can reach the information-theoretic complexity of $T = 2^{64}$ for $S = 2^{57}$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way that the Picnic signing algorithm uses a pseudo-random generator. The weakness is fixed in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
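In exponent form, the information-theoretic cost $2^{\kappa - 7}/S$ is simply $\kappa - 7 - \log_2 S$ bits of work; a small restatement for $\kappa = 128$ follows (the values of $S$ echo the abstract's examples, and the best computational attack reported is slower, $2^{77}$ for $S = 2^{64}$).

```python
# Restate the abstract's information-theoretic key-recovery cost 2^(kappa-7)/S.
kappa = 128
for log_s in (0, 57, 64):
    print(f"S = 2^{log_s:2d}: information-theoretic cost ~ 2^{kappa - 7 - log_s}")
```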
2019 JOFC
Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications
In this paper, we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with $r$ independent $n$-bit keys. All the previous error-free attacks required time $T$ and memory $M$ satisfying $TM = 2^{rn}$, and even if “false negatives” are allowed, no attack could achieve $TM < 2^{3rn/4}$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of $TM$, such as $T = 2^{4n}$ time and $M = 2^{n}$ memory for breaking the sequential execution of $r = 7$ block ciphers. The improvement ratio we obtain increases in an unbounded way as $r$ increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to improve rebound attacks on hash functions and to solve with better time complexities (for small memory complexities) hard combinatorial search problems, such as the well-known knapsack problem.
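Writing the abstract's multiple-encryption example in exponents of $2^n$ makes the gap explicit: previous error-free attacks needed $TM = 2^{7n}$ for $r = 7$, the barrier with false negatives is $2^{3 \cdot 7n/4} = 2^{5.25n}$, and dissection achieves $TM = 2^{4n} \cdot 2^{n} = 2^{5n}$. The snippet below just restates this arithmetic.

```python
# Exponents of the time*memory product (in units of n) for r = 7 encryptions,
# restating the abstract's numbers.
r = 7
previous_error_free = r                    # TM = 2^(r*n)
previous_with_false_negatives = 3 * r / 4  # TM >= 2^(3*r*n/4)
dissection = 4 + 1                         # T = 2^(4n), M = 2^(n)
print(previous_error_free, previous_with_false_negatives, dissection)  # 7 5.25 5
```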
2019 JOFC
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing
The distributed discrete logarithm (DDL) problem was introduced by Boyle, Gilboa and Ishai at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let $g$ be a generator of a multiplicative group $\mathbb{G}$. Given a random group element $g^{x}$ and an unknown integer $b \in [-M,M]$ for a small $M$, two parties $A$ and $B$ (that cannot communicate) successfully solve DDL if $A(g^{x}) - B(g^{x+b}) = b$. Otherwise, the parties err. In the DDL protocol of Boyle et al., $A$ and $B$ run in time $T$ and have error probability that is roughly linear in $M/T$. Since it has a significant impact on the HSS scheme's performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of $T$. In this paper we devise a new DDL protocol that substantially reduces the error probability to $O(M \cdot T^{-2})$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size $S$ from $O(S^2)$ to $O(S^{3/2})$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length $R$ in time $o(\sqrt{R})$. Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
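A toy version of the baseline DDL protocol (the one with error roughly linear in $M/T$) can be written with a fixed-stride walk to distinguished points: each party repeatedly multiplies its element by $g$ until a pseudorandom predicate fires, and outputs its step count, so the two outputs differ by exactly $b$ unless a distinguished point falls between the starting points. The group, predicate and parameters below are illustrative assumptions; the paper's improved protocol replaces the fixed stride with steps whose expected length gradually grows.

```python
# Toy distributed discrete log (DDL) sketch with distinguished points.
# Group, predicate and parameters are illustrative; this is the baseline
# fixed-stride walk, not the paper's improved random walk.
import hashlib

P = 2**61 - 1          # toy prime modulus for the multiplicative group
G = 3                  # toy base element
T = 1 << 16            # distinguished points have density roughly 1/T

def is_distinguished(h: int) -> bool:
    digest = hashlib.sha256(h.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big") % T == 0

def ddl_share(h: int) -> int:
    """Walk h, h*G, h*G^2, ... and return the number of steps to a distinguished point."""
    for steps in range(4 * T):
        if is_distinguished(h):
            return steps
        h = (h * G) % P
    return 4 * T       # gave up; counts as an error

x, b = 123456789, 5
share_a = ddl_share(pow(G, x, P))
share_b = ddl_share(pow(G, x + b, P))
print(share_a - share_b == b)   # True unless a distinguished point separated the parties
```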
2019 JOFC
Cryptanalytic Time–Memory–Data Trade-offs for FX-Constructions and the Affine Equivalence Problem
Itai Dinur
The FX-construction was proposed in 1996 by Kilian and Rogaway as a generalization of the DESX scheme. The construction increases the security of an $n$-bit core block cipher with a $\kappa$-bit key by using two additional $n$-bit masking keys. Recently, several concrete instances of the FX-construction were proposed, including PRINCE, PRIDE and MANTIS (presented at ASIACRYPT 2012, CRYPTO 2014 and CRYPTO 2016, respectively). In this paper, we devise new cryptanalytic time–memory–data trade-off attacks on FX-constructions. By fine-tuning the parameters to the recent FX-construction proposals, we show that the security margin of these ciphers against practical attacks is smaller than expected. Our techniques combine a special form of time–memory–data trade-offs, typically applied to stream ciphers, with a cryptanalytic technique by Fouque, Joux and Mavromati. In the final part of the paper, we show that the techniques we use in cryptanalysis of the FX-construction are applicable to additional schemes. In particular, we use related methods in order to devise new time–memory trade-offs for solving the affine equivalence problem. In this problem, the input consists of two functions $F,G: \{0,1\}^n \rightarrow \{0,1\}^n$, and the goal is to determine whether there exist invertible affine transformations $A_1,A_2$ over $GF(2)^n$ such that $G = A_2 \circ F \circ A_1$.
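For reference, the FX-construction itself is the whitening wrapper $\mathrm{FX}_{k,k_1,k_2}(x) = E_k(x \oplus k_1) \oplus k_2$. The sketch below wraps a meaningless stand-in permutation just to make the wrapper concrete; it is not PRINCE, PRIDE or MANTIS, and the block and key sizes are illustrative.

```python
# FX-construction wrapper around a placeholder 64-bit keyed permutation.
# The core "cipher" below is a toy mixing function, NOT a real block cipher.
N_BITS = 64
MASK = (1 << N_BITS) - 1

def toy_core_cipher(k: int, x: int) -> int:
    """Placeholder keyed permutation on 64-bit blocks (for illustration only)."""
    x = (x + k) & MASK
    for _ in range(4):
        x ^= (x << 13) & MASK
        x ^= x >> 7
        x = (x * 0x9E3779B97F4A7C15 + k) & MASK
    return x

def fx_encrypt(k: int, k1: int, k2: int, x: int) -> int:
    """FX_{k,k1,k2}(x) = E_k(x XOR k1) XOR k2."""
    return toy_core_cipher(k, x ^ k1) ^ k2

print(hex(fx_encrypt(k=0xDEADBEEF, k1=0x1111, k2=0x2222, x=0x0123456789ABCDEF)))
```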
2019 JOFC
Generic Attacks on Hash Combiners
Hash combiners are a practical way to make cryptographic hash functions more tolerant to future attacks and compatible with existing infrastructure. A combiner combines two or more hash functions in a way that is hopefully more secure than each of the underlying hash functions, or at least remains secure as long as one of them is secure. Two classical hash combiners are the exclusive-or (XOR) combiner $\mathcal{H}_1(M) \oplus \mathcal{H}_2(M)$ and the concatenation combiner $\mathcal{H}_1(M) \Vert \mathcal{H}_2(M)$. Both of them process the same message using the two underlying hash functions in parallel. Apart from parallel combiners, there are also cascade constructions that sequentially call the underlying hash functions to process the message repeatedly, such as Hash-Twice $\mathcal{H}_2(\mathcal{H}_1(IV, M), M)$ and the Zipper hash $\mathcal{H}_2(\mathcal{H}_1(IV, M), \overleftarrow{M})$, where $\overleftarrow{M}$ is the reverse of the message $M$. In this work, we study the security of these hash combiners by devising the best-known generic attacks. The results show that the security of most of the combiners is not as high as commonly believed. We summarize our attacks and their computational complexities (ignoring the polynomial factors) as follows:
1. Several generic preimage attacks on the XOR combiner: a first attack with a best-case complexity of $2^{5n/6}$, obtained for messages of length $2^{n/3}$. It relies on a novel technical tool named the interchange structure and is applicable to combiners whose underlying hash functions follow the Merkle–Damgård construction or the HAIFA framework. A second attack with a best-case complexity of $2^{2n/3}$, obtained for messages of length $2^{n/2}$. It exploits properties of functional graphs of random mappings and achieves a significant improvement over the first attack, but is only applicable when the underlying hash functions use the Merkle–Damgård construction. An improvement upon the second attack with a best-case complexity of $2^{5n/8}$, obtained for messages of length $2^{5n/8}$. It further exploits properties of functional graphs of random mappings and uses longer messages. These attacks show a rather surprising result: regarding preimage resistance, the sum of two $n$-bit narrow-pipe hash functions following the considered constructions can never provide $n$-bit security.
2. A generic second-preimage attack on the concatenation combiner of two Merkle–Damgård hash functions. This attack finds second preimages faster than $2^n$ for challenges longer than $2^{2n/7}$ and has a best-case complexity of $2^{3n/4}$, obtained for challenges of length $2^{3n/4}$. It also exploits properties of functional graphs of random mappings.
3. The first generic second-preimage attack on the Zipper hash with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is $2^{3n/5}$, obtained for challenge messages of length $2^{2n/5}$.
4. An improved generic second-preimage attack on Hash-Twice with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is $2^{13n/22}$, obtained for challenge messages of length $2^{13n/22}$.
The last three attacks show that regarding second-preimage resistance, the concatenation and cascade of two $n$-bit narrow-pipe Merkle–Damgård hash functions do not provide much more security than can be provided by a single $n$-bit hash function. Our main technical contributions include the following:
1. The interchange structure, which enables simultaneously controlling the behaviours of two hash computations sharing the same input.
2. The simultaneous expandable message, which is a set of messages of length covering a whole appropriate range that is multi-collision for both of the underlying hash functions.
3. New ways to exploit the properties of functional graphs of random mappings generated by fixing the message block input to the underlying compression functions.
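The combiners discussed above are easy to write down; the sketch below instantiates them with SHA-256 and SHA3-256 purely for illustration. The XOR and concatenation combiners are exact; Hash-Twice and the Zipper hash are only approximated, because hashlib does not expose the chaining value, so the intermediate digest is prepended to the message instead of being used as the IV.

```python
# Hash combiner sketches. H1/H2 are stand-ins; the cascade combiners are
# approximations (see the note above about the chaining value).
import hashlib

def h1(data: bytes) -> bytes: return hashlib.sha256(data).digest()
def h2(data: bytes) -> bytes: return hashlib.sha3_256(data).digest()

def xor_combiner(m: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(h1(m), h2(m)))

def concat_combiner(m: bytes) -> bytes:
    return h1(m) + h2(m)

def hash_twice_approx(m: bytes) -> bytes:      # H2(H1(IV, M), M), approximated
    return h2(h1(m) + m)

def zipper_approx(m: bytes) -> bytes:          # H2(H1(IV, M), reverse(M)), approximated
    return h2(h1(m) + m[::-1])

msg = b"hash combiner example"
print(xor_combiner(msg).hex())
print(concat_combiner(msg).hex())
print(hash_twice_approx(msg).hex())
print(zipper_approx(msg).hex())
```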
2018 EUROCRYPT
2018 CRYPTO
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing
The distributed discrete logarithm (DDL) problem was introduced by Boyle et al. at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let $g$ be a generator of a multiplicative group $\mathbb{G}$. Given a random group element $g^{x}$ and an unknown integer $b \in [-M,M]$ for a small $M$, two parties $A$ and $B$ (that cannot communicate) successfully solve DDL if $A(g^{x}) - B(g^{x+b}) = b$. Otherwise, the parties err. In the DDL protocol of Boyle et al., $A$ and $B$ run in time $T$ and have error probability that is roughly linear in $M/T$. Since it has a significant impact on the HSS scheme's performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of $T$. In this paper we devise a new DDL protocol that substantially reduces the error probability to $O(M \cdot T^{-2})$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size $S$ from $O(S^2)$ to $O(S^{3/2})$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length $R$ in time $o(\sqrt{R})$. Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
2017 CRYPTO
2016 EUROCRYPT
2016 CRYPTO
2016 JOFC
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 EUROCRYPT
2015 EUROCRYPT
2015 EUROCRYPT
2015 CRYPTO
2015 ASIACRYPT
2014 CRYPTO
2014 EPRINT
2014 EPRINT
2014 EPRINT
2014 EPRINT
2014 EPRINT
2014 JOFC
2014 ASIACRYPT
2014 FSE
2014 FSE
2013 ASIACRYPT
2013 FSE
2012 CRYPTO
2012 FSE
2012 FSE
2011 FSE
2011 FSE
2011 ASIACRYPT
2009 EUROCRYPT
2009 FSE
2008 EPRINT
Cube Attacks on Tweakable Black Box Polynomials
Itai Dinur, Adi Shamir
Almost any cryptographic scheme can be described by \emph{tweakable polynomials} over $GF(2)$, which contain both secret variables (e.g., key bits) and public variables (e.g., plaintext bits or IV bits). The cryptanalyst is allowed to tweak the polynomials by choosing arbitrary values for the public variables, and his goal is to solve the resultant system of polynomial equations in terms of their common secret variables. In this paper we develop a new technique (called a \emph{cube attack}) for solving such tweakable polynomials, which is a major improvement over several previously published attacks of the same type. For example, on the stream cipher Trivium with a reduced number of initialization rounds, the best previous attack (due to Fischer, Khazaei, and Meier) requires a barely practical complexity of $2^{55}$ to attack $672$ initialization rounds, whereas a cube attack can find the complete key of the same variant in $2^{19}$ bit operations (which take less than a second on a single PC). Trivium with $735$ initialization rounds (which could not be attacked by any previous technique) can now be broken with $2^{30}$ bit operations. Trivium with $767$ initialization rounds can now be broken with $2^{45}$ bit operations, and the complexity of the attack can almost certainly be further reduced to about $2^{36}$ bit operations. Whereas previous attacks were heuristic, had to be adapted to each cryptosystem, had no general complexity bounds, and were not expected to succeed on random-looking polynomials, cube attacks are provably successful when applied to random polynomials of degree $d$ over $n$ secret variables whenever the number $m$ of public variables exceeds $d + \log_d n$. Their complexity is $2^{d-1}n + n^2$ bit operations, which is polynomial in $n$ and amazingly low when $d$ is small. Cube attacks can be applied to any block cipher, stream cipher, or MAC which is provided as a black box (even when nothing is known about its internal structure) as long as at least one output bit can be represented by (an unknown) polynomial of relatively low degree in the secret and public variables.
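A toy end-to-end cube attack illustrates the mechanics: treat $f$ as a black box, sum its output over all assignments of a chosen cube of public variables, and the resulting superpoly is linear in the secret variables once the cube dimension reaches the polynomial degree minus one. The polynomial below is a made-up example, not a real cipher.

```python
# Toy cube attack: summing a black-box polynomial over a cube of public
# variables isolates its (here linear) superpoly in the secret variables.
from itertools import product

def f(k, v):
    """Black-box tweakable polynomial over GF(2): k = secret bits, v = public bits."""
    k0, k1, k2 = k
    v0, v1, v2 = v
    return (v0 & v1 & k0) ^ (v0 & v1 & v2) ^ (v0 & k1 & k2) ^ (v1 & k2) ^ (k0 & k1)

def cube_sum(k, cube_positions, num_public=3):
    """XOR f over all assignments of the cube variables; other public bits stay 0."""
    total = 0
    for assignment in product((0, 1), repeat=len(cube_positions)):
        v = [0] * num_public
        for pos, bit in zip(cube_positions, assignment):
            v[pos] = bit
        total ^= f(k, v)
    return total

# f contains the maxterm v0*v1*(k0 + v2); with v2 fixed to 0 outside the cube,
# the sum over the cube {v0, v1} equals the superpoly k0 for every key.
for key in product((0, 1), repeat=3):
    assert cube_sum(key, [0, 1]) == key[0]
print("cube sum over {v0, v1} equals k0 for every key")
```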

Program Committees

Eurocrypt 2019
FSE 2019
FSE 2018
Eurocrypt 2017
FSE 2017
Crypto 2016
FSE 2016
FSE 2015
Asiacrypt 2014