International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

4 June 2018
ePrint Report Proofs of Work from Worst-Case Assumptions Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of (Ball et al., STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a `proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not.

This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either based only on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of (Ball et al., STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.

Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
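For readers unfamiliar with the fine-grained problems involved, the following minimal Python sketch states the Orthogonal Vectors problem, one of the worst-case problems the PoWs build on; it illustrates only the underlying problem (conjectured to require essentially quadratic time), not the paper's PoW construction.

# Orthogonal Vectors (OV): given two sets of d-dimensional 0/1 vectors,
# decide whether some u in U and v in V are orthogonal over the integers.
# The fine-grained conjecture is that no algorithm solves this in n^(2-eps) time.
from itertools import product

def has_orthogonal_pair(U, V):
    return any(all(a * b == 0 for a, b in zip(u, v)) for u, v in product(U, V))

U = [(1, 0, 1), (0, 1, 1)]
V = [(0, 1, 0), (1, 1, 1)]
print(has_orthogonal_pair(U, V))   # True: (1,0,1) is orthogonal to (0,1,0)
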
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or IND|C. Given games G and H and a correctness condition C we define an advantage measure Adv_{G,H,C}^indc wherein G/H distinguishing attacks are effaced to the extent that they are inevitable due to C. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions. We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated-encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
ePrint Report Non-Interactive Zero-Knowledge Proofs for Composite Statements Shashank Agrawal, Chaya Ganesh, Payman Mohassel
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former is highly efficient for proving algebraic statements, while the latter is superior for statements represented as arithmetic circuits.

Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that combine algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. It leads to proofs for statements of the form: knowledge of $x$ such that $SHA(g^x)=y$ for some public $y$, where the prover performs 500 times fewer exponentiations than with a QAP-based SNARK, at the cost of increasing the proof size to 2404 group and field elements. When applied to anonymous credentials, our techniques result in 8 times fewer exponentiations for the prover, at the cost of increasing the proof size to 298 elements.
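As a concrete toy instance of such a composite statement, the snippet below checks a relation of the form SHA256(g^x mod p) = y, mixing an algebraic component (exponentiation in a group) with a Boolean/arithmetic component (SHA-256); the parameters are illustrative assumptions only, and the snippet merely checks the relation rather than producing a zero-knowledge proof.

# Toy composite statement: knowledge of x such that SHA256(g^x mod p) == y.
# The modulus and generator are illustrative, not a secure group choice.
import hashlib

p = 2**127 - 1          # toy Mersenne prime modulus
g = 5
x = 123456789           # the prover's secret witness
y = hashlib.sha256(pow(g, x, p).to_bytes(16, 'big')).hexdigest()

# A NIZK would prove knowledge of x consistent with the public pair (g, y)
# without revealing x; here we only check the relation directly.
assert hashlib.sha256(pow(g, x, p).to_bytes(16, 'big')).hexdigest() == y
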
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS '16; Durak and Vaudenay, CRYPTO '17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left open the question of whether practical schemes with high security exist.

In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is a set of new known-plaintext message-recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard), which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to FF3 and the way it handles odd-length domains, which leads to a substantial speedup in our attacks.

We also show the first attacks against non-Feistel based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds in the Feistel construction with a pairwise-independent permutation, following the paradigm by Naor and Reingold (JoC, '99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications.

All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
ePrint Report Limits on the Power of Garbling Techniques for Public-Key Encryption Sanjam Garg, Mohammad Hajiabadi, Mohammad Mahmoody, Ameer Mohammed
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC'89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques for achieving this goal.

One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao's garbled circuit technique [FOCS'86]. As for the non-black-box power of this technique, the recent work of Dottling and Garg [CRYPTO'17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption.

We prove that garbling of circuits that have OWF (or even random oracle) gates in them is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.
We introduce a new class of irreducible pentanomials over $\mathbb{F}_2$ of the form $f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to define the finite field extension of degree $m$. We give the exact number of operations required for computing the reduction modulo $f$. We also provide a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$ combined with our reduction process. We give the total cost of the multiplier and find that the bit-parallel multiplier defined by this new class of polynomials has improved XOR and AND complexity. Our multiplier has a time delay comparable to other multipliers based on the Karatsuba algorithm.
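The following is a minimal functional sketch, not the paper's bit-parallel hardware multiplier: carry-less multiplication in $\mathbb{F}_2[x]$ followed by reduction modulo a pentanomial of the stated shape, with illustrative parameters $b, c$ that are not necessarily members of the paper's irreducible family.

def f2x_mul(a, b):
    """Carry-less multiplication of binary polynomials encoded as integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce_mod_pentanomial(r, b, c):
    """Reduce r modulo f(x) = x^(2b+c) + x^(b+c) + x^b + x^c + 1."""
    m = 2 * b + c
    f = (1 << m) | (1 << (b + c)) | (1 << b) | (1 << c) | 1
    while r.bit_length() > m:              # while deg(r) >= m
        shift = r.bit_length() - 1 - m
        r ^= f << shift                    # cancel the leading term of r
    return r

b, c = 5, 3                                # illustrative parameters, m = 2b + c = 13
u, v = 0b1011011011011, 0b1100101100101    # two field elements of degree < 13
w = reduce_mod_pentanomial(f2x_mul(u, v), b, c)
print(bin(w))
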
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
ePrint Report On the Complexity of Compressing Obfuscation Gilad Asharov, Naomi Ephraim, Ilan Komargodski, Rafael Pass
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.

In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters.

We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.

First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.

Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.

Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
ePrint Report Structured Encryption and Leakage Suppression Seny Kamara, Tarik Moataz, Olga Ohrimenko
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. One aspect of STE that is still poorly understood is its leakage. In this work, we describe a general framework to design STE schemes that do not leak the query/search pattern (i.e., if and when a query was previously made).

Our framework consists of two compilers. The first can be used to make any dynamic STE scheme rebuildable in the sense that the encrypted structures it produces can be rebuilt efficiently using only O(1) client storage. The second transforms any rebuildable scheme that leaks the query/search pattern into a new scheme that does not. Our second compiler is a generalization of Goldreich and Ostrovsky's square root oblivious RAM (ORAM) solution but does not make use of black-box ORAM simulation. We show that our framework produces STE schemes with query complexity that is asymptotically better than ORAM simulation in certain (natural) settings and comparable to special-purpose oblivious data structures.

We use our framework to design a new STE scheme that is ``almost'' zero-leakage in the sense that it reveals, intuitively speaking, only a small amount of information. We also show how the scheme can be used to achieve zero-leakage queries when one can tolerate a probabilistic guarantee of correctness. This construction results from applying our compilers to a new STE scheme we design called the piggyback scheme. This scheme is a general-purpose STE construction (in the sense that it can encrypt any data structure) that leaks the search/query pattern but hides the response length on non-repeating queries.
Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and the memory consumption required to brute-force the key given the leakages as probability distributions over $d$ subkeys (usually key bytes). These estimates are particularly useful when the key is not reachable by exhaustive search. We propose a new rank-estimation method called PRank, which is conceptually simple and more time- and memory-efficient than previous proposals. Our main idea is to bound each subkey distribution by a Pareto-like function: since these are analytical functions, we can then estimate the rank by a closed formula. We evaluated the performance of PRank through extensive simulations based on two real SCA data corpora, and compared it to the currently best histogram-based algorithm. We show that PRank gives a good rank estimation with much improved time and memory efficiency, especially for large ranks: for ranks between $2^{80}$ and $2^{100}$, the PRank estimate is at most 10 bits above the histogram rank, and for ranks beyond $2^{100}$ it is only 4 bits above---yet PRank runs faster and uses negligible memory. PRank thus provides a new and interesting way to solve the rank-estimation problem, based on a reduction to analytical functions and the evaluation of a single closed formula, using negligible time and space.
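To make the estimated quantity concrete, the toy sketch below computes the exact rank of the correct key for two tiny subkey distributions; PRank's Pareto-bound closed formula (not reproduced here) is designed precisely to avoid this exhaustive enumeration at realistic key sizes.

# Toy illustration of key rank: the number of full keys whose product
# probability exceeds that of the correct key. Sizes are tiny on purpose.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
d, keyspace = 2, 16                          # 2 subkeys, 4 bits each (toy sizes)
probs = rng.dirichlet(np.ones(keyspace), d)  # per-subkey distributions from an attack
correct = (3, 7)                             # the real subkeys (unknown to the attacker)

p_correct = probs[0][correct[0]] * probs[1][correct[1]]
rank = sum(1 for ks in product(range(keyspace), repeat=d)
           if probs[0][ks[0]] * probs[1][ks[1]] > p_correct)
print('rank of the correct key:', rank)
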
ePrint Report Adaptive Garbled RAM from Laconic Oblivious Transfer Sanjam Garg, Rafail Ostrovsky, Akshayaram Srinivasan
We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a ``large'' persistent database which is stored on a server. Next, the client can provide multiple adaptively and adversarially chosen RAM garbled programs that execute and modify the stored database arbitrarily. The garbled database and the garbled program should reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were only known using indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant-round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting.
ePrint Report From Laconic Zero-Knowledge to Public-Key Cryptography Itay Berman, Akshay Degwekar, Ron D. Rothblum, Prashant Nalini Vasudevan
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK.

In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest-verifier and sound against computationally bounded cheating provers.

Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE.

We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments. We use the indifferentiability framework to formalize the intuition that a good AEAD scheme should have random ciphertexts subject to decryptability. Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards.

We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least $6$, needed for blockciphers, to only $3$ for encryption. This result is not too far from the theoretical optimum, as we give a lower bound that rules out the indifferentiability of any construction with fewer than $2$ rounds.
ePrint Report Quantum Lattice Enumeration and Tweaking Discrete Pruning Yoshinori Aono, Phong Q. Nguyen, Yixin Shen
Enumeration is a fundamental lattice algorithm used in challenge records. We show how to speed up enumeration on a quantum computer, which affects the security estimates of several lattice-based submissions to NIST: if $T$ is the number of operations of enumeration, our quantum enumeration runs in roughly $\sqrt{T}$ operations. This applies to the two most efficient forms of enumeration known in the extreme pruning setting: cylinder pruning but also discrete pruning introduced at Eurocrypt '17. Our results are based on recent quantum tree algorithms by Montanaro and Ambainis-Kokainis. The discrete pruning case requires a crucial tweak: we modify the preprocessing so that the running time can be rigorously proved to be essentially optimal, which was the main open problem in discrete pruning. We also introduce another tweak to solve the more general problem of finding close lattice vectors.
Emerging technologies that require small size and portability, such as Internet-of-Things devices, represent a good target for physical attacks, e.g., fault attacks. These attacks often aim at revealing secrets used in cryptographic algorithms, which are essential building blocks of communication protocols. Differential Fault Analysis (DFA) is considered the most popular fault analysis method. While there are techniques that provide some degree of fault analysis automation at the cipher level, it can be shown that when it comes to software implementations, there are new vulnerabilities which cannot be found by observing the cipher design specification.

This work bridges the gap by providing a fully automated way to carry out DFA on assembly implementations of symmetric block ciphers. We use a customized data flow graph to represent the program and develop a novel fault analysis methodology to capture the program behavior under faults. We establish an effective description of DFA as constraints that are passed to an SMT solver. We create a tool that takes assembly code as input, analyzes the dependencies among instructions, automatically attacks vulnerable instructions using an SMT solver, and outputs the attack details that recover the last round key (and possibly the earlier keys). We support our design with evaluations on the lightweight ciphers SIMON, SPECK, and PRIDE, and a current NIST standard, AES. By automated assembly analysis, we were able to find new efficient DFA attacks on SIMON, SPECK and PRIDE that exploit implementation-specific vulnerabilities, as well as a previously published DFA on AES. Moreover, we present a novel DFA on the multiplication operation that has never been shown for symmetric block ciphers before. Our experimental evaluation also shows reasonable execution times that scale to current cipher designs and easily outperform manual analysis.

We note that this is the first work that automatically carries out DFA on cipher implementations without any plaintext or ciphertext information and therefore, can be generally applied to any input data to the cipher.
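As a rough illustration of how a differential fault attack can be phrased as SMT constraints, the sketch below encodes a toy last-round fault on a 4-bit S-box with the z3-solver Python API and enumerates consistent key candidates; the paper's tool, its fault models and its data-flow analysis of real assembly code are considerably more involved, so this is only a hedged approximation of the general technique.

# Toy DFA as SMT constraints (z3-solver package), not the paper's tool.
from z3 import BitVec, BitVecVal, If, Solver, sat

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT 4-bit S-box

def sym_sbox(x):
    """Symbolic S-box lookup built from a chain of If expressions."""
    expr = BitVecVal(SBOX[-1], 4)
    for i in reversed(range(15)):
        expr = If(x == i, BitVecVal(SBOX[i], 4), expr)
    return expr

# Toy "last round": c = S(v) ^ k. The fault flips one bit of v before the S-box.
k_true, v_true, delta = 0x7, 0x3, 0x8
c       = SBOX[v_true] ^ k_true
c_fault = SBOX[v_true ^ delta] ^ k_true

k, v = BitVec('k', 4), BitVec('v', 4)
s = Solver()
s.add(sym_sbox(v) ^ k == c)                 # correct execution
s.add(sym_sbox(v ^ delta) ^ k == c_fault)   # faulty execution with known fault
while s.check() == sat:
    m = s.model()
    print('key candidate:', m[k])
    s.add(k != m[k])                        # enumerate all remaining candidates
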
ePrint Report Pseudorandom Quantum States Zhengfeng Ji, Yi-Kai Liu, Fang Song
We propose the concept of pseudorandom quantum states, which appear random to any quantum polynomial-time adversary. This offers a computational approximation to perfect randomness on quantum states (analogous to a cryptographic pseudorandom generator), as opposed to statistical notions of quantum pseudorandomness in the literature, such as quantum t-designs (analogous to t-wise independent distributions).

Under the assumption that quantum-secure one-way functions exist, we present efficient constructions of pseudorandom states, showing the feasibility of our definition. We then prove several basic properties of any pseudorandom states, which further back up our definition. First, we show a cryptographic no-cloning theorem: no efficient quantum algorithm can create additional copies from any polynomially many copies of pseudorandom states. Second, as expected for random quantum states, we show that pseudorandom quantum states are highly entangled on average. Finally, as a main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme, thanks to our cryptographic no-cloning theorem.
Tight security is increasingly gaining importance in real-world cryptography, as it allows cryptographic parameters to be chosen in a way that is supported by a security proof, without the need to sacrifice efficiency by compensating for the security loss of a reduction with larger parameters. However, for many important cryptographic primitives, including digital signatures and authenticated key exchange (AKE), we still lack tightly secure constructions that are suitable for real-world deployment.

We construct the first truly practical signature scheme with tight security in a real-world multi-user setting with adaptive corruptions. The scheme is based on a new way of applying the Fiat-Shamir approach to construct tightly-secure signatures from certain identification schemes.

Then we use this scheme as a building block to construct the first practical AKE protocol with tight security. It allows the establishment of a key within 1 RTT in a practical client-server setting, provides forward security, is simple and easy to implement, and thus very suitable for practical deployment. It is essentially the ``signed Diffie-Hellman'' protocol, but with an additional message, which is crucial to achieve tight security. This additional message is used to overcome a technical difficulty in constructing tightly-secure AKE protocols.

For a theoretically-sound choice of parameters and a moderate number of users and sessions, our protocol has comparable computational efficiency to the simple signed Diffie-Hellman protocol with EC-DSA, while for large-scale settings our protocol has even better computational performance, at moderately increased communication complexity.
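For orientation, the sketch below shows a baseline signed Diffie-Hellman exchange using X25519 and Ed25519 from the Python cryptography library; the paper's tightly secure protocol adds a further message and differs in important details, so this is only the conventional starting point, not the authors' construction.

# Baseline signed Diffie-Hellman sketch (cryptography library), for context only.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def pub_bytes(pk):
    return pk.public_bytes(serialization.Encoding.Raw,
                           serialization.PublicFormat.Raw)

# Long-term signing keys (assumed pre-distributed).
sig_A, sig_B = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# Ephemeral DH shares, each signed by its sender.
eph_A, eph_B = X25519PrivateKey.generate(), X25519PrivateKey.generate()
gA, gB = pub_bytes(eph_A.public_key()), pub_bytes(eph_B.public_key())
sigma_A, sigma_B = sig_A.sign(gA), sig_B.sign(gB)

# Each side verifies the peer's signature, then derives the shared key.
sig_A.public_key().verify(sigma_A, gA)   # done by B
sig_B.public_key().verify(sigma_B, gB)   # done by A
assert eph_A.exchange(eph_B.public_key()) == eph_B.exchange(eph_A.public_key())
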
ePrint Report Continuously Non-Malleable Codes in the Split-State Model from Minimal Assumptions Rafail Ostrovsky, Giuseppe Persiano, Daniele Venturi, Ivan Visconti
At ICS 2010, Dziembowski, Pietrzak and Wichs introduced the notion of *non-malleable codes*, a weaker form of error-correcting codes guaranteeing that the decoding of a tampered codeword either corresponds to the original message or to an unrelated value. Over the last few years, non-malleable codes have become established as one of the recently invented cryptographic primitives with the highest impact and potential, with very challenging open problems and applications.

In this work, we focus on so-called *continuously* non-malleable codes in the split-state model, as proposed by Faust et al. (TCC 2014), where a codeword is made of two shares and an adaptive adversary makes a polynomial number of attempts to tamper with the target codeword, where each attempt is allowed to modify the two shares independently (yet arbitrarily). Achieving continuous non-malleability in the split-state model has so far been very hard. Indeed, the only known constructions require strong setup assumptions (i.e., the existence of a common reference string) and strong complexity-theoretic assumptions (i.e., the existence of non-interactive zero-knowledge proofs and collision-resistant hash functions).

As our main result, we construct a continuously non-malleable code in the split-state model without setup assumptions, requiring only one-to-one one-way functions (i.e., essentially optimal computational assumptions). Our result introduces several new ideas that make progress towards understanding continuous non-malleability, and shows interesting connections with protocol-design and proof-approach techniques used in other contexts (e.g., look-ahead simulation in zero-knowledge proofs, non-malleable commitments, and leakage resilience).
ePrint Report Generic Attacks against Beyond-Birthday-Bound MACs Gaëtan Leurent, Mridul Nandi, Ferdinand Sibleyras
In this work, we study the security of several recent MAC constructions with provable security beyond the birthday bound. We consider block-cipher based constructions with a double-block internal state, such as SUM-ECBC, PMAC+, 3kf9, GCM-SIV2, and some variants (LightMAC+, 1kPMAC+). All these MACs have a security proof up to $2^{2n/3}$ queries, but there are no known attacks with less than $2^{n}$ queries.

We describe a new cryptanalysis technique for double-block MACs based on finding quadruples of messages with four pairwise collisions in halves of the state. We show how to detect such quadruples in SUM-ECBC, PMAC+, 3kf9, GCM-SIV2 and their variants with $\mathcal{O}(2^{3n/4})$ queries, and how to build a forgery attack with the same query complexity. The time complexity of these attacks is above $2^n$, but they show that the schemes do not reach full security in the information-theoretic model. Surprisingly, our attack on LightMAC+ also invalidates a recent security proof by Naito.

Moreover, we give a variant of the attack against SUM-ECBC and GCM-SIV2 with time and data complexity $\tilde{\mathcal{O}}(2^{6n/7})$. As far as we know, this is the first attack with complexity below $2^n$ against a deterministic beyond-birthday-bound secure MAC.

As a side result, we also give a birthday attack against 1kf9, a single-key variant of 3kf9 that was withdrawn due to issues with the proof.
ePrint Report Must the Communication Graph of MPC Protocols be an Expander? Elette Boyle, Ran Cohen, Deepesh Data, Pavel Hubacek
Secure multiparty computation (MPC) on incomplete communication networks has been studied within two primary models: (1) Where a partial network is fixed a priori, and thus corruptions can occur dependent on its structure, and (2) Where edges in the communication graph are determined dynamically as part of the protocol. Whereas a rich literature has succeeded in mapping out the feasibility and limitations of graph structures supporting secure computation in the fixed-graph model (including strong classical lower bounds), these bounds do not apply in the latter dynamic-graph setting, which has recently seen exciting new results, but remains relatively unexplored.

In this work, we initiate a similar foundational study of MPC within the dynamic-graph model. As a first step, we investigate the property of graph expansion. All existing protocols (implicitly or explicitly) yield communication graphs which are expanders, but it is not clear whether this is inherent.

Our results consist of two types (for constant fraction of corruptions):

* Upper bounds: We demonstrate secure protocols whose induced communication graphs are not expander graphs, within a wide range of settings (computational, information theoretic, with low locality, even with low locality and adaptive security) each assuming some form of input-independent setup.

* Lower bounds: In the setting without setup and adaptive corruptions, we demonstrate that for certain functionalities, no protocol can maintain a non-expanding communication graph against all adversarial strategies. Our lower bound relies only on protocol correctness (not privacy), and requires a surprisingly delicate argument.

More generally, we provide a formal framework for analyzing the evolving communication graph of MPC protocols, giving a starting point for studying the relation between secure computation and further, more general graph properties.
This note was originally written under the name ``On the Security of HMFEv'' and was submitted to PQCrypto 2018. The author was informed by the referees of his oversight of an ePrint work of the same name by Hashimoto, see ePrint article /2017/689/, that completely breaks HMFEv, rendering the result on HMFEv obsolete. Still, the author feels that the technique used here is interesting and that, at least in principle, this method could contribute to future cryptanalysis. Thus, with a change of title indicating the direction in which this work is leading, we present the original work with all of its oversights intact and with minimal correction (only references fixed).

At PQCRYPTO 2017, a new multivariate digital signature based on Multi-HFE and utilizing the vinegar modifier was proposed. The vinegar modifier increases the Q-rank of the central map, preventing a direct application of the MinRank attack that defeated Multi-HFE. The authors were, therefore, confident enough to choose aggressive parameters for the Multi-HFE component of the central map (with vinegar variables fixed). Their analysis indicated that the security of the scheme depends on the sum of the number of variables $k$ over the extension field and the number $v$ of vinegar variables with the individual values being unimportant as long as they are not ``too small.'' We analyze the consequences of this choice of parameters and derive some new attacks showing that the parameter $v$ must be chosen with care.
Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs (ICS '10), and their main application is the protection of cryptographic devices against tampering attacks on memory. In this work, we initiate a comprehensive study of non-malleable codes for the class of partial functions, which read/write an arbitrary subset of codeword bits of a specific cardinality. Our constructions are efficient in terms of information rate, while allowing the attacker to access asymptotically almost the entire codeword. In addition, they satisfy a notion which is stronger than non-malleability, that we call non-malleability with manipulation detection, guaranteeing that any modified codeword decodes to either the original message or to $\bot$. Finally, our primitive implies All-Or-Nothing Transforms (AONTs), and as a result our constructions yield efficient AONTs under standard assumptions (only one-way functions), which, to the best of our knowledge, was an open question until now. Beyond this, we present a number of additional applications of our primitive to tamper resilience.
CSIDH is a recent proposal by Castryck, Lange, Martindale, Panny and Renes for post-quantum non-interactive key-exchange. It is similar in design to a scheme by Couveignes, Rostovtsev and Stolbunov, but it replaces ordinary elliptic curves by supersingular elliptic curves, in order to make significant gains in time and key lengths.

Isogeny-based key exchange on ordinary elliptic curves can be targeted by a quantum subexponential hidden-shift algorithm found by Childs, Jao and Soukharev. Although CSIDH uses supersingular curves, it is analogous to the ordinary-curve case, hence this algorithm applies.

In the proposal, the authors suggest a choice of parameters that should ensure security against this.

In this paper, we show that those security parameters were too optimistic. Our result relies on two steps: first, we give a more precise complexity analysis of the hidden-shift algorithm in this context, which greatly reduces the number of group actions to compute; second, we show how to compute this group action efficiently.

Our computations show, for example, that the 128-bit classical, 64-bit quantum security parameters proposed actually offer at most $37$ bits of quantum security.

Finally, we extend our analysis to ordinary isogeny computations, and show that an instance proposed by De Feo, Kieffer and Smith and expected to offer $56$ bits of quantum security offers at most $41$ bits of quantum security.
In this paper, we propose a new assumption, Computational Learning With Rounding over rings, which is inspired by the computational Diffie-Hellman problem. Assuming the hardness of ring-LWE, we prove this problem is hard when the secret is small, uniform and invertible. From a theoretical point of view, we give examples of a key exchange scheme and a public-key encryption scheme, and prove the worst-case hardness for both schemes with the help of a random oracle. Our result improves both speed, as a result of not requiring Gaussian secrets or noise, and size, as a result of rounding. In practice, our result suggests that decisional ring-LWR based schemes, such as Saber, Round2 and Lizard, which are among the most efficient solutions to the NIST post-quantum cryptography competition, stem from a provably secure design. To the best of our knowledge, there were no hardness results on decisional ring-LWR with polynomial modulus prior to this work.
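The deterministic rounding step that distinguishes (ring-)LWR from (ring-)LWE can be sketched in a few lines; the ring, moduli and secret distribution below are simplified toy assumptions and do not match Saber, Round2 or Lizard.

# Toy ring-LWR sample: b = round(p/q * (a*s mod q)) mod p in Z_q[x]/(x^n + 1).
import numpy as np

n, q, p = 256, 2**13, 2**10          # ring degree and moduli (toy choices)
rng = np.random.default_rng(0)

def ring_mul(a, s, q):
    """Multiplication in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    res = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] += a[i] * s[j]
            else:
                res[k - n] -= a[i] * s[j]
    return res % q

a = rng.integers(0, q, n)            # public uniform ring element
s = rng.integers(-1, 2, n)           # small (ternary) secret
b = (ring_mul(a, s, q) * p + q // 2) // q % p   # deterministic rounding, no added noise
print(b[:8])
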
ePrint Report Monero - Privacy in the Blockchain Kurt M. Alonso, Jordi Herrera Joancomartí
Monero is a cryptocurrency that aims to provide transaction confidentiality and untraceability to its users. This is achieved through various cryptographic schemes, which allow concealing amounts and parties in transactions.

Unfortunately, there has been a severe lack of literature describing with sufficient precision the cryptographic schemes at hand.

With this report we aim to bridge this gap by supplying detailed descriptions of the algorithms used in Monero to warrant privacy to its users. Furthermore, we do so while minimizing the use of technical jargon, so as to appeal to a broad readership.
ePrint Report Cryptographic Constructions Supporting Implicit Data Integrity Michael Kounavis, David Durham, Sergej Deutsch, Antonios Papadimitriou, Amitabh Das
We study a new methodology supporting data integrity called 'implicit integrity' and present cryptographic constructions supporting it. Implicit integrity allows for corruption detection without producing, storing or verifying mathematical summaries of the content such as MACs, ICVs, etc. The main idea behind this methodology is that, whereas typical user data demonstrate patterns such as repeated bytes or words, decrypted data resulting from corrupted ciphertexts no longer demonstrate such patterns. Thus, by checking the entropy of some decrypted ciphertexts, corruption can potentially be detected.

We discuss that implicit integrity can be associated with a notion of security which is different from the typical requirement that the output of cryptographic systems should be indistinguishable from the output of a random permutation. The notion of security we discuss is that it should be computationally difficult for an adversary to corrupt some ciphertext so that the resulting plaintext demonstrates specific patterns. We further introduce two kinds of adversaries. First, an input-perturbing adversary performs content corruption attacks. Second, an oracle-replacing adversary performs content replay attacks. We discuss requirements for supporting implicit integrity in these two adversary models, and provide security bounds for a construction called IVP, a three-level confusion-diffusion network which can support implicit integrity and is inexpensive to implement.
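The intuition can be illustrated with a short sketch (using AES in ECB mode from the pycryptodome library as a stand-in; the paper's IVP construction is different): a patterned plaintext block has low byte entropy, while the decryption of a corrupted ciphertext block looks essentially random.

# Entropy check on decrypted data as a corruption signal (illustration only).
from collections import Counter
import math, os
from Crypto.Cipher import AES

def byte_entropy(block):
    """Shannon entropy (bits per byte) of a 16-byte block."""
    counts = Counter(block)
    return -sum(c/len(block) * math.log2(c/len(block)) for c in counts.values())

key = os.urandom(16)
plaintext = b'AAAAAAAA00000000'            # patterned user data: low entropy
ct = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
corrupted = bytes([ct[0] ^ 1]) + ct[1:]    # attacker flips one ciphertext bit

pt_ok  = AES.new(key, AES.MODE_ECB).decrypt(ct)
pt_bad = AES.new(key, AES.MODE_ECB).decrypt(corrupted)
print(byte_entropy(pt_ok), byte_entropy(pt_bad))  # low vs. close-to-maximal entropy
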
We present a quantum polynomial time attack against the GMMSSZ branching program obfuscator of Garg et al. (TCC'16), when instantiated with the GGH13 multilinear map of Garg et al. (EUROCRYPT'13). This candidate obfuscator was proved secure in the weak multilinear map model introduced by Miles et al. (CRYPTO'16). Our attack uses the short principal ideal solver of Cramer et al. (EUROCRYPT'16), to recover a secret element of the GGH13 multilinear map in quantum polynomial time. We then use this secret element to mount a (classical) polynomial time mixed-input attack against the GMMSSZ obfuscator. The main result of this article can hence be seen as a classical reduction from the security of the GMMSSZ obfuscator to the short principal ideal problem (the quantum setting is then only used to solve this problem in polynomial time).

As an additional contribution, we explain how the same ideas can be adapted to mount a quantum polynomial time attack against the DGGMM obfuscator of D\"ottling et al. (ePrint 2016), which was also proved secure in the weak multilinear map model.
ePrint Report Ring packing and amortized FHEW bootstrapping Daniele Micciancio, Jessica Sorrell
The FHEW fully homomorphic encryption scheme (Ducas and Micciancio, Eurocrypt 2015) offers very fast homomorphic NAND-gate computations (on encrypted data) and a relatively fast refreshing procedure that allows the homomorphic evaluation of arbitrary NAND Boolean circuits. Unfortunately, the refreshing procedure needs to be executed after every single NAND computation, and each refreshing operates on a single encrypted bit, greatly decreasing the overall throughput of the scheme. We give a new refreshing procedure that simultaneously refreshes $n$ FHEW ciphertexts, at a cost comparable to a single-bit FHEW refreshing operation. As a result, the cost of each refreshing is amortized over $n$ encrypted bits, improving the throughput for the homomorphic evaluation of Boolean circuits roughly by a factor $n$.
ePrint Report Polynomial direct sum masking to protect against both SCA and FIA Claude Carlet, Abderrahman Daif, Sylvain Guilley, Cédric Tavernier
Side Channel Attacks (SCA) and Fault Injection Attacks (FIA) allow an opponent to gain partial access to the internal behavior of the hardware. Since the end of the nineties, many works have shown that this type of attack constitutes a serious threat to cryptosystems implemented in embedded devices. In the state of the art, there exist several countermeasures to protect symmetric encryption (especially AES-128). Most of them protect only against one of these two attacks (SCA or FIA). A method called ODSM has been proposed to withstand both SCA and FIA, but its implementation across the whole algorithm remains a major open problem when no particular hardware protection is possible. In the present paper, we propose a practical masking scheme instantiating ODSM which makes it possible to protect symmetric encryption against these two attacks.
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus allows to instantiate the aforementioned applications in a post-quantum secure manner.

Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
