IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
04 June 2018
Gilles Barthe, Sonia Belaïd, Pierre-Alain Fouque, Benjamin Grégoire
Masking is a popular countermeasure for protecting both hardware and software implementations against differential power analysis. A main strength of software masking is that its security guarantees can be captured formally through well-established models. The existence of such models, and their relation to probabilistic information flow studied in formal methods, has been instrumental to the emergence of fully automated methods for analyzing masked implementations. In particular, state-of-the-art tools such as maskVerif (Barthe et al., EUROCRYPT 2015) have been used successfully for analyzing masked implementations at high orders. In contrast, security models for hardware implementations have remained somewhat less developed, and no prior verification tool has accommodated hardware-specific sources of vulnerabilities such as glitches.
Recently, Bloem et al. formalized the security of masked hardware implementations against glitches and gave a method based on SAT solvers for verifying security automatically. However, their method only works for small functionalities and low orders.
In this paper, we extend the maskVerif tool (Barthe et al., EUROCRYPT 2015) with a unified framework to efficiently and formally verify both software and hardware implementations. In this process, we introduce a simple but expressive intermediate language. Our representation requires that each instruction be instrumented with leakage expressions that may depend on the expressions that arise in the instruction and on previous computation. Despite its simplicity, our intermediate representation covers a broad range of models from the literature; moreover, it is easily amenable to efficient formal verification.
Our results significantly improve over prior work, both in terms of coverage and efficiency. In particular, we demonstrate that our tool is able to analyze examples from Bloem et al. (EUROCRYPT 2018) much faster and at high orders.
Emmanuelle Anceaume, Antonella Del Pozzo, Romaric Ludinard, Maria Potop-Butucaru, Sara Tucci-Piergiovanni
The presented work continues the line of recent distributed computing community efforts dedicated to the theoretical aspects of blockchains. This paper is the first to specify blockchains as a composition of abstract data types, together with a hierarchy of consistency criteria that formally characterizes the histories admissible for distributed programs that use them. Our work is based on an original oracle-based construction that, along with new consistency definitions, captures the eventual convergence process in blockchain systems. The paper also presents results on the implementability of the proposed abstractions and maps representative existing blockchains, from both academia and industry, into our framework.
Carsten Baum, Jonathan Bootle, Andrea Cerulli, Rafael del Pino, Jens Groth, Vadim Lyubashevsky
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $p$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $N$ gates, the communication complexity of our protocol is $O\left(\sqrt{N\lambda\log^3{N}}\right)$, where $\lambda$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damg{\aa}rd et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.
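As a rough, back-of-the-envelope illustration of the asymptotics (ignoring the constants hidden in the $O$-notation), a circuit with $N = 2^{30}$ gates at security level $\lambda = 128$ gives a communication volume of roughly $\sqrt{N\lambda\log^3 N} = \sqrt{2^{30}\cdot 2^{7}\cdot 30^{3}} \approx 2^{26}$ elements, compared to the roughly $2^{30}$ elements a linear-size argument would require.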
Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a `proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not.
This yields full PoWs that are provably hard from worst-case assumptions (previously, PoWs were based either on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure that enables extra properties. Namely, we show that the PoWs of Ball et al. (STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.
Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
Phillip Rogaway, Yusi Zhang
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or IND|C. Given games $G$ and $H$ and a correctness condition $C$, we define an advantage measure $\mathrm{Adv}^{\mathrm{indc}}_{G,H,C}$ wherein $G$/$H$ distinguishing attacks are effaced to the extent that they are inevitable due to $C$. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions.
We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated-encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
Shashank Agrawal, Chaya Ganesh, Payman Mohassel
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former is highly efficient for proving algebraic statements while the latter is superior for arithmetic representations.
Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that compose algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. It leads to proofs for statements of the form ``knowledge of $x$ such that $\mathrm{SHA}(g^x)=y$'' for some public $y$, where the prover performs 500 times fewer exponentiations than with a QAP-based SNARK, at the cost of increasing the proof size to 2404 group and field elements. Applied to anonymous credentials, our techniques yield 8 times fewer exponentiations for the prover at the cost of increasing the proof size to 298 elements.
Viet Tung Hoang, Stefano Tessaro, Ni Trieu
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS '16; Durak and Vaudenay, CRYPTO '17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left open the question of whether practical schemes with high security exist.
In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is a set of new known-plaintext message-recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard) which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to how FF3 handles odd-length domains, which leads to a substantial speedup in our attacks.
We also show the first attacks against non-Feistel based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds in the Feistel construction with a pairwise-independent permutation, following the paradigm by Naor and Reingold (JoC, '99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications.
All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
Sanjam Garg, Mohammad Hajiabadi, Mohammad Mahmoody, Ameer Mohammed
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC'89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques for achieving this goal.
One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao's garbled circuit technique [FOCS'86]. As for the non-black-box power of this technique, the recent work of Dottling and Garg [CRYPTO'17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption.
We prove that garbling of circuits that have OWF (or even random oracle) gates in them is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.
Gustavo Banegas, Ricardo Custódio, Daniel Panario
We introduce a new class of irreducible pentanomials over $\mathbb{F}_2$ of the form $f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to define the finite field extension of degree $m$. We give the exact number of operations required for computing the reduction modulo $f$. We also provide a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$ combined with our reduction process. We give the total cost of the multiplier and find that the bit-parallel multiplier defined by this new class of polynomials has improved XOR and AND complexity. Our multiplier has a time delay comparable to that of other multipliers based on the Karatsuba algorithm.
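As a rough illustration of the arithmetic involved, the following minimal Python sketch performs carry-less multiplication in $\mathbb{F}_2[x]$ followed by reduction modulo a pentanomial of the stated shape. It is not the paper's bit-parallel circuit, and the toy parameters $b=3$, $c=2$ are chosen arbitrarily here (the resulting pentanomial is not claimed to be irreducible):

# Polynomials over GF(2) are represented as Python integers (bit i = coefficient of x^i).

def clmul(a, b):
    """Carry-less multiplication in GF(2)[x]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce_pentanomial(p, b, c):
    """Reduce p modulo f(x) = x^(2b+c) + x^(b+c) + x^b + x^c + 1."""
    m = 2 * b + c
    # Use x^m = x^(b+c) + x^b + x^c + 1 (mod f), from the highest bit down.
    for i in range(p.bit_length() - 1, m - 1, -1):
        if (p >> i) & 1:
            s = i - m
            p ^= (1 << i)  # clear the coefficient of x^i
            p ^= (1 << (b + c + s)) ^ (1 << (b + s)) ^ (1 << (c + s)) ^ (1 << s)
    return p

def field_mul(u, v, b, c):
    """Multiply two field elements of GF(2^m), m = 2b + c."""
    return reduce_pentanomial(clmul(u, v), b, c)

print(hex(field_mul(0x53, 0xCA, 3, 2)))  # toy example with m = 8

The paper's contributions concern the exact operation counts and the bit-parallel Karatsuba-based multiplier; the sketch above only demonstrates the functional behaviour of the reduction.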
Joseph Jaeger, Igors Stepanovs
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
Gilad Asharov, Naomi Ephraim, Ilan Komargodski, Rafael Pass
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.
In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters.
We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.
First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.
Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.
Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
Seny Kamara, Tarik Moataz, Olga Ohrimenko
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. One aspect of STE that is still poorly understood is its leakage. In this work, we describe a general framework to design STE schemes that do not leak the query/search pattern (i.e., if and when a query was previously made).
Our framework consists of two compilers. The first can be used to make any dynamic STE scheme rebuildable in the sense that the encrypted structures it produces can be rebuilt efficiently using only O(1) client storage. The second transforms any rebuildable scheme that leaks the query/search pattern into a new scheme that does not. Our second compiler is a generalization of Goldreich and Ostrovsky's square root oblivious RAM (ORAM) solution but does not make use of black-box ORAM simulation. We show that our framework produces STE schemes with query complexity that is asymptotically better than ORAM simulation in certain (natural) settings and comparable to special-purpose oblivious data structures.
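As standard background (not a contribution of this paper), Goldreich and Ostrovsky's square-root solution stores the $N$ encrypted items together with $\sqrt{N}$ dummies in a randomly permuted array alongside a cache of size $\sqrt{N}$: each access scans the cache and touches one permuted location, and after $\sqrt{N}$ accesses the array is obliviously reshuffled, for an amortized overhead of roughly $O(\sqrt{N}\cdot\mathrm{polylog}(N))$ per access.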
We use our framework to design a new STE scheme that is ``almost'' zero-leakage in the sense that it reveals an, intuitively speaking, small amount of information. We also show how the scheme can be used to achieve zero-leakage queries when one can tolerate a probabilistic guarantee of correctness. This construction results from applying our compilers to a new STE scheme we design called the piggyback scheme. This scheme is a general-purpose STE construction (in the sense that it can encrypt any data structure) that leaks the search/query pattern but hides the response length on non-repeating queries.
Liron David, Avishai Wool
Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and the memory consumption required to brute-force the key given the leakages as probability distributions over $d$ subkeys (usually key bytes). These estimations are particularly useful when the key is not reachable by exhaustive search. We propose a new method for rank estimation, called PRank, that is conceptually simple and more time- and memory-efficient than previous proposals. Our main idea is to bound each subkey distribution by a Pareto-like function: since these are analytical functions, we can then estimate the rank by a closed formula. We evaluated the performance of PRank through extensive simulations based on two real SCA data corpora, and compared it to the currently best histogram-based algorithm. We show that PRank gives a good rank estimation with much improved time and memory efficiency, especially for large ranks: for ranks between $2^{80}$ and $2^{100}$, the PRank estimation is at most 10 bits above the histogram rank, and for ranks beyond $2^{100}$ it is only 4 bits above the histogram rank, while running faster and using negligible memory. PRank thus offers a new and interesting way to solve the rank estimation problem, based on a reduction to analytical functions and the evaluation of a single closed formula, using negligible time and space.
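To illustrate the closed-formula idea on a toy single-distribution case (a sketch of the principle only, not the bound actually used by PRank): if the sorted probabilities of a single subkey are bounded by a Pareto-like function $p(r) \le a\,r^{-b}$ with $a, b > 0$, then the number of ranks $r$ at which the bound exceeds a threshold $t$ is obtained analytically from $a\,r^{-b} \ge t \iff r \le (a/t)^{1/b}$, so counting candidates above a probability threshold reduces to evaluating one closed-form expression instead of building and convolving histograms.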
Sanjam Garg, Rafail Ostrovsky, Akshayaram Srinivasan
We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a ``large'' persistent database which is stored on a server. Next, the client can provide multiple adaptively and adversarially chosen garbled RAM programs that execute on and modify the stored database arbitrarily. The garbled database and the garbled programs should reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program, respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were only known using indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant-round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting.
Itay Berman, Akshay Degwekar, Ron D. Rothblum, Prashant Nalini Vasudevan
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK.
In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest-verifier and sound against computationally bounded cheating provers.
Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE.
We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
Manuel Barbosa, Pooya Farshim
We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments.
We use the indifferentiability framework to formalize the intuition that a good AEAD scheme should have random ciphertexts subject to decryptability.
Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards.
We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least $6$, needed for blockciphers, to only 3 for encryption. This result is not too far off the theoretical optimum as we give a lower bound that rules out the indifferentiability of any construction with less than 2 rounds.
Yoshinori Aono, Phong Q. Nguyen, Yixin Shen
Enumeration is a fundamental lattice algorithm used in challenge records.
We show how to speed up enumeration on a quantum computer,
which affects the security estimates of several lattice-based submissions to NIST:
if $T$ is the number of operations of enumeration,
our quantum enumeration runs in roughly $\sqrt{T}$ operations.
This applies to the two most efficient forms of enumeration known in the extreme pruning setting:
cylinder pruning but also discrete pruning introduced at Eurocrypt '17.
Our results are based on recent quantum tree algorithms by Montanaro and Ambainis-Kokainis.
The discrete pruning case requires a crucial tweak:
we modify the preprocessing so that the running time can be rigorously proved to be essentially optimal,
which was the main open problem in discrete pruning.
We also introduce another tweak to solve the more general problem of finding close lattice vectors.
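As a hedged illustration of what the quadratic speedup means for concrete estimates (ignoring polynomial overheads and the cost model of quantum hardware): if a parameter set is sized so that classical enumeration requires about $T = 2^{128}$ operations, then $\sqrt{T} = \sqrt{2^{128}} = 2^{64}$, i.e., the enumeration-based security estimate drops by roughly a factor of two in bits.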
Xiaolu Hou, Jakub Breier, Fuyuan Zhang, Yang Liu
Emerging technologies with the requirement of small size and portability, such as Internet-of-Things devices, represent a good target for physical attacks, e.g., fault attacks. These attacks often aim at revealing secrets used in cryptographic algorithms, which are an essential building block of communication protocols. Differential Fault Analysis (DFA) is considered the most popular fault analysis method. While there are techniques that provide some degree of fault analysis automation at the cipher level, it can be shown that software implementations introduce new vulnerabilities which cannot be found by observing the cipher design specification alone.
This work bridges the gap by providing a fully automated way to carry out DFA on assembly implementations of symmetric block ciphers. We use a customized data flow graph to represent the program and develop a novel fault analysis methodology to capture the program behavior under faults. We establish an effective description of DFA as constraints that are passed to an SMT solver. We create a tool that takes assembly code as input, analyzes the dependencies among instructions, automatically attacks vulnerable instructions using the SMT solver, and outputs the attack details that recover the last round key (and possibly the earlier keys). We support our design with evaluations on the lightweight ciphers SIMON, SPECK, and PRIDE, and a current NIST standard, AES. By automated assembly analysis, we were able to find new efficient DFA attacks on SIMON, SPECK and PRIDE that exploit implementation-specific vulnerabilities, as well as a previously published DFA on AES. Moreover, we present a novel DFA on the multiplication operation, which has never been shown for symmetric block ciphers before. Our experimental evaluation also shows reasonable execution times that scale to current cipher designs and easily outclass manual analysis.
We note that this is the first work that automatically carries out DFA on cipher implementations without any plaintext or ciphertext information and therefore, can be generally applied to any input data to the cipher.
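To convey the flavor of encoding a fault differential as SMT constraints, here is a minimal, hypothetical Python sketch using the z3 bindings and a 4-bit toy S-box; it is not the authors' tool and does not model a real cipher or a realistic fault model:

from z3 import BitVec, BitVecVal, Solver, If, sat

# Toy 4-bit S-box (for illustration only).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sym_sbox(x):
    """Symbolic table lookup built from nested If expressions."""
    expr = BitVecVal(SBOX[0], 4)
    for i in range(1, 16):
        expr = If(x == i, BitVecVal(SBOX[i], 4), expr)
    return expr

# Observed behaviour: correct output c and faulty output c_f, where the fault
# flips one input bit (delta) before the S-box:
#   c = SBOX[x ^ k]    c_f = SBOX[x ^ delta ^ k]
x, delta = 0x7, 0x1       # known input and injected fault difference
k_true = 0x9              # unknown to the attacker, used only to simulate the leakage
c = SBOX[x ^ k_true]
c_f = SBOX[x ^ delta ^ k_true]

k = BitVec('k', 4)
s = Solver()
s.add(sym_sbox(x ^ k) == c)
s.add(sym_sbox(x ^ delta ^ k) == c_f)

while s.check() == sat:
    cand = s.model()[k]
    print('candidate key nibble:', cand)
    s.add(k != cand)      # exclude this candidate and search for more

The solver enumerates every key nibble consistent with the correct/faulty output pair; in the paper's setting, analogous constraints are derived automatically from the assembly-level data flow graph rather than written by hand.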
Zhengfeng Ji, Yi-Kai Liu, Fang Song
We propose the concept of pseudorandom quantum states, which appear random to any quantum polynomial-time adversary. This offers a computational approximation to perfect randomness on quantum states (analogous to a cryptographic pseudorandom generator), as opposed to some statistical notion of quantum pseudorandomness in the literature, such as quantum t-designs (analogous to t-wise independent distributions).
Under the assumption that quantum-secure one-way functions exist, we present efficient constructions of pseudorandom states, showing the feasibility of our definition. We then prove several basic properties of any pseudorandom states, which further back up our definition. First, we show a cryptographic no-cloning theorem: no efficient quantum algorithm can create additional copies of a pseudorandom state from polynomially many given copies. Second, as expected for random quantum states, we show that pseudorandom quantum states are highly entangled on average. Finally, as a main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme, thanks to our cryptographic no-cloning theorem.
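For intuition, one natural candidate family in this spirit (an illustrative sketch assuming a quantum-secure pseudorandom function $\mathrm{PRF}_k$ mapping $n$-bit strings to $\mathbb{Z}_N$, not necessarily the paper's exact construction) attaches pseudorandom phases to the uniform superposition: $|\phi_k\rangle = 2^{-n/2} \sum_{x \in \{0,1\}^n} \omega_N^{\mathrm{PRF}_k(x)} |x\rangle$ with $\omega_N = e^{2\pi i/N}$; the intended guarantee is that distinguishing polynomially many copies of $|\phi_k\rangle$ from copies of a Haar-random state would require breaking the PRF.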
Kristian Gjøsteen, Tibor Jager
Tight security is increasingly gaining importance in real-world cryptography, as it allows cryptographic parameters to be chosen in a way that is supported by a security proof, without the need to sacrifice efficiency by compensating for the security loss of a reduction with larger parameters. However, for many important cryptographic primitives, including digital signatures and authenticated key exchange (AKE), we are still lacking constructions that are suitable for real-world deployment.
We construct the first truly practical signature scheme with tight security in a real-world multi-user setting with adaptive corruptions. The scheme is based on a new way of applying the Fiat-Shamir approach to construct tightly-secure signatures from certain identification schemes.
Then we use this scheme as a building block to construct the first practical AKE protocol with tight security. It allows the establishment of a key within 1 RTT in a practical client-server setting, provides forward security, is simple and easy to implement, and thus very suitable for practical deployment. It is essentially the ``signed Diffie-Hellman'' protocol, but with an additional message, which is crucial to achieve tight security. This additional message is used to overcome a technical difficulty in constructing tightly-secure AKE protocols.
For a theoretically-sound choice of parameters and a moderate number of users and sessions, our protocol has comparable computational efficiency to the simple signed Diffie-Hellman protocol with EC-DSA, while for large-scale settings our protocol has even better computational performance, at moderately increased communication complexity.