International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.


30 March 2017
ePrint Report Implementation and Evaluation of Improved Gaussian Sampling for Lattice Trapdoors Kamil Doruk Gür, Yuriy Polyakov, Kurt Rohloff, Gerard W. Ryan, Erkay Savaş
We report on our implementation of a new Gaussian sampling algorithm for lattice trapdoors. Lattice trapdoors are used in a wide array of lattice-based cryptographic schemes, including digital signatures, attribute-based encryption, program obfuscation, and others. Our implementation provides Gaussian sampling for trapdoor lattices with prime moduli, and supports both single- and multi-threaded execution. We experimentally evaluate our implementation through its use in the GPV hash-and-sign digital signature scheme as a benchmark. We compare our design and implementation with prior work reported in the literature. Evaluation shows that our implementation 1) has smaller space requirements and faster runtime, 2) does not require multi-precision floating-point arithmetic, and 3) can be used for a broader range of cryptographic primitives than previous implementations.
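At the heart of any such trapdoor sampler is a sampler for the discrete Gaussian distribution over the integers. The following minimal Python sketch (ours, not the paper's optimized algorithm, which notably avoids multi-precision floating-point arithmetic) samples from $D_{\mathbb{Z},\sigma,c}$ by rejection over a truncated support:

import math, random

def sample_z(sigma, center=0.0, tail=12):
    # Rejection sampling from the discrete Gaussian D_{Z,sigma,c}:
    # draw z uniformly from a truncated range and accept with
    # probability rho(z) = exp(-(z - c)^2 / (2 sigma^2)).
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)
        if random.random() < math.exp(-((z - center) ** 2) / (2 * sigma * sigma)):
            return z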
Side-channel analysis attacks, particularly power analysis attacks, have become one of the major threats that hardware designers have to deal with. To defeat them, the majority of known concepts are based on masking, hiding, or rekeying (or a combination thereof). This work deals with a hiding scheme, more precisely a power-equalization technique, which is ideally supposed to make the power consumption of the device independent of its processed data. We propose and practically evaluate a novel construction dedicated to Xilinx FPGAs that outperforms the state of the art with respect to both the achieved security level and the resource overhead.
Applying random and uniform masks to the processed intermediate values of cryptographic algorithms is arguably the most common countermeasure to thwart side-channel analysis attacks. So-called masking schemes exist in various shapes but are mostly used to prevent side-channel leakages up to a certain statistical order. Thus, to learn any information about the computations involving the key, a side-channel adversary has to estimate the higher-order statistical moments of the leakage distributions. However, the complexity of this approach increases exponentially with the statistical order to be estimated, and the precision of the estimation is extremely sensitive to the noise level. In this work we present an alternative procedure for exploiting higher-order leakages which stands out for its simplicity and effectiveness. Our approach, which focuses on (but is not limited to) univariate leakages of hardware masking schemes, is based on categorizing the power traces according to the distribution of leakage points. In particular, at each sample point an individual subset of traces is considered to mount ordinary first-order attacks. We present the theoretical concept of our approach based on simulated traces and examine its efficiency on noisy real-world measurements taken from a first-order secure threshold implementation of the block cipher PRESENT-80, implemented on a 150nm CMOS ASIC prototype chip. Our analyses verify that the proposed technique is indeed a worthy alternative to conventional higher-order attacks and suggest that it might reduce the sensitivity of higher-order evaluations to the noise level.
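The following numpy sketch illustrates the categorization step in a deliberately simplified form (a crude median split per sample point; the paper's subset selection is more refined): at each sample point, subsets of traces are formed according to the leakage distribution at that point, and an ordinary first-order CPA is mounted within each subset.

import numpy as np

def subset_first_order_cpa(traces, hyp):
    # traces: (n_traces, n_samples) power measurements;
    # hyp: (n_keys, n_traces) hypothetical leakage per key guess.
    n_traces, n_samples = traces.shape
    scores = np.zeros((hyp.shape[0], n_samples))
    for t in range(n_samples):
        med = np.median(traces[:, t])
        # two crude categories per sample point: below/above the median
        for idx in (np.flatnonzero(traces[:, t] <= med),
                    np.flatnonzero(traces[:, t] > med)):
            if len(idx) < 2 or traces[idx, t].std() == 0:
                continue
            for k in range(hyp.shape[0]):
                if hyp[k, idx].std() == 0:
                    continue
                r = abs(np.corrcoef(traces[idx, t], hyp[k, idx])[0, 1])
                scores[k, t] = max(scores[k, t], r)
    return scores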
We investigate the post-quantum security of hash functions based on the sponge construction. A crucial property for hash functions in the post-quantum setting is the collapsing property (a strengthening of collision-resistance). We show that the sponge construction is collapsing (and in consequence quantum collision-resistant) under suitable assumptions about the underlying block function. In particular, if the block function is a random function or a (non-invertible) random permutation, the sponge construction is collapsing.
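For reference, the sponge construction itself is simple. The toy Python sketch below absorbs and squeezes through a block function built from SHA-256 (a stand-in for the random function or permutation the collapsing result assumes, with simplified padding; not a secure design in itself):

import hashlib

RATE, CAPACITY = 8, 24          # toy parameters; the state is 32 bytes

def block_function(state: bytes) -> bytes:
    # stand-in for the underlying random function/permutation
    return hashlib.sha256(state).digest()[:RATE + CAPACITY]

def sponge_hash(msg: bytes, out_len=32) -> bytes:
    state = bytes(RATE + CAPACITY)
    msg = msg + b"\x80" + bytes((-len(msg) - 1) % RATE)  # simplified padding
    for i in range(0, len(msg), RATE):                   # absorb phase
        block = msg[i:i + RATE]
        state = bytes(a ^ b for a, b in zip(state[:RATE], block)) + state[RATE:]
        state = block_function(state)
    out = b""
    while len(out) < out_len:                            # squeeze phase
        out += state[:RATE]
        state = block_function(state)
    return out[:out_len]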
ePrint Report Practical Secure Aggregation for Privacy Preserving Machine Learning Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth
We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and malicious settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73\times$ communication expansion for $2^{10}$ users and $2^{20}$-dimensional vectors, and $1.98\times$ expansion for $2^{14}$ users and $2^{24}$-dimensional vectors over sending data in the clear.
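The core cancellation trick can be sketched in a few lines of Python (hypothetical names; the real protocol additionally secret-shares the pairwise seeds so the sum survives dropouts, and adds further masking for malicious security):

import random

def aggregate(inputs, modulus=2**16, seed_base=1):
    # inputs: dict user_id (int) -> vector of ints mod `modulus`.
    # Each pair (u, v) expands a shared seed into a mask that the
    # smaller id adds and the larger id subtracts, so all masks
    # cancel in the server's sum.
    users = sorted(inputs)
    dim = len(inputs[users[0]])
    masked = {}
    for u in users:
        vec = list(inputs[u])
        for v in users:
            if v == u:
                continue
            rng = random.Random(seed_base + 1000003 * min(u, v) + max(u, v))
            sign = 1 if u < v else -1
            vec = [(x + sign * rng.randrange(modulus)) % modulus for x in vec]
        masked[u] = vec
    # the server sees only the masked vectors
    return [sum(masked[u][i] for u in users) % modulus for i in range(dim)]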
For a linear function $f$, a vector $\mathbf x$ with small coefficients, and a vector $y=f(\mathbf x)$, we would like to be able to give a zero-knowledge proof of knowledge of an $\mathbf x'$ with small coefficients that satisfies $f(\mathbf x')=y$. This is a common scenario in lattice-based cryptography, and there is currently no satisfactory solution for this problem. All known protocols are built via the repetition of a basic protocol that only has constant ($1/2$ or $2/3$) soundness error. This implies that the communication complexity of the final protocol will be at least a factor of $k$ larger than that of the basic one, where $k$ is the security parameter.

One can do better if one considers simultaneously proving the knowledge of many instances of the above linear equation. The protocol that has the smallest amortized communication complexity while achieving close-to-optimal slack (i.e. the ratio between the coefficients in the secret and those that can be extracted from the proof) is due to Cramer et al. (Eurocrypt '17) which builds on an earlier work of Baum et al. (Crypto '16). The main downside of this protocol is that the amortization only kicks in when the number of equations is rather large -- $4k^2$. This means that for $k=128$, it is only truly optimal when one has more than $2^{16}$ equations to prove. The aforementioned work of Cramer et al. also shows how to achieve a protocol requiring $o(k^2)$ samples, but it is only applicable for much larger values of $k$ and the number of required samples ends up being larger than $2^{16}$.

The main result of our work is reducing the concrete minimal number of equations required for the amortization, while keeping the communication complexity almost unchanged. The cost of this is an increase in the running time of the zero-knowledge proof. More specifically, we show that one can decrease the required number of equations by a factor of $\Omega(\log^2{\alpha})$ at the cost of increasing the running time by a factor of $\Omega(\alpha)$. For example, increasing the running time by a factor of $8$ allows us to decrease the required number of samples from $66000$ to $4500$ -- a factor of $14$. As a side benefit, the slack of our protocol decreases by a factor of $\log{\alpha}$ as well.

We also show that in the case that $f$ is a function over the polynomial ring $\mathbb Z[X]/(X^d+1)$ and we would like to give a proof of knowledge of an $\mathbf x'$ with small coefficients such that $f(\mathbf x')=2y$, then the number of samples needed for amortization is even lower. Without any trade-offs in the running time, our algorithm requires around $2000$ samples, and for the same factor $8$ increase in the running time, the requirement goes down to $850$.
ePrint Report Post-Quantum Zero-Knowledge and Signatures from Symmetric-Key Primitives Melissa Chase, David Derler, Steven Goldfeder, Claudio Orlandi, Sebastian Ramacher, Christian Rechberger, Daniel Slamanig, Greg Zaverucha
We propose a new class of post-quantum digital signature schemes that (a) derive their security entirely from the security of symmetric-key primitives believed to be quantum-secure, (b) have extremely small keypairs, and (c) are highly parametrizable.

In our signature constructions, the public key is an image $y=f(x)$ of a one-way function $f$, and the secret key is $x$. A signature is a non-interactive zero-knowledge proof of knowledge of $x$ that incorporates a message to be signed. For this proof, we leverage recent progress by Giacomelli et al. (USENIX'16) in constructing an efficient sigma protocol for statements over general circuits. We improve this sigma protocol to reduce proof sizes by a factor of two, at no additional computational cost. While this is of independent interest, as it yields more compact proofs for any circuit, it also decreases our signature sizes.

We consider two possibilities for making the proof non-interactive: the Fiat-Shamir transform and Unruh's transform (EUROCRYPT'12,'15,'16). The former yields smaller signatures, while the latter has a security analysis in the quantum-accessible random oracle model. By customizing Unruh's transform to our application, we reduce its overhead to 1.6x compared to the Fiat-Shamir transform, which does not have a rigorous post-quantum security analysis.

We implement and benchmark both approaches and explore possible choices of $f$. Taking advantage of the recent trend towards practical symmetric ciphers with a particularly low number of multiplications, we end up using LowMC.
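The Fiat-Shamir half of the construction is easy to sketch. The snippet below is a generic sketch over a hypothetical sigma-protocol interface (not the paper's concrete proof system): the challenge is derived by hashing the commitment together with the message.

import hashlib

def fs_sign(sigma, sk, msg: bytes):
    # sigma: any object exposing commit(sk) -> (a: bytes, state),
    # respond(state, e) -> z, verify(pk, a, e, z) -> bool, and
    # n_challenges -- an assumed interface for illustration only.
    a, state = sigma.commit(sk)
    e = int.from_bytes(hashlib.sha256(a + msg).digest(), "big") % sigma.n_challenges
    return a, sigma.respond(state, e)

def fs_verify(sigma, pk, msg: bytes, sig) -> bool:
    a, z = sig
    e = int.from_bytes(hashlib.sha256(a + msg).digest(), "big") % sigma.n_challenges
    return sigma.verify(pk, a, e, z)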
27 March 2017
The invariant subspace attack is a novel cryptanalytic technique which breaks several recently proposed lightweight block ciphers. In this paper, we propose a new method to bound the dimension of some invariant subspaces in a class of lightweight block ciphers which have a structure similar to the AES but with 4-bit S-boxes. Under assumptions on the diffusion layer, the dimension of any invariant subspace is at most 32 when the inputs to the S-boxes are linearly independent. This observation brings new insights about the invariant subspace attack, as well as lightweight countermeasures to enhance resistance against it.
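For toy block sizes, checking whether an affine subspace is invariant under a round function can be done by brute force, as in the following sketch (our illustration of the attack's basic object, not the paper's bounding technique):

def is_invariant(round_fn, basis, offset):
    # Does round_fn map the coset offset + span(basis) over GF(2)
    # into a single coset of span(basis)?  Integers encode bit vectors.
    span = {0}
    for b in basis:                      # enumerate span(basis)
        span |= {x ^ b for x in span}
    images = {round_fn(offset ^ x) for x in span}
    rep = next(iter(images))
    return all((y ^ rep) in span for y in images)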
26 March 2017
In the study of cryptography in $\text{NC}^0$, it was previously known that Goldreich's candidate pseudorandom generator (PRG) is insecure when instantiated with a predicate $P$ in $4$ or fewer variables, if one wants to achieve polynomial stretch (that is, stretching $n$ bits to $n^{1+\epsilon}$ bits for some constant $\epsilon>0$). The current standard candidate predicate for this setting is the ``tri-sum-and'' predicate $\text{TSA}(x) = \text{XOR}_3 \oplus \text{AND}_2(x) = x_1\oplus x_2\oplus x_3\oplus x_4x_5$, yielding a candidate PRG of locality $5$. Moreover, Goldreich's PRG, when instantiated with TSA as the predicate, is known to be secure against several families of attacks, including $\mathbb F_2$-linear attacks and attacks using SDP hierarchies such as the Lasserre/Parrilo sum-of-squares hierarchy.
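Concretely, the locality-5 candidate is easy to state in code. The sketch below (with hypothetical parameter choices) instantiates Goldreich's PRG with the TSA predicate, applying it to a public random 5-subset of the seed per output bit:

import random

def goldreich_prg_tsa(seed_bits, m, graph_seed=0):
    # seed_bits: list of n secret bits; m: output length (take
    # m ~ n^(1+eps) for polynomial stretch).  The hypergraph of
    # 5-subsets is public and fixed by graph_seed.
    rng = random.Random(graph_seed)
    n = len(seed_bits)
    out = []
    for _ in range(m):
        x1, x2, x3, x4, x5 = (seed_bits[j] for j in rng.sample(range(n), 5))
        out.append(x1 ^ x2 ^ x3 ^ (x4 & x5))   # the TSA predicate
    return out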

However, it was previously unknown if $\text{TSA}$ is an ``optimal'' predicate according to other complexity measures: in particular, decision tree (DT-)complexity (i.e., the smallest depth of a binary decision tree computing $P$) and $\mathbb Q$-degree (i.e., the degree of $P$ as a polynomial over $\mathbb Q$), which are important measures of complexity in cryptographic applications such as the construction of an indistinguishability obfuscation scheme. In this work, we ask: Can Goldreich's PRG be instantiated with a predicate with DT-complexity or $\mathbb Q$-degree less than $5$?

We show that this is indeed possible: we give a candidate predicate for Goldreich's PRG with DT-complexity $4$ and $\mathbb Q$-degree $3$; in particular, this candidate PRG therefore has the property that every output bit is a degree 3 polynomial in its input. Moreover, Goldreich's PRG instantiated with our predicate has security properties similar to what is known for $\text{TSA}$, namely security against $\mathbb F_2$-linear attacks and security against attacks from SDP hierarchies such as the Lasserre/Parrilo sum-of-squares hierarchy.

We also show that all predicates with either DT-complexity less than $4$ or $\mathbb Q$-degree less than $3$ yield insecure PRGs, so our candidate predicate simultaneously achieves the best possible locality, DT-complexity, $\mathbb Q$-degree, and $\mathbb F_2$-degree according to all known attacks.
We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program $CC[f,y]$ is parametrized by an arbitrary polynomial-time computable function $f$ along with a target value $y$ and we define $CC[f,y](x)$ to output $1$ if $f(x)=y$ and $0$ otherwise. In other words, the program performs an arbitrary computation $f$ and then compares its output against a target $y$. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function $f$ or the target value $y$ as long as they are chosen from some distribution where $y$ has sufficient pseudo-entropy given $f$. We also extend our result to multi-bit compute-and-compare programs $MBCC[f,y,z](x)$ which output a message $z$ if $f(x)=y$.
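The functionality itself, before obfuscation, is trivial to write down; the point of the result is hiding $f$ and $y$ inside it:

def CC(f, y):
    # CC[f,y](x) = 1 if f(x) == y else 0
    return lambda x: 1 if f(x) == y else 0

def MBCC(f, y, z):
    # multi-bit variant: release the message z exactly when f(x) == y
    return lambda x: z if f(x) == y else None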

Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS '16), which constructed a conjunction obfuscator under a non-standard ``entropic'' ring-LWE assumption, whereas here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take an arbitrary encryption scheme and publish an obfuscated plaintext equality tester that allows users to test whether a ciphertext encrypts some target value $y$; as long as $y$ has sufficient pseudo-entropy this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption, as well as witness encryption to indistinguishability obfuscation which is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded length key cycles.

Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov and Halevi (TCC '15), but only in a carefully restricted manner which is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula and Waters (EUROCRYPT '17) in a seemingly unrelated context.
25 March 2017
We propose simple generic constructions of an indistinguishability obfuscator (IO). Our key tool is an exponentially-efficient indistinguishability obfuscator (XIO), which is the same as IO except that the size of an obfuscated circuit (or the running time of the obfuscator) is slightly smaller than that of a brute-force canonicalizer that outputs the entire truth table of the circuit to be obfuscated. The ``compression factor'' of an XIO indicates how much the XIO compresses the brute-force canonicalizer. In this study, we show that XIO is powerful enough to achieve cutting-edge cryptography. In particular, we propose the following constructions:

1. A single-key weakly succinct secret-key functional encryption (SKFE) scheme is constructed from XIO (even with a bad compression factor) and a one-way function.

2. A single-key weakly succinct public-key functional encryption (PKFE) scheme is constructed from XIO with a good compression factor and a public-key encryption scheme.

3. A single-key weakly succinct PKFE scheme is constructed from XIO (even with a bad compression factor) and an identity-based encryption scheme.

It is known that a sub-exponentially secure single-key weakly succinct PKFE scheme implies IO, and that single-key weakly succinct (resp. multi-key non-succinct) SKFE implies XIO with a bad (resp. good) compression factor. Thus, we obtain two methods of constructing IO: one uses multi-key SKFE and plain public-key encryption schemes, and the other uses single-key weakly succinct SKFE (or XIO) and identity-based encryption schemes. It is not known whether single-key weakly succinct SKFE implies IO (with fully black-box reductions in a certain model, it is impossible), but our single-key weakly succinct SKFE scheme yields many interesting by-products.
ePrint Report Lockable Obfuscation Rishab Goyal, Venkata Koppula, Brent Waters
In this paper we introduce the notion of lockable obfuscation. In a lockable obfuscation scheme there exists an obfuscation algorithm $\mathsf{Obf}$ that takes as input a security parameter $\lambda$, a program $P$, a message $\mathsf{msg}$, and a ``lock value'' $\alpha$, and outputs an obfuscated program $\widetilde{P}$. One can evaluate the obfuscated program $\widetilde{P}$ on any input $x$; the output of evaluation is the message $\mathsf{msg}$ if $P(x) = \alpha$, and a rejecting symbol $\perp$ otherwise.

We proceed to provide a construction of lockable obfuscation and prove it secure under the Learning with Errors (LWE) assumption. Notably, our proof only requires LWE with polynomial hardness and does not require complexity leveraging.

We follow this by describing multiple applications of lockable obfuscation. First, we show how to transform any attribute-based encryption (ABE) scheme into one in which the attributes used to encrypt the message are hidden from any user that is not authorized to decrypt the message. (Such a system is also known as predicate encryption with one-sided security.) The only previous construction, due to Gorbunov, Vaikuntanathan and Wee, is based on a specific ABE scheme of Boneh et al. By enabling the transformation of any ABE scheme, we can inherit different forms and features of the underlying scheme, such as multi-authority support, adaptive security from polynomial hardness, regular language policies, etc.

We also show applications of lockable obfuscation to separation and uninstantiability results. We first show how to recreate separation results in circular encryption that were previously based on indistinguishability obfuscation. This yields new separation results from learning with errors, including a public-key bit encryption scheme that is IND-CPA secure but not circular secure. The tool of lockable obfuscation allows these constructions to be realized almost immediately by translation from previous indistinguishability-obfuscation-based constructions. In a similar vein, we provide random oracle uninstantiability results for the Fujisaki-Okamoto transformation (and related transformations) from lockable obfuscation combined with fully homomorphic encryption. Again, we take advantage of the fact that previous work used indistinguishability obfuscation to obfuscate programs of a form that can easily be translated to lockable obfuscation.
Non-malleable commitment is a fundamental cryptographic tool for preventing man-in-the-middle attacks. Since its proposal by Dolev, Dwork, and Naor in 1991, a rich line of research has steadily reduced the number of communication rounds needed for non-malleable commitment, towards the ultimate goal of constructing non-interactive non-malleable commitment from well-studied hardness assumptions. However, this successful research recently hit a barrier: Pass showed that 2-round non-malleable commitment cannot be based on any, even subexponentially secure, falsifiable assumption via black-box security reductions [Pass, STOC 2011], and the only known construction of non-interactive non-malleable commitment is based on a new and non-falsifiable assumption [Pandey, Pass, Vaikuntanathan, Crypto 2008].

In this work, we present a construction of 2-round non-malleable commitment from time-lock puzzles, the "mechanisms for sending messages to the future" proposed by Rivest, Shamir, and Wagner in 1996, whose original construction has withstood cryptanalysis for two decades. In addition, our construction uses a subexponentially secure injective one-way function and a non-interactive witness-indistinguishable proof system. The key to circumventing Pass's impossibility result lies in leveraging the different nature of the hardness provided by time-lock puzzles and classical cryptographic primitives. Conventionally, cryptographic hardness is defined against adversaries with bounded time (or, equivalently, circuit size); in contrast, the hardness of time-lock puzzles holds against adversaries with bounded parallel time (or circuit depth). This difference allows us to construct commitment schemes that are simultaneously harder than each other according to different complexity measures, which implies a weak form of non-malleability. This is then strengthened through a new 2-round non-malleability amplification technique, and the final protocol is non-malleable even in the fully concurrent setting.
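The time-lock puzzle of Rivest, Shamir, and Wagner makes this asymmetry concrete: the puzzle creator, knowing the factorization of $n$, computes $a^{2^t} \bmod n$ with two cheap modular exponentiations, while a solver must perform $t$ inherently sequential squarings. A minimal sketch of that original construction:

def rsw_create(p, q, t, a=2):
    # fast path for the creator: reduce the exponent 2^t modulo phi(n)
    n = p * q
    e = pow(2, t, (p - 1) * (q - 1))
    return n, pow(a, e, n)

def rsw_solve(n, t, a=2):
    x = a % n
    for _ in range(t):          # t sequential squarings, hard to parallelize
        x = (x * x) % n
    return x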

To the best of our knowledge, this is the first time that time-lock puzzles are used constructively outside timed-release cryptography, and it opens an interesting direction of combining hardness with respect to different complexity measures for achieving cryptographic goals.
In leakage-resilient symmetric cryptography, two important concepts have been proposed to decrease the success rate of differential side-channel attacks. The first is to limit the attacker’s data complexity by restricting the number of observable inputs; the second is to create correlated algorithmic noise by using parallel S-boxes with equal inputs. The latter hinders the typical divide-and-conquer approach of differential side-channel attacks and makes key recovery much more difficult in practice. The use of localized electromagnetic (EM) measurements has already been shown to limit the effectiveness of such measures in previous works based on PRESENT S-boxes and 90nm FPGAs, but it has been left for future investigation in recent publications based on AES S-boxes. We aim at providing helpful results and insights from LDA-preprocessed, multivariate, localized EM attacks against a 45nm FPGA implementation using AES S-boxes. We show that even in the case of densely placed S-boxes (with identical routing constraints), and even when limiting the data complexity to the minimum of only two inputs, the guessing entropy of the key is reduced to only $2^{48}$, which remains well within the key enumeration capabilities of today’s adversaries. Relaxing the S-box placement constraints further reduces the guessing entropy, and increasing the data complexity decreases it down to a direct key recovery. While our results are empirical and reflective of one device and implementation, they emphasize that multivariate localized EM attacks threaten such AES-based leakage-resilient constructions more than currently believed.
ePrint Report High Order Masking of Look-up Tables with Common Shares Jean-Sebastien Coron, Franck Rondepierre, Rina Zeitoun
Masking is an effective countermeasure against side-channel attacks. In this paper, we improve the efficiency of the high-order masking of look-up tables countermeasure introduced at Eurocrypt 2014, based on a combination of three techniques, still with a proof of security in the Ishai-Sahai-Wagner (ISW) probing model. The first technique consists in proving security under the stronger t-SNI definition, which enables the use of $n=t+1$ shares instead of $n=2t+1$ against $t$-th order attacks. The second technique consists in progressively incrementing the number of shares within the countermeasure, from a single share to $n$, thereby reducing the complexity of the countermeasure. The third technique consists in adapting the common shares approach introduced by Coron et al. at CHES 2016, so that half of a randomized look-up table can be pre-computed for multiple S-boxes. When combined, our three techniques lead to a factor 10.7 improvement in efficiency, asymptotically for a large number of shares $n$. For a practical implementation with a reasonable number of shares, we get a 4.8 speed-up factor compared to the initial countermeasure for AES.
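A one-share illustration of the randomized look-up table idea (first-order table recomputation; the countermeasure above generalizes this to $n=t+1$ shares and reuses common shares across S-boxes):

import secrets

def masked_sbox_lookup(sbox, x0, x1):
    # input shares satisfy x = x0 ^ x1; the returned pair
    # (sbox[x] ^ r_out, r_out) is a fresh sharing of sbox[x]
    r_out = secrets.randbelow(256)
    table = [sbox[i ^ x1] ^ r_out for i in range(256)]  # randomized table
    return table[x0], r_out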
Rational proofs, introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is rational and may deviate from the protocol to increase his reward. Guo et al. (ITCS 2014) demonstrated that rational proofs are relevant to delegation of computation. By restricting the prover to be computationally bounded, they presented a one-round delegation scheme with sublinear verification for functions computable by log-space uniform circuits with logarithmic depth. In this work, we study rational proofs in which the verifier is also rational and may deviate from the protocol to decrease the prover's reward. We construct a three-message delegation scheme with sublinear verification for functions computable by log-space uniform circuits with polylogarithmic depth in the random oracle model.
Side-channel analysis and fault attacks are two powerful methods to analyze and break cryptographic implementations. Recently, secure multiparty computation has been applied to prevent side-channel attacks. While multiparty computation is known to be fault-resistant as well, the particular schemes popular for side-channel protection do not currently offer this feature. In this paper we introduce a new secure multiparty circuit to prevent both fault attacks and side-channel analysis. The new scheme builds on an existing side-channel countermeasure and extends it to preserve errors and propagate them until the end of the circuit. A new recombination operation randomizes the output in the case of an error, ensuring that nothing can be learned from the faulty output. After introducing the new secure multiparty circuit, we show how it can be applied to AES and present a performance and security analysis.
ePrint Report Efficient Sanitizable Signatures without Random Oracles Russell W. F. Lai, Tao Zhang, Sherman S. M. Chow, Dominique Schröder
Sanitizable signatures, introduced by Ateniese et al. (ESORICS '05), allow the signer to delegate the sanitization right of signed messages. The sanitizer can modify the message and update the signature accordingly, so that the sanitized part of the message is kept private. For stronger protection of sensitive information, it is desirable that no one can link sanitized message-signature pairs of the same document. This idea was formalized by Brzuska et al. (PKC '10) as unlinkability, and was followed up recently by Fleischhacker et al. (PKC '16). Unfortunately, the existing generic constructions of sanitizable signatures, unlinkable or not, are based on building blocks with specially crafted features, of which efficient (standard-model) instantiations are absent. Basing constructions on existing primitives, or on a conceptually simple new primitive, is more desirable.

In this work, we present two such generic constructions, leading to efficient instantiations in the standard model. The first is based on rerandomizable tagging, a new primitive which may find independent interest; it captures the core accountability mechanism of sanitizable signatures. The second is based on accountable ring signatures (CARDIS '04, ESORICS '15). As an intermediate result, we propose the first accountable ring signature scheme in the standard model.
Recently, gray-box attacks on white-box cryptographic implementations have succeeded. These attacks are more efficient than white-box attacks because they can be performed without detailed knowledge of the target implementation. The success of the gray-box attack is due to the unbalanced encodings used to generate the white-box lookup tables. In this paper, we propose a method to protect white-box implementations against the gray-box attack. The basic idea is to apply Boolean masking before encoding intermediate values during white-box lookup table generation. Compared to the existing white-box AES implementation, the lookup table size and the number of table lookups increase by factors of about 1.5 and 1.6, respectively.
Polytopic cryptanalysis was introduced at EUROCRYPT 2016 as a cryptanalytic technique for low-data-complexity attacks on block ciphers. In this paper, we give an account of how the technique was developed, quickly go over the basic ideas and techniques of polytopic cryptanalysis, look into how the technique differs from previously existing cryptographic techniques, and discuss whether the attack angle can be useful for developing improved cryptanalytic techniques.
This paper puts forward an efficient broadcast encryption scheme in the public-key setting, employing the ternary-tree subset difference method for revocation. It provides outsider anonymity, preventing the revoked users from learning any information about the message and concealing the set of subscribed users from the revoked users. Our approach utilizes the composite-order bilinear group setting and exhibits a significant improvement in broadcast efficiency. The proposed scheme compares favourably with existing similar schemes in the standard model. The public key and secret key sizes are poly-logarithmic, while the ciphertext size is sublinear in the total number of users. Our scheme achieves selective security against chosen-plaintext attack in the standard model under reasonable assumptions.
ePrint Report A note on how to (pre-)compute a ladder Thomaz Oliveira, Julio López, Francisco Rodríguez-Henríquez
In the RFC 7748 memorandum, the Internet Research Task Force specified a Montgomery ladder scalar multiplication function based on two recently proposed prime elliptic curves. The purpose of this function is to support the Diffie-Hellman key exchange algorithm included in the upcoming version of the Transport Layer Security cryptographic protocol. In this paper, we introduce a Montgomery ladder variant that accelerates the fixed-point multiplication function when applied in the Diffie-Hellman key pair generation step. Our function combines a right-to-left variant of the Montgomery ladder with the pre-computation of multiples of the base point. Requiring only modest memory resources and a small implementation effort with respect to the RFC 7748 code, it obtains significant performance improvements on desktop architectures. To our knowledge, this is the first proposal of a Montgomery ladder procedure for prime elliptic curves that admits the extensive use of pre-computation.
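For reference, the classical left-to-right Montgomery ladder has a perfectly regular add-and-double pattern; the contribution above is a right-to-left variant that folds in precomputed multiples of the fixed base point. A generic sketch over abstract group operations:

def montgomery_ladder(k, P, add, dbl, identity):
    # computes [k]P with one add and one double per scalar bit,
    # maintaining the invariant r1 = r0 + P throughout
    r0, r1 = identity, P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            r0, r1 = add(r0, r1), dbl(r1)
        else:
            r0, r1 = dbl(r0), add(r0, r1)
    return r0

# sanity check over the additive group of integers: 13 * 5 == 65
assert montgomery_ladder(13, 5, lambda a, b: a + b, lambda a: 2 * a, 0) == 65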
We solve the problem of finding the success rate of an optimal side-channel attack targeting at once the first and the last round of a block cipher. We relate the results to the properties of the direct and inverse substitution boxes (when they are bijective), in terms of confusion coefficients.
Most modern actively secure multiparty computation protocols make use of a function and input independent pre-processing phase. This pre-processing phase is tasked with producing some form of correlated randomness and distributing it to the parties. Whilst the “online” phase of such protocols is exceedingly fast, the bottleneck comes in the pre-processing phase. In this paper we examine situations where the computing parties in the online phase may want to outsource the pre-processing phase to another set of parties, or to a sub-committee. We examine how this can be done, and also describe situations where this may be a benefit.
ePrint Report Side-channel Analysis of Lightweight Ciphers: Does Lightweight Equal Easy? Annelie Heuser, Stjepan Picek, Sylvain Guilley, Nele Mentens
Side-channel attacks represent a powerful category of attacks against cryptographic devices. Still, side-channel analysis for lightweight ciphers is much less investigated than, for instance, for AES. Although intuition may suggest that lightweight ciphers are weaker in terms of side-channel resistance, this remains to be confirmed and quantified. In this paper, we consider various side-channel analysis metrics which should provide insight into the resistance of lightweight ciphers against side-channel attacks. In particular, for the non-profiled scenario we use the theoretical confusion coefficient and empirical correlation power analysis. Furthermore, we conduct a profiled side-channel analysis using various machine learning attacks on PRESENT and AES. Our results show that the difference between AES and lightweight ciphers is smaller than one would expect. Interestingly, we observe that the studied 4-bit S-boxes differ in side-channel resilience, while the difference between the 8-bit ones is only theoretical.
This paper explores a new type of MAC called a message-recovery MAC (MRMAC). MRMACs have an additional input $R$ that gets recovered upon verification. Receivers must execute verification in order to recover $R$, making the verification process unskippable. Such a feature helps avoid mis-implemented verification algorithms. The syntax and security notions of MRMACs are rigorously formulated. In particular, we formalize the notion of unskippability and present a construction of an unskippable MRMAC from a tweakable cipher and a universal hash function. Our construction comes with formal security proofs. We extend the idea of MRMACs to a new type of authenticated encryption called verification-unskippable AE (VUAE). We propose a generic Enc-then-MRMAC composition which realizes VUAE. The encryption part needs to satisfy a new security notion called one-time undecipherability. We provide three constructions that are one-time undecipherable, proven secure under various security models.
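The unskippability idea can be illustrated with a toy HMAC-based sketch (ours; the construction above uses a tweakable cipher and a universal hash instead): $R$ is hidden under a pad, and the verification routine is the only way to obtain it.

import hashlib, hmac

def mrmac_tag(k_enc: bytes, k_mac: bytes, msg: bytes, r: bytes):
    pad = hashlib.sha256(k_enc + msg).digest()[:len(r)]
    c = bytes(a ^ b for a, b in zip(r, pad))        # R hidden under a pad
    t = hmac.new(k_mac, msg + c, hashlib.sha256).digest()
    return c, t

def mrmac_verify(k_enc: bytes, k_mac: bytes, msg: bytes, c: bytes, t: bytes):
    if not hmac.compare_digest(t, hmac.new(k_mac, msg + c, hashlib.sha256).digest()):
        return None                                  # reject: R never appears
    pad = hashlib.sha256(k_enc + msg).digest()[:len(c)]
    return bytes(a ^ b for a, b in zip(c, pad))      # recovered R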
Sampling integers with Gaussian distribution is a fundamental problem that arises in almost every application of lattice cryptography, and it can be both time-consuming and challenging to implement. Most previous work has focused on the optimization and implementation of integer Gaussian sampling in the context of specific applications, with fixed sets of parameters. We present new algorithms for discrete Gaussian sampling that are generic (application-independent), efficient, and easily implemented in constant time without incurring a substantial slow-down, making them more resilient to side-channel (e.g., timing) attacks. As an additional contribution, we present new analytical techniques that can be used to simplify the precision/security evaluation of floating-point cryptographic algorithms, and an experimental comparison of our algorithms with previous algorithms from the literature.
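One classical generic approach is sampling from a precomputed cumulative distribution table (CDT). The sketch below shows the idea; a hardened version would use fixed-point table entries and a constant-time scan instead of binary search, precisely the kind of issue such work addresses:

import bisect, math, secrets

def build_cdt(sigma, tail=12):
    # CDT for the nonnegative half of D_{Z,sigma}; z = 0 gets half
    # weight so applying a random sign yields the full distribution
    bound = int(math.ceil(tail * sigma))
    w = [math.exp(-(z * z) / (2 * sigma * sigma)) for z in range(bound + 1)]
    w[0] /= 2
    total, acc, cdt = sum(w), 0.0, []
    for x in w:
        acc += x / total
        cdt.append(acc)
    return cdt

def cdt_sample(cdt):
    # draw u uniform in [0,1) and binary-search the table
    u = secrets.randbelow(1 << 53) / float(1 << 53)
    z = bisect.bisect_right(cdt, u)
    return 0 if z == 0 else (z if secrets.randbelow(2) else -z)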
ePrint Report Pseudorandomness of Ring-LWE for Any Ring and Modulus Chris Peikert, Oded Regev, Noah Stephens-Davidowitz
We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to the decision version of (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus.
ePrint Report Threshold Fully Homomorphic Encryption Aayush Jain, Peter M. R. Rasmussen, Amit Sahai
We formally define and give the first construction of (leveled) threshold fully homomorphic encryption for any access structure induced by a monotone boolean formula and in particular for the threshold access structure. Our construction is based on the learning with errors assumption and can be instantiated with any existing homomorphic encryption scheme that satisfies fairly general conditions, such as Gentry, Sahai, Waters (CRYPTO 2013) and Brakerski, Gentry, Vaikuntanathan (ITCS 2012).

From threshold homomorphic encryption, we construct function secret sharing and distributed pseudorandom functions for the aforementioned access structures. No such constructions were known prior to this work.
20 March 2017
The analysis of real-world protocols, in particular key exchange protocols and protocols building on them, is a very complex, error-prone, and tedious task. Besides the complexity of the protocols themselves, one important reason for this is that the security of the protocols has to be reduced to the security of the underlying cryptographic primitives time and again for every protocol.

We would therefore like to get rid of reduction proofs for real-world key exchange protocols as much as possible, and in many cases altogether, including for higher-level protocols which use the exchanged keys. Some first steps have been taken in this direction, but existing work is still quite limited; for example, it does not support Diffie-Hellman (DH) key exchange, a prevalent cryptographic primitive in real-world protocols.

In this paper, building on work by K{\"u}sters and Tuengerthal, we provide an ideal functionality in the universal composability setting which supports several common cryptographic primitives, including DH key exchange. This functionality helps to avoid reduction proofs in the analysis of real-world protocols and often eliminates them completely. We also propose a new general ideal key exchange functionality which allows higher-level protocols to use exchanged keys in an ideal way. As a proof of concept, we apply our framework to three practical DH key exchange protocols, namely ISO 9798-3, SIGMA, and OPTLS.
