International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.


18 October 2017
We provide an alternative method for constructing lattice-based digital signatures which does not use the "hash-and-sign" methodology of Gentry, Peikert, and Vaikuntanathan (STOC 2008). Our resulting signature scheme is secure, in the random oracle model, based on the worst-case hardness of the $\tilde{O}(n^{1.5})$-SIVP problem in general lattices. The secret key, public key, and signature size of our scheme are smaller than in all previous instantiations of the hash-and-sign signature, and our signing algorithm is also much simpler, requiring just a few matrix-vector multiplications and rejection-sampling steps. We then also show that by slightly changing the parameters, one can obtain even more efficient signatures based on the hardness of the Learning With Errors problem. Our construction transfers naturally to the ring setting, where the size of the public and secret keys can be significantly shrunk, which results in the most practical to-date provably secure signature scheme based on lattices.
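As a rough illustration of the rejection-sampling step mentioned above, the toy sketch below implements a generic Fiat-Shamir-with-aborts signing loop over $\mathbb{Z}_q$. All parameters, the hash-derived challenge, and the acceptance bound are illustrative assumptions, not the paper's actual scheme.

```python
# Toy Fiat-Shamir-with-aborts signing loop (illustrative assumptions only).
import hashlib
import numpy as np

q, n, m, B = 8380417, 16, 32, 1 << 14      # assumed toy parameters
rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n, m))        # public matrix
s = rng.integers(-1, 2, size=m)            # short secret vector
t = A @ s % q                              # public key

def challenge(w, msg):
    h = hashlib.sha256(w.tobytes() + msg).digest()
    return int.from_bytes(h[:2], "big") % 16   # small scalar challenge (toy choice)

def sign(msg):
    while True:                            # rejection sampling: restart until z is "safe"
        y = rng.integers(-B, B + 1, size=m)
        c = challenge(A @ y % q, msg)
        z = y + c * s
        if np.max(np.abs(z)) <= B - 16:    # accept only z whose distribution is independent of s
            return z, c

def verify(msg, z, c):
    w = (A @ z - c * t) % q                # equals A*y mod q for an honest signature
    return np.max(np.abs(z)) <= B - 16 and challenge(w, msg) == c

z, c = sign(b"hello")
assert verify(b"hello", z, c)
```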
17 October 2017
We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage.

We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate the performance on the three examples mentioned above. We demonstrate marked improvement over prior implementations for these computations.
ePrint Report A Faster Software Implementation of the Supersingular Isogeny Diffie-Hellman Key Exchange Protocol Armando Faz-Hernández, Julio López, Eduardo Ochoa-Jiménez, Francisco Rodríguez-Henríquez
Since its introduction by Jao and De Feo in 2011, the supersingular isogeny Diffie-Hellman (SIDH) key exchange protocol has positioned itself as a promising candidate for post-quantum cryptography. One salient feature of the SIDH protocol is that it requires exceptionally short key sizes. However, the latency associated with SIDH is higher than that reported for other post-quantum cryptosystem proposals. Aiming to accelerate the SIDH runtime performance, we present in this work several algorithmic optimizations targeting both elliptic-curve and field arithmetic operations. We introduce, in the context of the SIDH protocol, a more efficient approach for calculating the elliptic curve operation P + [k]Q. Our strategy achieves a factor-1.4 speedup compared with the popular variable-three-point ladder algorithm regularly used in the SIDH shared-secret phase. Moreover, profiting from pre-computation techniques, our algorithm yields a factor-1.7 acceleration for the computation of this operation in the SIDH key generation phase. We also present an optimized evaluation of the point tripling formula, and discuss several algorithmic and implementation techniques that lead to faster field arithmetic computations. A software implementation of the above improvements on an Intel Skylake Core i7-6700 processor gives a factor-1.33 speedup over the state-of-the-art software implementation of the SIDH protocol reported by Costello-Longa-Naehrig at CRYPTO 2016.
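For readers unfamiliar with the operation P + [k]Q that the optimized ladder computes, the following toy sketch evaluates it naively with affine short-Weierstrass arithmetic over a small prime field. The curve, field and points are illustrative assumptions; SIDH itself works with Montgomery-form curves over $\mathbb{F}_{p^2}$ and far more efficient projective ladder formulas.

```python
# Naive affine computation of P + [k]Q on a toy curve y^2 = x^3 + a*x + b (mod p).
p, a, b = 101, 2, 3   # assumed toy parameters

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                                        # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def p_plus_kq(P, Q, k):                               # the operation the paper accelerates
    return add(P, mul(k, Q))

# pick any affine point on the toy curve for a quick check
P = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
Q = mul(5, P)
print(p_plus_kq(P, Q, 7))
```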
ePrint Report Attacking Deterministic Signature Schemes using Fault Attacks Damian Poddebniak, Juraj Somorovsky, Sebastian Schinzel, Manfred Lochter, Paul Rösler
Many digital signature schemes rely on random numbers that are unique and non-predictable per signature. Failures of random number generators may have catastrophic effects such as compromising private signature keys. In recent years, many widely used cryptographic technologies have adopted deterministic signature schemes because they are presumed to be safer to implement.

In this paper, we analyze the security of deterministic ECDSA and EdDSA signature schemes and show that the elimination of random number generators in these schemes enables new kinds of fault attacks. We formalize these attacks and introduce practical attack scenarios against EdDSA using the Rowhammer fault attack. EdDSA is used in many widely deployed protocols such as TLS, SSH and IPSec, and we show that these protocols are not vulnerable to our attack. We formalize the requirements that a protocol using these deterministic signature schemes must meet to be vulnerable, and discuss mitigation strategies and their effect on fault attacks against deterministic signature schemes.
ePrint Report Homomorphic SIMMD Operations: Single Instruction Much More Data Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
In 2014, Smart and Vercauteren introduced a packing technique for homomorphic encryption schemes by decomposing the plaintext space using the Chinese Remainder Theorem. This technique makes it possible to encrypt multiple data values simultaneously into one ciphertext and execute Single Instruction Multiple Data operations homomorphically. In this paper we improve and generalize their results by introducing a flexible Laurent polynomial encoding technique and by using a more fine-grained CRT decomposition of the plaintext space. The Laurent polynomial encoding provides a convenient common framework for all conventional ways in which input data types can be represented, e.g. finite field elements, integers, rationals, floats and complex numbers. Our methods greatly increase the packing capacity of the plaintext space, as well as one's flexibility in optimizing the system parameters with respect to efficiency and/or security.
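As a small analogue of the CRT-based packing described above, the sketch below packs several integer "slots" into one residue modulo a product of pairwise coprime moduli, so that a single integer addition or multiplication acts slot-wise. This is only the plain-integer version of the idea; the paper works with polynomial (Laurent) decompositions of the plaintext ring of an actual homomorphic encryption scheme, and the moduli here are assumed.

```python
# Plain CRT packing over coprime integer moduli (illustrative analogue only).
from math import prod

moduli = [7, 11, 13, 17]                  # pairwise coprime "slots" (assumed)
M = prod(moduli)

def pack(values):
    # CRT: find x mod M with x = values[i] (mod moduli[i]) for every slot i
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x = (x + v * Mi * pow(Mi, -1, m)) % M
    return x

def unpack(x):
    return [x % m for m in moduli]

a = pack([1, 2, 3, 4])
b = pack([5, 6, 7, 8])
# one integer operation acts component-wise on all packed slots
assert unpack((a + b) % M) == [(1 + 5) % 7, (2 + 6) % 11, (3 + 7) % 13, (4 + 8) % 17]
assert unpack((a * b) % M) == [(1 * 5) % 7, (2 * 6) % 11, (3 * 7) % 13, (4 * 8) % 17]
```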
ePrint Report Conditional Cube Attack on Round-Reduced River Keyak Wenquan Bi, Zheng Li, Xiaoyang Dong, Lu Li, Xiaoyun Wang
This paper evaluates the security level of River Keyak against cube-like attacks. River Keyak is the only lightweight scheme of the Keccak-permutation-based authenticated encryption cipher Keyak, which is one of the 16 survivors of the 3rd round of the CAESAR competition. Dinur et al. gave a seven-round cube-like attack on Lake Keyak (1600-bit) using the divide-and-conquer method at EUROCRYPT 2015, and Huang et al. then improved the result to 8 rounds using a new conditional cube attack at EUROCRYPT 2017. For River Keyak, however, the 800-bit state is so small that the equivalent key (256-bit capacity) occupies two lanes, so these attacks cannot be applied to River Keyak trivially.

In this paper, we comprehensively explore the conditional cube attack on the small-state (800-bit) River Keyak. First, we find a new conditional cube variable which has much weaker diffusion than Huang et al.'s; this makes the conditional cube attack possible for the small-state (800-bit) River Keyak. We then find enough cube variables for 6/7-round River Keyak and successfully launch key recovery attacks on 6/7-round River Keyak with time complexities $2^{33}$ and $2^{49}$, respectively. We also verify the 6- and 7-round attacks on a laptop. Finally, since it is difficult to find enough cube variables for an 8-round attack on the 800-bit state, we use the linear structure technique together with our new conditional cube variable to greatly increase the degrees of freedom available for finding cube variables, and launch an 8-round conditional cube attack with time complexity $2^{81}$. These are the first cryptanalysis results on round-reduced River Keyak. Our attacks do not threaten the full-round (12 rounds) River Keyak.
13 October 2017
Oblivious Transfer (OT) is a simple, yet fundamental primitive which suffices to achieve almost every cryptographic application. In a recent work (Latincrypt '15), Chou and Orlandi (CO) present the most efficient, fully UC-secure OT protocol to date and argue its security under the CDH assumption. Unfortunately, a subsequent work by Genc et al. (ePrint '17) exposes a flaw in their proof which renders the CO protocol insecure. In this work, we make the following contributions: We first point out two additional, previously undiscovered flaws in the CO protocol and then show how to patch the proof with respect to static and malicious corruptions in the UC model under the stronger Gap Diffie-Hellman (GDH) assumption. With the proof failing for adaptive corruptions even under the GDH assumption, we then present a novel OT protocol which builds on ideas from the CO protocol and can be proven fully UC-secure under the CDH assumption. Interestingly, our new protocol is actually significantly more efficient (roughly by a factor of two) than the CO protocol. This improvement is made possible by avoiding costly redundancy in the symmetric encryption scheme used in the CO protocol. Our ideas can also be applied to the original CO protocol, which yields a similar gain in efficiency.
Digital rights management is an important technique for protecting digital content from abuse. It usually faces severe challenges because of the untrusted environments in which its applications execute. This condition is formally described by the white-box attack model. White-box cryptography aims at protecting software implementations of cryptographic algorithms from white-box attacks, and hence can be employed to provide security guarantees for digital rights management. Key extraction, code lifting, and illegal distribution are three major threats to digital rights management applications, and they severely compromise the interests of content producers. In this paper, we propose the first solution based on white-box cryptography that counters these three threats simultaneously, by adding traceability to a white-box scheme that already provides unbreakability and incompressibility. Specifically, we constructively confirm that there exists a secure white-box compiler which can generate traceable white-box programs, by hiding slight perturbations in the lookup-table-based white-box implementation. Security and performance analyses show that our solution can be effectively achieved in practice.
ePrint Report Architecture level Optimizations for Kummer based HECC on FPGAs Gabriel Gallin, Turku Ozlum Celik, Arnaud Tisserand
On the basis of a software implementation of Kummer-based HECC over Fp presented in 2016, we propose new hardware architectures. Our main objectives are: definition of architecture parameters (type, size and number of units for arithmetic operations, memory and internal communications); and architecture style optimization to exploit internal parallelism. Several architectures have been designed and implemented on FPGAs for scalar multiplication acceleration in embedded systems. Our results show a significant area reduction, at similar computation time, compared with the best state-of-the-art hardware implementations of curve-based solutions.
ePrint Report Automatic Characterization of Exploitable Faults: A Machine Learning Approach Sayandeep Saha, Dirmanto Jap, Sikhar Patranabis, Debdeep Mukhopadhyay, Shivam Bhasin, Pallab Dasgupta
Characterization of the fault space of a cipher to filter out a set of faults potentially exploitable for fault attacks (FA) is a problem of immense practical value. Quantitative knowledge of the exploitable fault space is desirable in several applications, such as security evaluation, cipher design and implementation, and the design and testing of countermeasures. In this work, we investigate this problem in the context of block ciphers. The formidable size of the fault space of a block cipher calls for automation, which should be able to characterize each individual fault instance quickly. On the other hand, the automation is expected to be applicable to most block cipher constructions. Existing techniques for automated fault attacks do not satisfy both of these goals simultaneously and hence are not directly applicable to exploitable fault characterization. In this paper, we present a supervised machine learning (ML) assisted automated framework which successfully addresses both of the criteria mentioned. The key idea is to extrapolate the knowledge of some existing FAs on a cipher to rapidly identify new attack instances on the same cipher. Experimental validation of the proposed framework on two state-of-the-art block ciphers, PRESENT and LED, establishes that our approach is able to provide fairly good accuracy in identifying exploitable fault instances at reasonable cost. Finally, the effect of different S-Boxes on the fault space of a cipher is evaluated using the framework.
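The following sketch only illustrates the supervised-learning flavor of such a framework: a classifier is trained on fault instances that have already been labeled exploitable or benign, and is then used to predict labels for unseen instances. The feature encoding, the random placeholder data and the choice of a random forest are assumptions for illustration, not the framework from the paper.

```python
# Minimal supervised classification sketch (placeholder data and features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_known = rng.random((200, 8))        # hypothetical fault features (round, position, model, ...)
y_known = rng.integers(0, 2, 200)     # 1 = exploitable, 0 = benign (placeholder labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_known, y_known)

X_new = rng.random((5, 8))            # new, uncharacterized fault instances
print(clf.predict(X_new))             # predicted exploitability labels
```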
Protecting malware using encryption prevents an analyst, defending some computer(s) in the network, from analyzing the malicious code and identifying the intentions of the malware author. We discuss malware encryption schemes that use environmental encryption keys, generated from the computer(s) the malware author intends to attack, and that can rerandomize ciphertexts to make the malware samples in the network indistinguishable from one another. We are interested in hiding the intentions and identity of the malware author, not in hiding the existence of malware.
In this work, we study unconditionally-secure multi-party computation (MPC) tolerating $t < n/3$ corruptions, where $n$ is the total number of parties involved. In this setting, it is well known that if the underlying network is completely asynchronous, then one can achieve only statistical security; moreover, it is impossible to ensure input provision and consider the inputs of all the honest parties. The best known statistically-secure asynchronous MPC (AMPC) with $t<n/3$ requires a communication of $O(n^5)$ field elements per multiplication. We consider a partially synchronous setting, where the parties are assumed to be globally synchronized initially for a few rounds, after which the network becomes completely asynchronous. In such a setting, we present an MPC protocol which requires $O(n^2)$ communication per multiplication while ensuring input provision. Our MPC protocol relies on a new four-round, communication-efficient statistical verifiable secret-sharing (VSS) protocol with broadcast communication complexity independent of the number of secret-shared values.
We give a first tight security reduction for a conversion from a weakly secure public-key encryption scheme to an IND-CCA-secure key-encapsulation mechanism in the quantum random oracle model. To the best of our knowledge, previous reductions are non-tight, as the security levels of the obtained schemes are degraded to at most half or a quarter of the original security level (Boneh, Dagdelen, Fischlin, Lehmann, Schaffner, and Zhandry (CRYPTO 2012), Targhi and Unruh (TCC 2016-B), and Hofheinz, Hövelmanns, and Kiltz (TCC 2017)).
In this paper, we initiate the study of garbled protocols --- a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties and outputs the entire transcript of the distributed protocol.

We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random/reference string model). Next, using garbled protocols, we obtain a general compiler that compresses any arbitrary-round multiparty secure computation protocol into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were only known assuming witness encryption or learning-with-errors. Benefiting from our generic approach, we also obtain two-round protocols (i) for the setting of random access machines (RAM programs) while keeping the (amortized) communication and computational costs proportional to running times, (ii) making only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations, and (iii) satisfying semi-honest security in the plain model.

Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
ePrint Report Secure Multi-Party Computation in Large Networks Varsha Dani, Valerie King, Mahnush Movahedi, Jared Saia, Mahdi Zamani
We describe scalable protocols for solving the secure multi-party computation (MPC) problem among a significant number of parties. We consider both the synchronous and the asynchronous communication models. In the synchronous setting, our protocol is secure against a static malicious adversary corrupting less than a $1/3$ fraction of the parties. In the asynchronous environment, we allow the adversary to corrupt less than a $1/8$ fraction of parties. For any deterministic function that can be computed by an arithmetic circuit with $m$ gates, both of our protocols require each party to send a number of messages and perform an amount of computation that is $\tilde{O}(m/n + \sqrt n)$. We also show that our protocols provide statistical and universally-composable security.

To achieve our asynchronous MPC result, we define the threshold counting problem and present a distributed protocol to solve it in the asynchronous setting. This protocol is load balanced, with computation, communication and latency complexity of $O(\log{n})$, and can also be used for designing other load-balanced applications in the asynchronous communication model.
In this paper, we propose new classes of trapdoor functions to solve the closest vector problem in lattices. Specifically, we construct lattices based on properties of polynomials for which the closest vector problem is hard to solve unless some trapdoor information is revealed. We thoroughly analyze the security of our proposed functions using state-of-the-art attacks and results on lattice reductions. Finally, we describe how our functions can be used to design quantum-safe encryption schemes with reasonable public key sizes. In particular, our scheme can offer around $106$ bits of security with a public key size of around $6.4$ KB. Our encryption schemes are efficient with respect to key generation, encryption and decryption.
An Order-Revealing Encryption (ORE) scheme gives a public procedure by which two ciphertexts can be compared to reveal the order of their underlying plaintexts. The ideal security notion for ORE is that only the order is revealed --- anything else, such as the distance between plaintexts, is hidden. The only known constructions of ORE achieving such ideal security are based on cryptographic multilinear maps and are currently too impractical for real-world applications. In this work, we give evidence that building ORE from weaker tools may be hard. Indeed, we show black-box separations between ORE and most symmetric-key primitives, as well as public key encryption and anything else implied by generic groups in a black-box way. Thus, any construction of ORE must either (1) achieve weaker notions of security, (2) be based on more complicated cryptographic tools, or (3) require non-black-box techniques. This suggests that any ORE achieving ideal security will likely be somewhat inefficient. Central to our proof is an impossibility result for something we call information-theoretic ORE, which has connections to tournament graphs and a theorem by Erdős. This impossibility result will be useful for proving other black-box separations for ORE.
11 October 2017
ePrint Report No right to remain silent: Isolating Malicious Mixes Hemi Leibowitz, Ania Piotrowska, George Danezis, Amir Herzberg
Mix networks are a key technology for providing network anonymity, used for messaging, voting and private lookups. However, simple mix networks are insecure against malicious mixes, which can drop or delay packets to facilitate traffic analysis attacks. Mix networks with provable robustness address this by using complex and expensive proofs of correct shuffling, which come with a cost and make assumptions that are unnatural in many settings in which mix networks are deployed. We present a synchronous mix network mechanism that is provably secure against malicious mixes while retaining the simplicity, efficiency and practicality of mix network designs. It uses first-hand experience of unreliability, observed by mixes and clients, to derive a mix 'reputation' and to ensure that each active attack -- including dropping of packets -- results in a reduction in the connectivity of the malicious mixes, thus reducing their ability to attack. Besides the practical importance of the mechanism itself, our results are applicable to other mix network designs and anonymous communication, and even to unrelated settings in which reputation could provide an effective defense against malicious participants.
Asymptotically, the best known algorithms for solving the Shortest Vector Problem (SVP) in a lattice of dimension $n$ are sieve algorithms, which have heuristic complexity estimates ranging from $(4/3)^{n+o(n)}$ down to $(3/2)^{n/2 +o(n)}$ when Locality Sensitive Hashing techniques are used. In practice, however, sieve algorithms are outperformed by pruned enumeration algorithms by several orders of magnitude, despite the larger super-exponential asymptotic complexity $2^{\Theta(n \log n)}$ of the latter. In this work, we show a concrete improvement of sieve-type algorithms. Precisely, we show that a few calls to the sieve algorithm in lattices of dimension less than $n-d$ suffice to solve SVP in dimension $n$, where $d = \Theta(n/\log n)$.

Although our improvement is only sub-exponential, its practical effect in relevant dimensions is quite significant. We implemented it over a simple sieve algorithm with $(4/3)^{n+o(n)}$ complexity, and it outperforms the best sieve algorithms from the literature by a factor 10 in dimensions 70-80. It performs less than an order of magnitude slower than pruned enumeration in the same range.

By design, this improvement can also be applied to most other variants of sieve algorithms, including LSH sieve algorithms and tuple-sieve algorithms. In this light, we may expect sieve-techniques to outperform pruned enumeration in practice in the near future.
Logic encryption is an important hardware protection technique that adds extra keys to lock a given circuit. With the recent discovery of the effective SAT-based attack, new enhancement methods such as SARLock and Anti-SAT have been proposed to thwart the SAT-based and similar exact attacks. Since these new techniques all have very low error rates, approximate attacks such as Double DIP and AppSAT have been proposed to find an almost correct key with low error rate. However, measuring the performance of an approximate attack is extremely challenging, since exact computation of the error rate is very expensive, while estimation based on random sampling has low confidence. In this paper, we develop a suite of scientific encryption benchmarks in which a wide range of error rates is possible and the error rate can be found by simple inspection. Then, we conduct a thorough comparative study of different approximate attacks, including AppSAT and Double DIP. The results show that approximate attacks are far from closing the gap and more investigation is needed in this area.
ePrint Report Hash Proof Systems over Lattices Revisited Fabrice Benhamouda, Olivier Blazy, Léo Ducas, Willy Quach
Hash Proof Systems or Smooth Projective Hash Functions (SPHFs) are a form of implicit arguments introduced by Cramer and Shoup at Eurocrypt'02. They have found many applications since then, in particular for authenticated key exchange or honest-verifier zero-knowledge proofs. While they are relatively well understood in group settings, they seem painful to construct directly in the lattice setting.

Only one construction of an SPHF over lattices has been proposed in the standard model, by Katz and Vaikuntanathan at Asiacrypt'09. But this construction has an important drawback: it only works for an ad-hoc language of ciphertexts. Concretely, the corresponding decryption procedure needs to be tweaked and now requires $q$ trapdoor inversion attempts, where $q$ is the modulus of the underlying Learning With Errors (LWE) problem.

Using harmonic analysis, we explain the source of this limitation, and propose a way around it. We show how to construct SPHFs for standard languages of LWE ciphertexts, and make our construction explicit over a tag-IND-CCA2 encryption scheme à la Micciancio-Peikert (Eurocrypt'12). We then improve our construction and our analysis in the case where the tag is known in advance or fixed (in the latter case, the scheme is only IND-CPA) with a super-polynomial modulus, to get a stronger type of SPHF, which had never been achieved before for any language over lattices.

Finally, we conclude with applications of these SPHFs: password-based authenticated key exchange, honest-verifier zero-knowledge proofs, and a relaxed version of witness encryption.
ePrint Report Large FHE gates from Tensored Homomorphic Accumulator Guillaume Bonnoron, Léo Ducas, Max Fillinger
The main bottleneck of all known Fully Homomorphic Encryption schemes lies in the bootstrapping procedure invented by Gentry (STOC'09). The cost of this procedure can be mitigated either using Homomorphic SIMD techniques, or by performing larger computation per bootstrapping procedure.

In this work, we propose new techniques that allow us to perform more operations per bootstrapping in FHEW-type schemes (EUROCRYPT'15). While maintaining the quasi-quadratic $\tilde O(n^2)$ complexity of the whole cycle, our new scheme allows us to evaluate gates with $\Omega(\log n)$ input bits, which constitutes a quasi-linear speed-up. Our scheme is also very well adapted to large threshold gates, natively admitting up to $\Omega(n)$ inputs. This could be helpful for the homomorphic evaluation of neural networks.

Our theoretical contribution is backed by a preliminary prototype implementation, which can perform $6$-to-$6$ bit gates in less than $10$ seconds on a single core, as well as threshold gates over $63$ input bits even faster.
ePrint Report A signature scheme from Learning with Truncation Jeffrey Hoffstein, Jill Pipher, William Whyte, Zhenfei Zhang
In this paper we revisit the modular lattice signature scheme and its efficient instantiation known as pqNTRUSign. First, we show that a modular lattice signature scheme can be based on a standard lattice problem. As the fundamental problem that needs to be solved by the signer or a potential forger is recovering a lattice vector with a restricted norm, given the least significant bits, we refer to this general class of problems as the "learning with truncation" problem.

We show that by replacing the uniform sampling in pqNTRUSign with a bimodal Gaussian sampling, we can further reduce the size of a signature. As an example, we show that the size of the signature can be as low as 4608 bits for a security level of 128 bits.

The most significant new contribution, enabled by this Gaussian sampling version of pqNTRUSign, is that we can now perform batch verification, which allows the verifier to check approximately 2000 signatures in a single verification process.
Multidimensional linear cryptanalysis of block ciphers is improved in this work by introducing a number of new ideas. First, formulae are given to compute approximate multidimensional distributions of the cipher's internal bits. Conventional statistics such as the LLR (log-likelihood ratio) are not suitable for Matsui's Algorithm 2 with large-dimension data, as the observations depend on too many cipher key bits. So, second, a new statistic which reflects the structure of the cipher round is constructed instead. Third, computing the statistic values which fall into a critical region is presented as an optimisation problem for which an efficient algorithm is suggested. The algorithm works much faster than brute-forcing all relevant key bits to compute the statistic. An attack on 16-round DES was implemented. We obtain an improvement over Matsui's attack on DES in data and time complexity while keeping the success probability the same.
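For reference, the conventional LLR statistic mentioned above can be computed from the empirical counts of the $m$-bit values and their predicted probabilities, as in the sketch below (tested against the uniform distribution). This is only the textbook statistic that the paper argues against for large dimensions, not the new round-structure statistic it constructs.

```python
# Textbook LLR statistic for multidimensional linear cryptanalysis:
# sum over m-bit values eta of N_eta * log(p_eta * 2^m), i.e. the log-likelihood
# ratio of the predicted distribution p against the uniform distribution.
import math

def llr(counts, p):
    # counts[eta]: observed count of value eta among the N samples
    # p[eta]:      predicted probability of eta under the linear approximation
    m = int(math.log2(len(p)))
    return sum(c * math.log(q * (1 << m)) for c, q in zip(counts, p) if c > 0)

# tiny illustrative example with m = 2 (4 possible values)
print(llr([30, 20, 25, 25], [0.30, 0.20, 0.25, 0.25]))
```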
ePrint Report A Framework for Efficient Adaptively Secure Composable Oblivious Transfer in the ROM Paulo S. L. M. Barreto, Bernardo David, Rafael Dowsley, Kirill Morozov, Anderson C. A. Nascimento
Oblivious Transfer (OT) is a fundamental cryptographic protocol that finds a number of applications, in particular as an essential building block for two-party and multi-party computation. We construct a round-optimal (2 rounds) universally composable (UC) protocol for oblivious transfer secure against active adaptive adversaries from any OW-CPA secure public-key encryption scheme with certain properties in the random oracle model (ROM). In terms of computation, our protocol only requires the generation of a public/secret-key pair, two encryption operations and one decryption operation, apart from a few calls to the random oracle. In terms of communication, our protocol only requires the transfer of one public key, two ciphertexts, and three binary strings of roughly the same size as the message. Next, we show how to instantiate our construction under the low-noise LPN, McEliece, QC-MDPC, LWE, and CDH assumptions. Our instantiations based on the low-noise LPN, McEliece, and QC-MDPC assumptions are the first UC-secure OT protocols based on coding assumptions to achieve: 1) adaptive security, 2) optimal round complexity, 3) low communication and computational complexities. Previous results in this setting only achieved static security and used costly cut-and-choose techniques. Our instantiation based on CDH achieves adaptive security at the small cost of communicating only two more group elements compared to the gap-DH based Simplest OT protocol of Chou and Orlandi (Latincrypt '15), which only achieves static security in the ROM.
ePrint Report Leakage Bounds for Gaussian Side Channels Thomas Unterluggauer, Thomas Korak, Stefan Mangard, Robert Schilling, Luca Benini, Frank Gürkaynak, Michael Muehlberghuber
In recent years, many leakage-resilient schemes have been published. These schemes guarantee security against side-channel attacks given bounded leakage of the underlying primitive. However, it is a challenging task to reliably determine these leakage bounds from physical properties. In this work, we present a novel approach to finding reliable leakage bounds for side channels of cryptographic implementations when the input data complexity is limited, as in leakage-resilient schemes. By mapping results from communication theory to the side-channel domain, we show that the channel capacity is the natural upper bound for the mutual information (MI) to be learned from multivariate side channels with Gaussian noise, and that this upper bound is determined by the device-specific signal-to-noise ratio (SNR). We further investigate the case where attackers are capable of measuring the same side-channel leakage multiple times and performing signal averaging. Our results here indicate that the gain in the SNR obtained from averaging is exponential in the number of points of interest that are used from the leakage traces. Based on this, we illustrate how the side-channel capacity gives a tool to compute the minimum attack complexity needed to learn a certain amount of information from side-channel leakage. We then show that our MI bounds match reality by evaluating the MI in multivariate Gaussian templates built from practical measurements on an ASIC. We finally use our model to show the security of the Keccak-f[400]-based authenticated encryption scheme ISAP on this ASIC against power analysis attacks.
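As a point of reference for the capacity bound discussed above, the sketch below evaluates the classical capacity of a single additive white Gaussian noise channel, $C = \frac{1}{2}\log_2(1+\mathrm{SNR})$ bits, and the naive effect of averaging $N$ independent traces (SNR scaled by $N$). This is only the standard univariate formula under an independence assumption; the paper's multivariate bound and its treatment of the points of interest are more involved.

```python
# Capacity of a univariate AWGN channel and the naive effect of trace averaging.
# Assumes independent Gaussian noise so that averaging N traces scales the SNR by N.
import math

def awgn_capacity_bits(snr):
    return 0.5 * math.log2(1.0 + snr)

def capacity_after_averaging(snr, n_traces):
    return awgn_capacity_bits(snr * n_traces)

snr = 0.05                                   # illustrative device SNR
for n in (1, 10, 100, 1000):
    print(n, round(capacity_after_averaging(snr, n), 4))
```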
Code update is a very useful tool, commonly used in low-end embedded devices to improve existing functionality or patch discovered bugs and vulnerabilities. If the update protocol itself is not secure, it will only bring new threats to embedded systems. Thus, a secure code update mechanism is required. However, existing solutions either rely on strong security assumptions or result in considerable storage and computation consumption, which is not practical for resource-constrained embedded devices (e.g., in the context of the Internet of Things). In this work, we first propose to use intrinsic device characteristics (i.e., Physically Unclonable Functions, or PUFs) to design a practical and lightweight secure code update scheme. Our scheme not only ensures the freshness, integrity, confidentiality and authenticity of the code update, but also verifies that the update is installed correctly on a specific device without any malicious software. Cloned or counterfeit devices can be excluded, as the code update is bound to the unpredictable physical properties of the underlying hardware. Legitimate devices in an untrustworthy software state can be restored by filling suspect memory with PUF-derived random numbers. After update installation, the initiator of the code update is able to obtain the verifiable software state from the device, and the device can maintain a sustainable post-update security check by enforcing a secure call sequence. To demonstrate the practicality and feasibility, we also implement the proposed scheme on a low-end MCU platform (TI MSP430) using onboard SRAM and Flash resources.
Nonlinear permutations (S-boxes) are key components of block ciphers. The differential branch number measures the diffusion power of a permutation. The differential branch number of nonlinear permutations of $\mathbb{F}_2^n$ has not been analyzed, although it is well studied for linear permutations. In this paper we obtain a bound on the differential branch number of permutations (both linear and nonlinear) of $\mathbb{F}_2^n$. We also show that in the case of $\mathbb{F}_2^4$, the maximum differential branch number can be achieved only by affine permutations.
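For concreteness, the differential branch number of an S-box $S$ is commonly defined as $\min_{x \neq y}\big(\mathrm{wt}(x \oplus y) + \mathrm{wt}(S(x) \oplus S(y))\big)$, which the brute-force sketch below evaluates for a small S-box. The 4-bit PRESENT S-box used in the example is an illustrative choice and is not taken from the paper.

```python
# Brute-force differential branch number of a small S-box (standard definition).
def wt(x):
    return bin(x).count("1")          # Hamming weight

def diff_branch_number(sbox):
    n = len(sbox)
    return min(wt(x ^ y) + wt(sbox[x] ^ sbox[y])
               for x in range(n) for y in range(n) if x != y)

# Example: the 4-bit PRESENT S-box (illustrative choice)
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(diff_branch_number(PRESENT_SBOX))
```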
ePrint Report Decentralized Multi-Client Functional Encryption for Inner Product Jérémy Chotard, Edouard Dufour Sans, Duong Hieu Phan, David Pointcheval
Multi-input functional encryption is a very useful generalization of Functional Encryption, motivated by Goldwasser et al. at Eurocrypt '14. All the known constructions, however, rely on non-standard assumptions. Very recently, at Eurocrypt '17, Abdalla et al. considered a restricted case and proposed an efficient multi-input inner-product functional encryption scheme. In this paper, regarding the case of inner products, we argue that the multi-client setting (MCFE, for Multi-Client Functional Encryption), which borrows techniques from both Functional Encryption and Private Stream Aggregation, is better suited to real-life applications because of the strong restrictions implied by linear relations. We then propose a practical solution for Multi-Client Inner-Product Functional Encryption (IP-MCFE) which relies on the sole DDH assumption and supports adaptive corruptions. In MCFE schemes, each data input is encrypted by a different client, and the clients might not trust anybody for the functional decryption keys. It thus seems quite important to remove any authority, while allowing corruptions of the clients by the adversary. We therefore propose the notion of Decentralized Multi-Client Functional Encryption (DMCFE) and provide a generic construction from two MCFE schemes with particular properties. More concretely, by combining two instantiations of our IP-MCFE scheme, we can build an efficient and non-interactive decentralized scheme for inner products. Our construction relies on the SXDH assumption and supports adaptive corruptions in the random oracle model.
Lattice-based cryptography has been accepted as a promising candidate for public key cryptography in the age of quantum computing. Discrete Gaussian sampling is one of the fundamental operations in many lattice-based cryptosystems. In this paper, we consider a sub-problem of discrete Gaussian sampling: sampling from a centered discrete Gaussian distribution over the integers with positive standard deviation and zero center. We propose three alternative rejection sampling algorithms for centered discrete Gaussian distributions with standard deviation in two specific forms. The first algorithm is designed for the case where the standard deviation is a positive integer, and it requires neither pre-computation storage nor floating-point arithmetic. The other two algorithms are suited to a standard deviation that is an integer multiple of a fixed real number (approximately equal to 0.849). These two algorithms require fixed look-up tables of very small size (e.g. 128 bits and 320 bits, respectively), but they are much more efficient than the first algorithm. Experimental results show that our algorithms perform better than the two rejection sampling algorithms proposed by Karney in 2016 and by Ducas et al. in 2013, respectively. The expected number of random bits used by our algorithms is significantly smaller than that used by Karney's rejection sampling algorithm.
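To fix ideas, the sketch below is a textbook rejection sampler for the centered discrete Gaussian over the integers: a uniform proposal on a truncated support, accepted with probability proportional to $\exp(-x^2/2\sigma^2)$. It is only a baseline relying on floating-point arithmetic; the paper's algorithms are designed precisely to avoid this kind of floating-point computation and large tables.

```python
# Baseline rejection sampler for the centered discrete Gaussian over Z
# (uniform proposal on [-t, t] with tail cut t = ceil(10*sigma); illustrative only).
import math
import random

def sample_dgauss(sigma, tail=10.0):
    t = int(math.ceil(tail * sigma))
    while True:
        x = random.randint(-t, t)                       # uniform proposal
        if random.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x                                    # accept with Gaussian weight

samples = [sample_dgauss(3.2) for _ in range(10000)]
print(sum(samples) / len(samples))                      # mean should be close to 0
```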
