Here you can see all recent updates to the IACR webpage.

20 June 2019

ePrint Report
Linear Complexity of a Family of Binary pq^2-periodic Sequences From Euler Quotients
Jingwei Zhang, Shuhong Gao, Chang-An Zhao

We first introduce a family of binary $pq^2$-periodic sequences based on the Euler quotients modulo $pq$, where $p$ and $q$ are two distinct odd primes and $p$ divides $q - 1$. The minimal polynomials and linear complexities are determined for the proposed sequences provided that $2^{q-1} \not\equiv 1 \pmod{q^2}$. The results show that the proposed sequences have high linear complexities.
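As a small illustration of the underlying object (with toy parameters $p = 3$, $q = 7$, chosen so that $p$ divides $q - 1$), the Euler quotient modulo $w = pq$ can be computed exactly by working modulo $w^2$. The sketch below is an assumption-laden illustration, not the paper's construction; it also checks the logarithm-like additivity property that makes Euler quotients useful for building sequences:

```python
from math import gcd

def euler_quotient(u, w, phi_w):
    # q_w(u) = (u^phi(w) - 1) / w  mod w, defined for gcd(u, w) = 1;
    # computing u^phi(w) mod w^2 makes the division by w exact.
    assert gcd(u, w) == 1
    return (pow(u, phi_w, w * w) - 1) // w % w

p, q = 3, 7                  # toy distinct odd primes with p | q - 1
w = p * q                    # modulus pq = 21
phi_w = (p - 1) * (q - 1)    # Euler's totient of pq

# logarithm-like property: q_w(uv) = q_w(u) + q_w(v) (mod w)
u, v = 2, 5
assert euler_quotient(u * v, w, phi_w) == \
       (euler_quotient(u, w, phi_w) + euler_quotient(v, w, phi_w)) % w
```

This additivity is what makes Euler quotients behave like a discrete logarithm, which is the structural property exploited in linear-complexity analyses of quotient-based sequences.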

ePrint Report
Verifying Solutions to LWE with Implications for Concrete Security
Palash Sarkar, Subhadip Singha

A key step in Regev's (2009) reduction of the Discrete Gaussian Sampling (DGS) problem to that of solving the Learning With Errors (LWE) problem
is a statistical test required for verifying possible solutions to the LWE problem. Regev showed that asymptotically, the success probability
of this test is exponentially close to 1. In this work, we work out the concrete details of the success probability and its effect on the
tightness gap of the reduction. For lattice dimensions from 2 to 187149, the tightness gap is determined entirely by the inverse
of the success probability. The actual value of the tightness gap for such lattice dimensions is huge, showing that the reduction
cannot be used for choosing parameters for practical cryptosystems.

TRIFLE is a Round 1 candidate of the NIST Lightweight Cryptography Standardization process. In this paper, we present an interesting 1-round iterative differential characteristic of the underlying block cipher TRIFLE-BC used in TRIFLE, which holds with probability $2^{-3}$. Consequently, it allows mounting a distinguishing attack on TRIFLE-BC for up to 43 (out of 50) rounds with data complexity $2^{124}$ and time complexity $2^{124}$. Most importantly, with such an iterative differential characteristic, the forgery attack on TRIFLE can reach up to 21 (out of 50) rounds with data complexity $2^{63}$ and time complexity $2^{63}$. Finally, to achieve a key-recovery attack on reduced TRIFLE, we construct a differential characteristic covering three blocks by carefully choosing the positions of the iterative differential characteristic. As a result, we can mount a key-recovery attack on TRIFLE for up to 11 rounds with data complexity $2^{63}$ and time complexity $2^{104}$. Although the results in this paper do not threaten the security margin of TRIFLE, we hope they can help to further the understanding of TRIFLE's security.

ePrint Report
A Framework for Universally Composable Oblivious Transfer from One-Round Key-Exchange
Pedro Branco, Jintai Ding, Manuel Goulão, Paulo Mateus

Oblivious transfer is one of the main pillars of modern cryptography and plays a major role as a building block for other more complex cryptographic primitives.
In this work, we present an efficient and versatile framework for oblivious transfer (OT) using one-round key-exchange (ORKE), a special class of key exchange (KE) where only one message is sent from each party to the other. Our contributions can be summarized as follows:

i) We carefully analyze ORKE schemes and introduce new security definitions. Namely, we introduce a new class of ORKE schemes, called Alice-Bob one-round key-exchange (A-B ORKE), and the definitions of message and key indistinguishability.

ii) We show that OT can be obtained from A-B ORKE schemes fulfilling message and key indistinguishability. We accomplish this by designing a new efficient, versatile and universally composable framework for OT in the Random Oracle Model (ROM). The efficiency of the framework presented depends almost exclusively on the efficiency of the A-B ORKE scheme used since all other operations are linear in the security parameter. Universally composable OT schemes in the ROM based on new hardness assumptions can be obtained from instantiating our framework.

Examples are presented using the classical Diffie-Hellman KE, RLWE-based KE and Supersingular Isogeny Diffie-Hellman KE.
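To make the connection between one-round key exchange and OT concrete, here is a toy sketch in the spirit of Diffie-Hellman-based OT (a simplified Chou-Orlandi-style flow, not this paper's framework): the receiver's single KE message either is, or is blinded by, the sender's message, so the sender derives two keys of which the receiver can compute exactly one. The tiny prime, the hash-based KDF, and all names are illustrative assumptions.

```python
import hashlib
import random

p = 2**61 - 1          # toy prime modulus (illustrative only, not secure)
g = 3                  # toy generator

def kdf(x):
    # hash a group element down to a one-time pad
    return hashlib.sha256(str(x).encode()).digest()

def xor(key, msg):
    return bytes(k ^ m for k, m in zip(key, msg))

# Sender's single KE message
a = random.randrange(2, p - 1)
A = pow(g, a, p)

# Receiver's single KE message, for choice bit c:
# B = g^b if c = 0, else A * g^b  (the blinding hides c)
c = 1
b = random.randrange(2, p - 1)
B = pow(g, b, p) if c == 0 else A * pow(g, b, p) % p

# Sender derives two keys; the receiver can compute only k_c = H(A^b)
k0 = kdf(pow(B, a, p))                      # = H(g^{ab}) when c = 0
k1 = kdf(pow(B * pow(A, -1, p) % p, a, p))  # = H(g^{ab}) when c = 1
m0, m1 = b"msg-zero", b"msg-one!"
ct0, ct1 = xor(k0, m0), xor(k1, m1)

# Receiver decrypts the chosen ciphertext with its one derivable key
kc = kdf(pow(A, b, p))
chosen = xor(kc, ct1 if c == 1 else ct0)
assert chosen == (m1 if c == 1 else m0)
```

The receiver learns only its chosen message, and the sender cannot tell which one was chosen, since B is distributed identically for both choice bits.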

18 June 2019

Recently, Castryck, Lange, Martindale, Panny, and Renes proposed
CSIDH (pronounced "sea-side") as a candidate post-quantum
"commutative group action." It has attracted much attention and
interest, in part because it enables noninteractive
Diffie-Hellman-like key exchange with quite small
communication. Subsequently, CSIDH has also been used as a foundation
for digital signatures.

In 2003-04, Kuperberg and then Regev gave asymptotically subexponential quantum algorithms for "hidden shift" problems, which can be used to recover the CSIDH secret key from a public key. In 2013, Kuperberg gave a follow-up quantum algorithm called the collimation sieve ("c-sieve" for short), which improves the prior ones, in particular by using exponentially less quantum memory and offering more parameter tradeoffs. While recent works have analyzed the concrete cost of the original algorithms (and variants) against CSIDH, there seems not to have been any consideration of the c-sieve.

This work fills that gap. Specifically, we generalize Kuperberg's collimation sieve to work for arbitrary finite cyclic groups, provide some practical efficiency improvements, give a classical (i.e., non-quantum) simulator, run experiments for a wide range of parameters up to and including the actual CSIDH-512 group order, and concretely quantify the complexity of the c-sieve against CSIDH.

Our main conclusion is that the proposed CSIDH-512 parameters provide relatively little quantum security beyond what is given by the cost of quantumly evaluating the CSIDH group action itself (on a uniform superposition). The cost of key recovery is, for example, only about $2^{16}$ quantum evaluations using $2^{40}$ bits of quantumly accessible classical memory (plus insignificant other resources); moreover, these quantities can be traded off against each other. (This improves upon a recent estimate of $2^{32.5}$ evaluations and $2^{31}$ qubits of quantum memory, for a variant of Kuperberg's original sieve.) Therefore, under the plausible assumption that quantum evaluation does not cost very much more than indicated by a recent "best case" analysis, CSIDH-512 does not achieve the claimed 64 bits of quantum security, and it falls well short of the claimed NIST security level 1 when accounting for the MAXDEPTH restriction.

ePrint Report
Breaking Tweakable Enciphering Schemes using Simon's Algorithm
Sebati Ghosh, Palash Sarkar

The threat of the possible advent of quantum computers has motivated the cryptographic
community to search for quantum-safe solutions. There have been some works in the past few years
showing the vulnerability of symmetric-key cryptosystems in the quantum setting. Among these,
the works by Kuwakado et al. and Kaplan et al. use the quantum period finding procedure
called Simon's algorithm to attack several symmetric cryptosystems.
In this work, we use Simon's algorithm to break six tweakable enciphering schemes (TESs)
in the quantum setting. These are CMC, EME, XCB, TET, AEZ and FAST. All of them have
the usual proofs of security in the classical sense. A version of EME and a version of XCB are IEEE-standardised TESs.
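As a classical illustration of the attack's engine (not the paper's specific TES analyses): each run of Simon's quantum subroutine on a function with hidden period s returns a uniformly random y with y·s = 0 over GF(2), and s is then recovered by plain linear algebra over the collected samples. The sketch below simulates the quantum step classically; parameters and names are illustrative.

```python
import random

random.seed(1)
n = 6
s = 0b101101            # hidden period, known only to the simulator

def quantum_sample():
    # Simon's subroutine yields a uniform y with <y, s> = 0 over GF(2);
    # classically we simulate it by rejection sampling.
    while True:
        y = random.randrange(1 << n)
        if bin(y & s).count("1") % 2 == 0:
            return y

def recover_period():
    # Collect orthogonal samples until only one nonzero candidate remains.
    samples = []
    while True:
        samples.append(quantum_sample())
        cands = [x for x in range(1, 1 << n)
                 if all(bin(x & y).count("1") % 2 == 0 for y in samples)]
        if len(cands) == 1:
            return cands[0]

assert recover_period() == s
```

Brute-forcing the 2^n - 1 candidates here stands in for Gaussian elimination over GF(2), which is what one would use at realistic block sizes; since s is orthogonal to every sample by construction, the unique surviving candidate is always s.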

ePrint Report
On Deploying Secure Computing Commercially: Private Intersection-Sum Protocols and their Business Applications
Mihaela Ion, Ben Kreuter, Ahmet Erhan Nergiz, Sarvar Patel, Mariana Raykova, Shobhit Saxena, Karn Seth, David Shanahan, Moti Yung

In this work, we describe how to deploy a cryptographic secure computation protocol for routine use in industry. Based on our
experience, we identify major preliminaries and enabling factors which we found to be critical to the successful deployment of such technology as a practical and uniquely positioned method for solving the task at hand.

The specific technical problem that we tackled is that of computing Private Intersection-Sum. In this setting two parties hold datasets containing user identifiers, and one of the parties additionally has an integer value associated with each of its user identifiers. The parties want to learn (1) the number of users they have in common and (2) the sum of the integer values associated with these users, without revealing any more information about their private inputs. Private Intersection-Sum is not an arbitrary question, but rather arose naturally and was concretely defined based on a given central business need: computing aggregate conversion rate (or effectiveness) of advertising campaigns. This problem has both great practical value and important privacy considerations, and represents a type of analysis that occurs surprisingly commonly.

Among the factors that enabled our deployment, in this work we consider in more depth the technical issue of protocol choice and its performance implications. Specifically, we present a study involving three novel protocols for computing Private Intersection-Sum, which leverage three different basic protocol techniques including Random Oblivious Transfer, encrypted Bloom filters, and Diffie–Hellman style (Pohlig–Hellman specifically) double masking. We compare the three protocols under different instantiations of an additive homomorphic encryption, which is used as a building block in each protocol. We implement our constructions and compare their actual communication and computation overheads. Finally, we analyze the advantages of the DDH-based protocol which make it the solution of choice for our deployment setting.
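The Diffie-Hellman-style double-masking idea can be sketched in a few lines: each party raises hashed identifiers to its own secret exponent, the other party re-raises them, and matches among doubly-masked values reveal the intersection size and nothing else (the associated sum additionally requires the additively homomorphic layer, omitted here). The toy group, hash-to-group map, and names below are illustrative assumptions, not the deployed protocol.

```python
import hashlib
import random
from math import gcd

p = 2**61 - 1   # toy prime modulus (illustrative only, not secure)
g = 3

def hash_to_group(ident):
    # toy hash-to-group map: g^{H(id)} mod p
    e = int.from_bytes(hashlib.sha256(ident.encode()).digest(), "big")
    return pow(g, e % (p - 1), p)

def keygen():
    # exponent coprime to p - 1, so masking is a permutation of the group
    while True:
        k = random.randrange(2, p - 1)
        if gcd(k, p - 1) == 1:
            return k

ids1 = ["alice", "bob", "carol", "dave"]
ids2 = ["carol", "dave", "erin"]
k1, k2 = keygen(), keygen()

# round 1: each party sends its identifiers masked once
m1 = [pow(hash_to_group(x), k1, p) for x in ids1]
m2 = [pow(hash_to_group(x), k2, p) for x in ids2]
# round 2: each re-masks the other's values; shuffling would hide which is which
d1 = {pow(x, k2, p) for x in m1}   # H(id)^{k1 k2} for party 1's ids
d2 = {pow(x, k1, p) for x in m2}   # H(id)^{k1 k2} for party 2's ids
assert len(d1 & d2) == 2           # carol and dave appear in both sets
```

Because exponentiation commutes, common identifiers collide after double masking while everything else stays pseudorandom under DDH-style assumptions.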

ePrint Report
Neural Network Model Assessment for Side-Channel Analysis
Guilherme Perin, Baris Ege, Lukasz Chmielewski

Leakage assessment of cryptographic implementations with side-channel analysis relies on two important assumptions: the leakage model and the number of side-channel traces. In the context of profiled side-channel attacks, having these assumptions correctly defined is a sufficient first step to evaluate the security of a crypto implementation with template attacks. This method assumes that the features (leakages or points of interest) follow a univariate or multivariate Gaussian distribution for the estimation of the probability density function. When trained machine learning or neural network models are employed as classifiers for profiled attacks, a third assumption must be taken into account: the correctness of the trained model or learning parameters. It has already been shown that convolutional neural networks have advantages for side-channel analysis, such as bypassing trace misalignments and defeating first-order masking countermeasures in software implementations. However, if the trained model is incorrect and the test classification accuracy is close to random guessing, the first two assumptions (number of traces and leakage model) will be insufficient and the security of the target under evaluation may be overestimated. This could lead to wrong conclusions in leakage certifications. One way to verify whether the trained model is acceptable is to identify the input features that the neural network considers as points of interest. In this paper, we implement the assessment of neural network models using the proposed backward propagation path method. Our method is employed during the profiling phase as a tool to verify what the neural network is learning from side-channel traces and to support the optimization of hyper-parameters. The method is tested against a masked AES implementation. One of the main results highlights the importance of L2 regularization for the automated selection of points of interest from a neural network.

In this work, we present the first highly-optimized implementation of Supersingular Isogeny Key Encapsulation (SIKE), submitted to NIST's second round of the post-quantum standardization process, on 64-bit ARMv8 processors. To the best of our knowledge, this work is the first optimized implementation of SIKE round 2 on 64-bit ARM over SIKEp434 and SIKEp610. The proposed library is explicitly optimized for these two security levels and provides a constant-time implementation of the SIKE mechanism on ARMv8-powered embedded devices. We adapt different optimization techniques to reduce the total number of underlying arithmetic operations at the field level. In particular, benchmark results on embedded processors equipped with an ARM Cortex-A53 @ 1.536 GHz show that the entire SIKE round 2 key encapsulation mechanism takes only 84 ms at NIST's security level 1. Considering SIKE's extremely small key size in comparison to other candidates, our result implies that SIKE is one of the promising candidates for a key encapsulation mechanism on embedded devices in the quantum era.

ePrint Report
Leveraging Linear Decryption: Rate-1 Fully-Homomorphic Encryption and Time-Lock Puzzles
Zvika Brakerski, Nico Döttling, Sanjam Garg, Giulio Malavolta

We show how to combine a fully-homomorphic encryption scheme with linear decryption and a linearly-homomorphic encryption scheme to obtain constructions with new properties. Specifically, we present the following new results.

(1) Rate-1 Fully-Homomorphic Encryption: We construct the first scheme with message-to-ciphertext length ratio (i.e., rate) $1-\sigma$ for $\sigma = o(1)$. Our scheme is based on the hardness of the Learning with Errors (LWE) problem and $\sigma$ is proportional to the noise-to-modulus ratio of the assumption. Our building block is a construction of a new high-rate linearly-homomorphic encryption. One application of this result is the first general-purpose secure function evaluation protocol in the preprocessing model where the communication complexity is within an additive factor of the optimal insecure protocol. (2) Fully-Homomorphic Time-Lock Puzzles: We construct the first time-lock puzzle where one can evaluate any function over a set of puzzles without solving them, from standard assumptions. Prior work required the existence of sub-exponentially hard indistinguishability obfuscation.

ePrint Report
The Key is Left under the Mat: On the Inappropriate Security Assumption of Logic Locking Schemes
Mir Tanjidur Rahman, Shahin Tajik, M. Sazadur Rahman, Mark Tehranipoor, Navid Asadizanjani

Logic locking has been proposed as an obfuscation technique to protect outsourced IC designs from Intellectual Property (IP) piracy by untrusted entities in the design and fabrication process. It obfuscates the netlist by adding extra key-gates, to mislead an adversary, whose aim is to reverse engineer the netlist. The correct functionality will be obtained only if a correct key is applied to the key-gates. The key is written into a nonvolatile memory (NVM) after the fabrication by the IP owner.
In the past several years, the focus of the research community has mostly been on oracle-guided attacks on logic locking, such as SAT attacks, and on proposing proper countermeasures against such attacks. However, none of the reported research in the literature has challenged a more fundamental assumption of logic locking: the security of the key itself. In other words, if an adversary can read out the correct key after insertion, the security of the entire scheme is broken, no matter how robust the logic-locking scheme is. In this work, we first review possible adversaries for the locked circuits and their capabilities. Afterward, we demonstrate that even under the assumption of a tamper- and read-proof memory for the key storage, which is not vulnerable to any physical attacks, the key transfer between the memory and the key-gates through registers and buffers makes key extraction by an adversary possible. To support our claim, we implemented a proof-of-concept locked circuit as well as one of the common logic locking benchmarks on a 28 nm Flash-based FPGA and extracted their keys using optical probing. Finally, we discuss the feasibility of the proposed attack in different scenarios and propose potential countermeasures.
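The key-gate mechanism itself is simple to illustrate. Below is a toy sketch (assumed names and a made-up two-gate circuit, not the paper's benchmarks): XOR key-gates are spliced onto internal wires, with inverters absorbed into the design so that the correct key is not all-zeros; any wrong key corrupts the function on some input.

```python
def original(a, b, c):
    # toy combinational circuit: out = (a AND b) OR (NOT c)
    return (a & b) | (~c & 1)

def locked(a, b, c, key):
    k0, k1 = key
    # XOR key-gate on wire w1, with an absorbed inverter: correct k0 = 1
    w1 = (1 ^ (a & b)) ^ k0
    # XOR key-gate on wire w2: correct k1 = 0
    w2 = (~c & 1) ^ k1
    return w1 | w2

CORRECT_KEY = (1, 0)
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# the correct key restores the original function on every input ...
assert all(locked(*x, CORRECT_KEY) == original(*x) for x in inputs)
# ... and every wrong key disagrees with it somewhere
for wrong in [(0, 0), (0, 1), (1, 1)]:
    assert any(locked(*x, wrong) != original(*x) for x in inputs)
```

The attack described in the abstract sidesteps this entire game: instead of solving for the key logically, it reads the key bits as they move from the NVM into these key-gates.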

ePrint Report
Improved Building Blocks for Secure Multi-Party Computation based on Secret Sharing with Honest Majority
Marina Blanton, Ahreum Kang, Chen Yuan

Secure multi-party computation permits the evaluation of any desired functionality on private data without disclosing the data to the participants. It is gaining popularity due to the increasing collection of user, customer, or patient data and the need to analyze data sets distributed across different organizations without disclosing them. Because the adoption of secure computation techniques depends on their performance in practice, it is important to continue improving that performance. In this work, we focus on common non-trivial operations used by many types of programs, where any advances in performance would impact the runtime of programs that rely on them. In particular, we treat the operation of reading or writing an element of an array at a private location, as well as integer multiplication. The focus of this work is the secret sharing setting with honest majority in the semi-honest security model. We demonstrate the improvement of the proposed techniques over prior constructions via analytical and empirical evaluation.
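As a minimal illustration of the private-read primitive (in the easy linear case where the array itself is public and only the index is secret; the paper's constructions also handle secret arrays and writes), a private read is a dot product between the array and an additively secret-shared one-hot indicator of the index, which each party can evaluate locally on its shares. All names and parameters below are illustrative.

```python
import random

P = 2**61 - 1   # toy prime field

def share(x, n=3):
    # additive secret sharing of x mod P among n parties
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

array = [10, 20, 30, 40]    # public array
i = 2                       # private index, held only as shares of e below

# secret-share the one-hot indicator e of position i, element by element
elem_shares = [share(1 if j == i else 0) for j in range(len(array))]
per_party = list(zip(*elem_shares))     # party -> its share of each e_j

# each party locally computes <array, e> on its shares (a linear operation)
local = [sum(a_j * es[j] for j, a_j in enumerate(array)) % P
         for es in per_party]
assert reconstruct(local) == array[i]   # reconstructs a[2] = 30
```

Because the dot product is linear in the shared vector, no interaction is needed for this read; a secret array or a private write forces share multiplications, which is where the protocol improvements of the paper come in.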

ePrint Report
Homomorphism learning problems and its applications to public-key cryptography
Christopher Leonardi, Luis Ruiz-Lopez

We present a framework for the study of a learning problem over abstract groups, and introduce a new technique which allows for public-key encryption using generic groups. We prove, however, that in order to obtain a quantum-resistant encryption scheme, commutative groups cannot be used to instantiate this protocol.

ePrint Report
On the Quantum Complexity of the Continuous Hidden Subgroup Problem
Koen de Boer, Léo Ducas, Serge Fehr

The Hidden Subgroup Problem (HSP) aims at capturing all problems that may be solvable in quantum polynomial time following the blueprint of Shor's celebrated algorithm. Successful solutions to this problem over various commutative groups allow one to efficiently perform number-theoretic tasks such as factoring or finding discrete logarithms.

The latest successful generalization (Eisentrager et al. STOC 2014) considers the problem of finding a full-rank lattice as the hidden subgroup of the continuous vector space $\mathbb R^m$, even for large dimensions $m$. It unlocked new cryptanalytic algorithms (Biasse-Song SODA 2016, Cramer et al. EUROCRYPT 2016 and 2017), in particular to find mildly short vectors in ideal lattices.

The cryptanalytic relevance of such a problem raises the question of a more refined and quantitative complexity analysis. In the light of the increasing physical difficulty of maintaining a large entanglement of qubits, the degree of concern may be different whether the above algorithm requires only linearly many qubits or a much larger polynomial amount of qubits.

This is the question we start addressing with this work. We propose a detailed analysis of (a variation of) the aforementioned HSP algorithm, and conclude on its complexity as a function of all the relevant parameters. Incidentally, our work clarifies certain claims from the extended abstract of Eisentrager et al.

ePrint Report
On Security of Fiat-Shamir Signatures over Lattice in the Presence of Randomness Leakage
Yuejun Liu, Yongbin Zhou, Shuo Sun, Tianyu Wang, Rui Zhang

Leakage during the signing process, including partial key exposure and partial (or complete) randomness leakage, may be devastating for the security of digital signatures. In this work, we consider the security of lattice-based Fiat-Shamir signatures in the presence of randomness leakage. Based on a connection with the ILWE problem introduced by Bootle et al. at Asiacrypt 2018, we show that the key recovery attack with partial randomness leakage can be reduced to a variant of ILWE (which we call FS-ILWE in this work). The ILWE problem is the problem of recovering the secret vector ${\bf s}$ given polynomially many samples of the form $({\bf a}, \langle {\bf a}, {\bf s} \rangle + e)$, and it is proven solvable if the error $e$ is not superpolynomially larger than the inner product $\langle {\bf a}, {\bf s} \rangle$. In FS-ILWE, ${\bf a}$ is a sparse vector with a fixed number of non-zero elements, each of which is either $1$ or $-1$. Using the property that each coefficient of ${\bf a}$ has expectation zero and any two coefficients are uncorrelated, we show that FS-ILWE can also be solved in polynomial time.

Consequently, many lattice-based Fiat-Shamir signatures can be totally broken with only one bit leakage of randomness per signature. Our attack has been validated by conducting a series of experiments on two efficient NIST PQC submissions, Dilithium and qTESLA. The results indicate that the secret key of Dilithium and qTESLA can be recovered within seconds by running our method on an ordinary PC desktop.
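The statistical structure that makes FS-ILWE tractable can be seen in a toy experiment (illustrative parameters, noiseless samples for clarity, and a plain correlation estimator rather than the paper's method): because the sparse $\pm 1$ vector ${\bf a}$ has mean-zero, uncorrelated coefficients, $E[a_j \cdot b] = s_j \cdot k/n$, so simple averaging over many samples recovers the secret coordinate by coordinate.

```python
import random

random.seed(0)
n, k, N = 8, 3, 50000              # dimension, sparsity, number of samples
s = [3, -1, 4, 0, -2, 5, 1, -3]    # toy secret with small entries

def fs_ilwe_sample():
    # a has exactly k nonzero entries, each +-1; b = <a, s> (no noise here)
    a = [0] * n
    for pos in random.sample(range(n), k):
        a[pos] = random.choice((1, -1))
    return a, sum(ai * si for ai, si in zip(a, s))

samples = [fs_ilwe_sample() for _ in range(N)]
# E[a_j * b] = s_j * k / n, so rescaled empirical means estimate s
est = [round(sum(a[j] * b for a, b in samples) * n / (k * N))
       for j in range(n)]
assert est == s
```

With noisy samples the same averaging works but needs more samples, which is the quantitative heart of the polynomial-time claim.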

ePrint Report
Generalized Related-Key Rectangle Attacks on Block Ciphers with Linear Key Schedule
Boxin Zhao, Xiaoyang Dong, Willi Meier, Keting Jia, Gaoli Wang

This paper gives a new generalized key-recovery model of related-key rectangle attacks on block ciphers with linear key schedules. The model is quite optimized and applicable to various block ciphers with linear key schedules. As a proof of work, we apply the new model to two very important block ciphers, i.e. SKINNY and GIFT, which are basic modules of many candidates of the Lightweight Cryptography (LWC) standardization project by NIST.

For SKINNY, we reduce the complexity of the best previous 27-round related-tweakey rectangle attack on SKINNY-128-384 from $2^{331}$ to $2^{251.25}$. In addition, the first 28-round related-tweakey rectangle attack on SKINNY-128-384 is given, which gains one more round than before. For the candidate LWC SKINNY AEAD M1, we conduct a 24-round related-tweakey rectangle attack with a time complexity of $2^{123}$ and a data complexity of $2^{123}$ chosen plaintexts. For the case of GIFT-64, we give the first 24-round related-key rectangle attack with a time complexity $2^{91.58}$, while the best previous attack on GIFT-64 only reaches 23 rounds at most.

Satoshi Nakamoto's Blockchain allows one to build publicly verifiable and almost immutable ledgers, but sometimes privacy has to be factored in.

In this work, an original protocol is presented that allows sensitive data to be stored on a ledger, where its integrity may be publicly verified but its privacy is preserved, and owners can tightly manage the sharing of their information with efficient revocation.

ePrint Report
SimpleENC and SimpleENCsmall -- an Authenticated Encryption Mode for the Lightweight Setting
Shay Gueron, Yehuda Lindell

Block cipher modes of operation provide a way to securely encrypt using a block cipher, and different modes of operation achieve different tradeoffs of security, performance and simplicity. In this paper, we present a new authenticated encryption scheme that is designed for the lightweight cryptography setting, but can be used in standard settings as well. Our mode of encryption is extremely simple, requiring only a single block cipher primitive (in forward direction) and minimal padding, and supports streaming (online encryption). In addition, our mode achieves very strong security bounds, and can even provide good security when the block size is just 64 bits. As such, it is highly suitable for lightweight settings, where the lifetime of the key and/or overall amount encrypted may be high. Our new scheme can be seen as an improved version of CCM that supports streaming, and provides much better bounds.

ePrint Report
SIKE'd Up: Fast and Secure Hardware Architectures for Supersingular Isogeny Key Encapsulation
Brian Koziel, A-Bon Ackie, Rami El Khatib, Reza Azarderakhsh, Mehran Mozaffari-Kermani

In this work, we present a fast parallel architecture to perform supersingular isogeny key encapsulation (SIKE). We propose and implement a fast isogeny accelerator architecture that uses fast and parallelized isogeny formulas. On top of our isogeny accelerator, we build a novel architecture for the SIKE primitive, which provides both quantum and IND-CCA security. Since SIKE can support static keys, we propose and implement additional differential power analysis countermeasures. We synthesized this architecture on the Xilinx Virtex-7 and Kintex UltraScale+ FPGA families. Over Virtex-7 FPGA's, our constant-time implementations are roughly 20% faster than the state-of-the-art with a better area-time product. At the NIST security level 5 on a Kintex UltraScale+ FPGA, we can execute the SIKE protocol in 15.6 ms. This work continues to improve the speed of isogeny-based computations and also features the first full implementation of SIKE, with results applicable to NIST's post-quantum standardization process.

ePrint Report
A Comprehensive Formal Security Analysis and Revision of the Two-phase Key Exchange Primitive of TPM 2.0
Qianying Zhang, Shijun Zhao, Zhiping Shi, Yong Guan, Guohui Wang

The Trusted Platform Module (TPM) version 2.0, which has been demonstrated to be a key element of Industry 4.0, presents a two-phase key exchange primitive for secure communications between Industry 4.0 components. The key exchange primitive of TPM 2.0 can be used to implement three widely-standardized authenticated key exchange protocols: the Full Unified Model, the Full MQV, and the SM2 key exchange protocols. However, vulnerabilities have been found in all of these protocols. Fortunately, it seems that the protections offered by TPM chips can mitigate these vulnerabilities. In this paper, we present a security model which captures TPM's protections on keys and on protocols' computation environments, and in which multiple protocols can be analyzed in a unified way. Based on this unified security model, we give the first formal security analysis of the key exchange primitive of TPM 2.0. The analysis shows that, with the help of the hardware protections of TPM chips, the key exchange primitive indeed satisfies the well-defined security property of our model, but unfortunately only under some impractical limiting conditions that would prevent its application in real-world networks. To make TPM 2.0 applicable to real-world networks, we present a revision of its key exchange primitive which remains secure without the limiting conditions. We give a rigorous analysis of our revision, and the results show that it achieves not only the basic security property of modern AKE security models but also some further security properties.

One of the main goals of securing data transmission is the security of cloud data storage. In this paper, we describe several cryptographic techniques that can be used to address the relevant threats and security goals in analyzing cloud computing security. We discuss private and semi-trusted clouds, showing how cryptographic techniques allow researchers to design private clouds and to protect semi-trusted ones. Finally, we elaborate on semi-trusted clouds as they relate to real-world deployments of cloud resources, and on how optimizing cryptographic protocols leads to practical ways of securing this type of data.

ePrint Report
Upper Bounds on the Multiplicative Complexity of Symmetric Boolean Functions
Luís T. A. N. Brandão, Çağdaş Çalık, Meltem Sönmez Turan, René Peralta

A metric of special interest for Boolean functions is multiplicative complexity (MC): the minimum number of AND gates sufficient to implement a function with a Boolean circuit over the basis {XOR, AND, NOT}. In this paper we study the MC of symmetric Boolean functions, whose output is invariant under reordering of the input variables. Based on the Hamming-weight method of Muller and Preparata (1975), we introduce new techniques that yield circuits with fewer AND gates than the upper bounds given by Boyar et al. in 2000 and by Boyar and Peralta in 2008. We generate circuits for all such functions with up to 25 variables. As a special focus, we report concrete upper bounds for the MC of elementary symmetric functions $\Sigma^n_k$ and counting functions $E^n_k$ with up to $n = 25$ input variables. In particular, this allows us to answer two questions posed in 2008: both the elementary symmetric function $\Sigma^8_4$ and the counting function $E^8_4$ have MC 6. Furthermore, we show upper bounds for the maximum MC in the class of $n$-variable symmetric Boolean functions, for each $n$ up to 132.
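The functions in question have compact standard definitions, easy to state in code. The sketch below (a toy evaluation, not the paper's circuit constructions, and assuming the usual "exactly $k$ ones" definition of $E^n_k$) evaluates $\Sigma^n_k$ as the XOR of all $k$-wise AND products, and checks the Hamming-weight characterization $\Sigma^n_k(x) = \binom{\mathrm{wt}(x)}{k} \bmod 2$ that underlies the Muller-Preparata method:

```python
from itertools import combinations, product
from math import comb

def sigma(n, k, x):
    """Elementary symmetric function: XOR (sum mod 2) of all k-wise ANDs."""
    return sum(all(x[i] for i in s) for s in combinations(range(n), k)) % 2

def count_exact(n, k, x):
    """Counting function E^n_k: 1 iff exactly k inputs are 1 (assumed defn)."""
    return int(sum(x) == k)

# Symmetric functions depend only on the Hamming weight wt(x); by Lucas'
# theorem, sigma^n_k(x) = C(wt(x), k) mod 2. Verify exhaustively for n = 6.
for x in product((0, 1), repeat=6):
    assert sigma(6, 3, x) == comb(sum(x), 3) % 2
    assert count_exact(6, 3, x) == (sum(x) == 3)
```

That the output depends only on $\mathrm{wt}(x)$ is exactly what makes the Hamming-weight approach work: it suffices to compute the binary representation of the weight and then a function of those few bits.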

ePrint Report
Post-Quantum UC-Secure Oblivious Transfer in the Standard Model with Adaptive Corruptions
Olivier Blazy, Céline Chevalier, Quoc Huy Vu

Since the seminal result of Kilian, Oblivious Transfer has proven to be a fundamental primitive in cryptography. In such a scheme, a user is able to gain access to an element owned by a server without learning more than this single element, and without the server learning which element the user has accessed. This primitive has been studied extensively in the literature, yet very few schemes are based on lattices.

The recent NIST call for post-quantum encryption and signature schemes has revived interest in cryptographic protocols based on post-quantum assumptions, and with it the need for a secure post-quantum oblivious transfer scheme.

In this paper, we show how to construct an oblivious transfer scheme based on lattices, from a collision-resistant chameleon hash scheme (CH) and a CCA encryption scheme admitting a smooth projective hash function (SPHF). Our scheme does not rely on random oracles and provides UC security against adaptive corruptions, assuming reliable erasures.
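The paper's lattice-based construction is not reproduced in this abstract, but the OT functionality it realizes can be illustrated with the classic DDH-based protocol of Chou and Orlandi, sketched here in a toy group (parameters far too small to be secure; for intuition only):

```python
import hashlib, secrets

# Toy DDH group: safe prime p = 2q + 1, with g generating the order-q subgroup.
p, q, g = 23, 11, 2

def H(v):
    """Toy key derivation: hash a group element down to one pad byte."""
    return hashlib.sha256(str(v).encode()).digest()[0]

m = [0x41, 0x42]                 # sender's two messages
c = 1                            # receiver's secret choice bit

a = secrets.randbelow(q)         # sender samples a, publishes A
A = pow(g, a, p)

b = secrets.randbelow(q)         # receiver hides c inside B
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

# Sender derives one key per message: k0 from B^a, k1 from (B/A)^a.
k0 = H(pow(B, a, p))
k1 = H(pow(B * pow(A, p - 2, p) % p, a, p))
ct = [m[0] ^ k0, m[1] ^ k1]

# Receiver can derive only the key matching its choice: H(A^b).
kc = H(pow(A, b, p))
assert ct[c] ^ kc == m[c]
```

The receiver recovers exactly one message, and the sender sees only a uniformly distributed group element B, which hides the choice bit.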

Oblivious Transfer has played a crucial role in the design of secure multiparty computation. Nevertheless, there are not many practical solutions that achieve simulation-based security and can at the same time be instantiated from different assumptions.

In this work, we show how to construct highly efficient oblivious transfer in the random oracle model that achieves simulation-based security under a wide range of assumptions, among them DDH, CDH, LWE and coding-based assumptions. We revise classical security notions and propose a new security notion that we call endemic security. We construct an endemically secure oblivious transfer based on DDH that takes only a single communication round, which allows significant performance gains over previously known solutions. We also instantiate our oblivious transfer with the CRYSTALS-Kyber key agreement. Our implementation shows that both instantiations can be computed in under one millisecond.

Further, our new security notion also allows us to revisit, correct and improve existing oblivious transfer extension techniques. We provide an implementation of an oblivious transfer extension protocol in the ideal cipher model that is actively secure, processes up to 23 million OTs per second, and is up to 10 times faster than previous secure implementations. We also show that our framework can compute endemically secure OT extension and the base OTs in just two rounds.

ePrint Report
Commodity-Based 2PC for Arithmetic Circuits
Ivan Damgård, Helene Haagh, Michael Nielsen, Claudio Orlandi

We revisit the framework of Commodity-based Cryptography presented by Beaver (STOC'97) with a focus on updating the framework to fit with modern multiparty computation (MPC) protocols. We study the possibility of replacing the well-known preprocessing model with a commodity-based setting, where a set of independent servers (some of which may be corrupt) provide clients with correlated randomness. From this, the clients then distill correct and secure correlated randomness that they can use during the online phase of the MPC protocol.
Beaver showed how to do OT with semi-honest security in the commodity setting. We improve on Beaver's result as follows: in a model where one of the two clients and a constant fraction of the servers may be maliciously corrupted, we obtain unconditionally secure multiplication triples and oblivious linear evaluations (OLEs) such that the amortized communication cost of one triple/OLE is a constant number of field elements (when the field is sufficiently large). We also report on results from an implementation of the OLE protocol. Finally, we suggest an approach to the practical realization of a commodity-based system in which servers need no memory and can be accessed asynchronously by clients, but where a maliciously corrupt client still cannot obtain data it should not have access to.
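As background on the correlated randomness being distilled here: a multiplication triple $(a, b, c)$ with $c = ab$ lets two parties multiply secret-shared values using only local computation and two cheap openings. A minimal sketch of Beaver's online phase over a toy prime field (this illustrates the standard technique, not the paper's server-aided distillation):

```python
import random

P = 2**61 - 1  # a Mersenne prime; toy field for additive secret sharing

def share(v):
    """Split v into two additive shares mod P."""
    s0 = random.randrange(P)
    return s0, (v - s0) % P

x, y = 1234, 5678                        # secrets to be multiplied
a, b = random.randrange(P), random.randrange(P)
c = a * b % P                            # the preprocessed triple: c = a*b

x0, x1 = share(x); y0, y1 = share(y)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# The parties open d = x - a and e = y - b; these leak nothing about
# x and y because a and b are uniformly random masks.
d = (x0 - a0 + x1 - a1) % P
e = (y0 - b0 + y1 - b1) % P

# Local share computation: z = c + d*b + e*a + d*e = x*y.
z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0 adds the public d*e term
z1 = (c1 + d * b1 + e * a1) % P

assert (z0 + z1) % P == x * y % P
```

Correctness follows by expanding $c + db + ea + de$ with $d = x - a$ and $e = y - b$, which telescopes to $xy$; security of the online phase reduces entirely to the quality of the triple, which is why distilling correct triples from semi-trusted commodity servers suffices.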

ePrint Report
Arcula: A Secure Hierarchical Deterministic Wallet for Multi-asset Blockchains
Adriano Di Luzio, Danilo Francati, Giuseppe Ateniese

This work presents Arcula, a new design for hierarchical deterministic wallets that significantly improves the state of the art. Arcula is built on top of provably secure cryptographic primitives. It generates all its cryptographic secrets from a user-provided seed and enables the derivation of new signing public keys without requiring any secret information. Unlike other wallets, it achieves all these properties while being secure against privilege escalation. We prove that an attacker compromising an arbitrary number of users within an Arcula wallet cannot escalate his privileges and compromise users higher in the access hierarchy. Our design works out-of-the-box with any blockchain that enables the verification of signatures on arbitrary messages. We evaluate its usage in a real-world scenario on the Bitcoin Cash network.
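For intuition on "derivation of new signing public keys without requiring any secret information": the classic hierarchical-deterministic trick (the BIP32-style derivation that prior wallets use; this sketch is not Arcula's construction) tweaks a key pair by a public hash-derived offset, shown here in a toy discrete-log group:

```python
import hashlib, secrets

# Toy Schnorr-style group: safe prime p = 2q + 1, g generates the order-q subgroup.
# Parameters are far too small to be secure; for intuition only.
p, q, g = 23, 11, 2

def tweak(pk, index):
    """Public, hash-derived tweak for child `index` (toy key derivation)."""
    return int.from_bytes(hashlib.sha256(f"{pk}|{index}".encode()).digest(), "big") % q

sk = secrets.randbelow(q)          # parent secret key
pk = pow(g, sk, p)                 # parent public key

# Anyone holding only pk can derive the child *public* key ...
child_pk = (pk * pow(g, tweak(pk, 0), p)) % p
# ... while the secret holder derives the matching child secret key.
child_sk = (sk + tweak(pk, 0)) % q

assert pow(g, child_sk, p) == child_pk
```

Note the weakness of this classic approach: anyone learning child_sk can subtract the public tweak and recover the parent sk. That privilege-escalation path is exactly what Arcula's design is proven to close.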

14
June
2019

Event date: 4 May to 7 May 2020

Submission deadline: 2 November 2019

Notification: 18 January 2020

13
June
2019

ePrint Report
A Cautionary Note Regarding the Usage of Leakage Detection Tests in Security Evaluation
Carolyn Whitnall, Elisabeth Oswald

An established ingredient in the security evaluation of cryptographic devices is leakage detection, whereby physically observable characteristics such as the power consumption are measured during operation and statistically analysed in search of sensitive data dependencies. However, depending on its precise execution, this approach potentially suffers several drawbacks: a risk of false positives, difficulty in interpreting negative outcomes, and the infeasibility of covering every possible eventuality. Moreover, efforts to mitigate these drawbacks can be costly with respect to the data complexity of the testing procedures. In this work, we clarify the (varying) goals of leakage detection and assess how well geared current practice is towards meeting each of those goals. We introduce some new innovations on existing methodologies and make recommendations for best practice. Ultimately, though, we find that many of the obstacles cannot be fully overcome by existing statistical procedures, so the best one can do is to remain highly cautious and to clearly state the limitations of the methods used when reporting outcomes.
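The workhorse of leakage detection in practice is Welch's t-test on two groups of traces (e.g. fixed versus random inputs), flagging a sample point when |t| exceeds a threshold, conventionally 4.5. A minimal sketch on simulated data (the threshold and setup follow the standard TVLA-style convention, not a procedure from this paper):

```python
import math, random, statistics

def welch_t(xs, ys):
    """Welch's t-statistic for two independent samples of unequal variance."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

random.seed(1)
n = 2000
noise = lambda: random.gauss(0.0, 1.0)

# A "leaky" sample point: the fixed-input group carries a data-dependent shift.
leaky_fixed  = [noise() + 0.5 for _ in range(n)]
leaky_random = [noise() for _ in range(n)]
# A "quiet" sample point: both groups share the same distribution.
quiet_a = [noise() for _ in range(n)]
quiet_b = [noise() for _ in range(n)]

assert abs(welch_t(leaky_fixed, leaky_random)) > 4.5   # detection fires
assert abs(welch_t(quiet_a, quiet_b)) < 4.5            # no false positive here
```

The drawbacks the abstract lists are visible even in this toy: a single quiet point can exceed the threshold by chance across many sample points (false positives), and a t-statistic below threshold says nothing about leakage the test is not powered to see.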

Plantlet is a lightweight stream cipher designed by Mikhalev, Armknecht and Müller in IACR ToSC 2017. It has a Grain-like structure with two state registers of 40 and 61 bits. In spite of this, the cipher does not seem to lose security against generic Time-Memory-Data Tradeoff attacks, due to the novelty of its design. The cipher uses an 80-bit secret key and a 90-bit IV. In this paper, we present a key recovery attack on Plantlet that requires around $2^{76.26}$ Plantlet encryptions. The attack leverages the fact that two internal states of Plantlet that differ only in the 43rd LFSR location are guaranteed, with probability 1, to produce keystreams that are either equal or unequal in 45 specific locations. Thus an attacker can, with some probability, guess that when two segments of keystream blocks exhibit the 45-bit difference just mentioned, they have been produced by two internal states that differ only in the 43rd LFSR location. Thereafter, by solving a system of polynomial equations representing the keystream bits, the attacker can find the secret key if the guess was correct, or reach a contradiction if it was not. In the latter event, the procedure is repeated for other keystream blocks with the given difference. We show that this process, repeated a finite number of times, does indeed yield the value of the secret key.

ePrint Report
Decentralized Multi-authority Anonymous Authentication for Global Identities with Non-interactive Proofs
Hiroaki Anada

We propose a decentralized multi-authority anonymous authentication scheme in which authentication between a prover and a verifier is non-interactive. We give two security definitions: resistance against collusion attacks that cause misauthentication, and anonymity for privacy protection. We then give a construction under the principle of "commit-to-ID". We employ two building blocks, the structure-preserving signature scheme and the Groth-Sahai non-interactive proof system, both of which are based on bilinear groups. We give security proofs in the standard model which reduce to the security of the building blocks.