*15:17* [Pub][ePrint]
Advanced Algebraic Attack on Trivium, by Frank Quedenfeld and Christopher Wolf
This paper presents an algebraic attack against Trivium that breaks 625 rounds using only $4096$ bits of output, in an overall time complexity of $2^{42.2}$ Trivium computations.

While other attacks reach more rounds ($799$), this is a practical attack with very low data usage (down from $2^{40}$ output bits) and low computation time (down from $2^{62}$).

From another angle, our attack can be seen as a proof of concept of how far algebraic attacks can be pushed when several known techniques are combined into one implementation.

All attacks have been fully implemented and tested; our figures are therefore not the result of any potentially error-prone extrapolation.

*15:17* [Pub][ePrint]
THE UBERCRYPT FRAMEWORK: A NEW APPROACH IN CRYPTOSYSTEMS, by Joe Chiarella and Greg Mosher and Dr. J. Robert Buchanan
This article describes a novel cryptosystem making use of a small set of private security parameters and public initialization values to produce a pseudorandom byte stream with a large period. The byte stream can be used as a one-time stream cipher for securing communication between parties and for data archival. The cryptosystem uses geometry and number theory to generate a set of large prime integers, and then from the primes a column-periodic matrix of bytes, from which further calculation produces a pseudorandom, long-period byte stream. The cryptosystem is extensible in that additional private user-supplied security parameters can supplement the private geometric security parameters while adding strength in the process. The article discusses the design and operation of the system and lists many potential questions of interest to the community of mathematical and cryptological researchers. Foremost among these questions are determining the most appropriate method for assessing the cryptographic strength of the algorithm and determining any weaknesses in the security of the algorithm.
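The article leaves its stream construction to the full text, but the generic usage pattern it describes, a long-period pseudorandom byte stream used as a stream cipher, can be sketched as follows (the SHA-256 counter-mode generator here is a stand-in of my own, not the UberCrypt design):

```python
import hashlib
import itertools

def keystream(seed: bytes):
    """Toy pseudorandom byte stream (SHA-256 in counter mode).
    A placeholder for UberCrypt's matrix-derived stream, which
    this sketch does not attempt to reproduce."""
    for counter in itertools.count():
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        yield from block

def xor_stream(data: bytes, seed: bytes) -> bytes:
    """Encrypt or decrypt by XORing data against the keystream."""
    return bytes(b ^ k for b, k in zip(data, keystream(seed)))

ct = xor_stream(b"attack at dawn", b"shared secret")
pt = xor_stream(ct, b"shared secret")
assert pt == b"attack at dawn"
```

Because XOR is an involution, the same call both encrypts and decrypts; as with any stream cipher used this way, security rests on the keystream never being reused.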

*15:17* [Pub][ePrint]
Analysis of ARX Functions: Pseudo-linear Methods for Approximation, Differentials, and Evaluating Diffusion, by Kerry A. McKay and Poorvi L. Vora
This paper explores the approximation of addition mod $2^n$ by addition mod $2^w$, where $1 \le w \le n$, in ARX functions that use large words (e.g., 32-bit or 64-bit words). Three main areas are explored.

First, \emph{pseudo-linear approximations} aim to approximate the bits of a $w$-bit window of the state after some rounds.

Second, the methods used in these approximations are also used to construct truncated differentials.

Third, branch number metrics for diffusion are examined for ARX functions with large words, and variants of the differential and linear branch number characteristics based on pseudo-linear methods are introduced.

These variants are called the \emph{effective differential branch number} and \emph{effective linear branch number}, respectively.

Applications of these approximation, differential, and diffusion evaluation techniques are demonstrated on Threefish-256 and Threefish-512.
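The core observation behind the windowed approximation, that a $w$-bit window of a true mod-$2^n$ sum agrees with the windowed mod-$2^w$ sum except when a carry propagates into the window, can be checked numerically. This is a toy experiment of my own, not the paper's analysis, with arbitrarily chosen parameters:

```python
import random

random.seed(0)
N, W = 32, 8    # full word size n and window size w

def window(x: int, pos: int) -> int:
    """The W-bit window of x starting at bit position `pos`."""
    return (x >> pos) & ((1 << W) - 1)

match_rate = {}
for pos in (0, 8, 16):
    trials = 10_000
    hits = 0
    for _ in range(trials):
        a, b = random.getrandbits(N), random.getrandbits(N)
        exact = window((a + b) % (1 << N), pos)                # window of the true mod-2^n sum
        approx = (window(a, pos) + window(b, pos)) % (1 << W)  # windowed mod-2^w sum
        hits += (exact == approx)
    match_rate[pos] = hits / trials
print(match_rate)
```

The lowest window matches always, since no carry can enter at bit 0; higher windows disagree exactly when a carry crosses into bit `pos`, which happens for roughly half of random inputs.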

*15:17* [Pub][ePrint]
Efficiently Making Secure Two-Party Computation Fair, by Handan Kılınç and Alptekin Küpçü
Secure two-party computation cannot be fair in general against malicious adversaries unless a trusted third party (TTP) is involved or costly gradual-release protocols with a super-constant number of rounds are employed. Existing optimistic fair two-party computation protocols with constant rounds are either too costly to arbitrate (e.g., the TTP may need to re-do almost the whole computation) or require the use of electronic payments. Furthermore, most of the existing solutions were proven secure and fair separately, which, we show, may lead to insecurity overall.

We propose a new framework for fair and secure two-party computation that can be applied on top of any secure two-party computation protocol based on Yao's garbled circuits. We show that our fairness overhead is minimal compared to all known existing work. Furthermore, our protocol is fair even in terms of the work performed by Alice and Bob. We also prove our protocol fair and secure simultaneously, through a single simulator, which guarantees that our fairness extensions do not leak any private information. Lastly, we ensure that the TTP never learns the inputs or outputs of the computation. Therefore, even if the TTP becomes malicious and causes unfairness, the security of the underlying protocol is still preserved.

*15:17* [Pub][ePrint]
Leveled Fully Homomorphic Signatures from Standard Lattices, by Sergey Gorbunov and Vinod Vaikuntanathan and Daniel Wichs
In a homomorphic signature scheme, a user Alice signs some large dataset $x$ using her secret signing key and uploads the signed data to an untrusted remote server. The server can then run some computation $y=f(x)$ over the signed data and homomorphically derive a short signature $\sigma_{f,y}$ certifying that $y$ is the correct output of the computation $f$. Anybody can verify the tuple $(f, y, \sigma_{f,y})$ using Alice's public verification key and become convinced of this fact without having to retrieve the entire underlying data.

In this work, we construct the first (leveled) fully homomorphic signature schemes that can evaluate arbitrary circuits over signed data. Only the maximal depth $d$ of the circuits needs to be fixed a priori at setup, and the size of the evaluated signature grows polynomially in $d$, but is otherwise independent of the circuit size or the data size. Our solution is based on the (sub-exponential) hardness of the small integer solution (SIS) problem in standard lattices and satisfies full (adaptive) security. In the standard model, we get a scheme with large public parameters, whose size exceeds the total size of a dataset. In the random-oracle model, we get a scheme with short public parameters. In both cases, the schemes can be used to sign many different datasets. The complexity of verifying a signature for a computation $f$ is at least as large as that of computing $f$, but can be amortized when verifying the same computation over many different datasets. Furthermore, the signatures can be made context-hiding so as not to reveal anything about the data beyond the outcome of the computation.

These results offer a significant improvement in capabilities and assumptions over the best prior homomorphic signature schemes, which were limited to evaluating polynomials of constant degree.

As a building block of independent interest, we introduce a new notion called homomorphic trapdoor functions (HTDF), which conceptually unites homomorphic encryption and signatures. We construct HTDFs by relying on the techniques developed by Gentry et al. (CRYPTO '13) and Boneh et al. (EUROCRYPT '14) in the contexts of fully homomorphic and attribute-based encryption.

*15:17* [Pub][ePrint]
A key recovery attack to the scale-invariant NTRU-based somewhat homomorphic encryption scheme, by Eduardo Morais and Ricardo Dahab
In this paper we present a key recovery attack on the scale-invariant NTRU-based somewhat homomorphic encryption scheme proposed by Bos et al.~\cite{NTRUbasedFHE} in 2013. The attack allows us to compute the private key for $t>2$, when the private key is chosen with coefficients in $\{-1,0,1\}$. The efficiency of the attack is optimal, since it requires just one decryption oracle query, showing that if we don't look for this kind of vulnerability in homomorphic encryption constructions, we are likely to choose insecure parameters. The existence of a key recovery attack means that the scheme is not CCA1-secure. Indeed, almost every somewhat homomorphic construction proposed in the literature so far is vulnerable to this kind of attack, hence our result indicates that building CCA1-secure homomorphic schemes is not trivial. We also provide tables showing how the multiplicative depth is affected when the critical parameter $\Bkey$ is chosen in order to mitigate the attack.

*15:17* [Pub][ePrint]
Side Channel Power Analysis of an AES-256 Bootloader, by Colin O'Flynn and Zhizhang Chen
Side Channel Attacks (SCA) using power measurements are a known method of breaking cryptographic algorithms such as AES. Published research into attacks on AES frequently targets only AES-128, and often only the core Electronic Code-Book (ECB) version, without discussing surrounding issues such as triggering, or breaking the initialization vector. This paper demonstrates a complete attack on a secure bootloader, where the firmware files have been encrypted with AES-256-CBC. A Correlation Power Analysis (CPA) attack is performed on AES-256 to recover the complete 32-byte key, and a CPA attack is also used to attempt recovery of the initialization vector (IV).

The attack on the IV demonstrates a CPA attack against a single XOR operation, which has relevance to cryptographic functions beyond AES.
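The principle of CPA against a single XOR can be shown with a small simulation: predicted Hamming weights under each key-byte guess are correlated with (here, synthetic) power measurements, and the correct guess stands out. All parameters below are invented for the sketch and are not the paper's measured traces:

```python
import random
import statistics

random.seed(1)
HW = [bin(x).count("1") for x in range(256)]   # Hamming-weight table
SECRET = 0x3C                                  # byte XORed in, e.g. one IV byte

# Simulated measurements: each "trace" leaks the Hamming weight of
# (known input byte XOR secret byte) plus Gaussian noise.
inputs = [random.randrange(256) for _ in range(500)]
traces = [HW[p ^ SECRET] + random.gauss(0, 1.0) for p in inputs]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# CPA: the guess whose predicted Hamming weights correlate best with
# the measured power is taken as the recovered byte.
best = max(range(256), key=lambda g: corr([HW[p ^ g] for p in inputs], traces))
assert best == SECRET
```

Note that a guess differing from the secret in $d$ bits still correlates with strength roughly $(8-2d)/8$, which is why enough traces are needed to separate the right guess from its near neighbors.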

*21:17* [Pub][ePrint]
Protecting obfuscation against arithmetic attacks, by Eric Miles and Amit Sahai and Mor Weiss
Recently, the work of Garg et al. (FOCS 2013) gave the first candidate general-purpose obfuscator. This construction is built upon multilinear maps, also called a graded encoding scheme. Several subsequent works have shown that variants of this obfuscator achieve the strongest notion of security (VBB security) against "purely algebraic" attacks, namely attacks that respect the restrictions of the graded encoding scheme. While important, the scope of these works is somewhat limited due to the strong restrictions imposed on the adversary.

We propose and analyze another variant of the Garg et al. obfuscator in a setting that imposes fewer restrictions on the adversary, which we call the arithmetic setting. This setting captures a broader class of algebraic attacks than considered in previous works. Most notably, it allows unlimited additions across different "levels" of the encoding. In this setting, we present two results:

- First, in the arithmetic setting where the adversary is limited to creating only multilinear polynomials, we obtain an unconditional proof of VBB security.

- Second, in the arithmetic setting where the adversary can create polynomials of arbitrary degree, we prove VBB security under an assumption that is closely related to the Bounded Speedup Hypothesis of Brakerski and Rothblum (TCC 2014). We also give evidence that any unconditional proof of VBB security in this model would entail proving the algebraic analog of P \neq NP.

*21:17* [Pub][ePrint]
Sieving for Shortest Vectors in Ideal Lattices: a Practical Perspective, by Joppe W. Bos and Michael Naehrig and Joop van de Pol
Lattice-based cryptography is a promising candidate for providing cryptographic functions that remain secure in a post-quantum era. The security of lattice-based schemes relies on the hardness of lattice problems such as finding short vectors in integral lattices. In this work, we propose a new variant of the parallel Gauss sieve algorithm which can compute such short vectors. Our algorithm combines favorable properties of previous approaches: all vectors in the global list remain pairwise Gauss-reduced (reducing the list size and runtime), while this list is split up between the participating nodes (lowering the memory requirements per node). To demonstrate the benefits of this variant, we present an optimized implementation of our parallel algorithm using commonly available vector instruction set extensions. We conducted experiments with lattices of dimensions 80, 88, and 96, and the results show that the new approach outperforms previous parallel Gauss sieve algorithms. The implementation will be made available, and we hope that it will serve as a basis for additional experiments.

Almost all recent implementations of lattice-based cryptographic schemes use ideal lattices, and it is known that sieving algorithms can benefit from their additional algebraic structure: the list size can be reduced by considering vectors along with all their rotations. We show that ideal lattices allow further optimizations, which enhance the performance of sieving algorithms even more. We use the fast Fourier transform (FFT) to speed up the computation of inner products between a vector and the rotations of another. On a conventional academic computer cluster, our algorithm solved an SVP ideal lattice challenge in a negacyclic ideal lattice of dimension 128 in less than nine days using 1024 cores. This is more than twice as fast as the recent computation by Ishiguro et al. that solved the same challenge on a cluster with the same computer architecture, but without using large shared memory. This indicates that the FFT version can lead to a practical performance increase compared to a naive version using only rotations. Our results shed additional light on the security of schemes that rely on the hardness of computing short vectors in this special setting.
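The FFT idea can be illustrated in miniature (a NumPy sketch of my own under the negacyclic convention $x^n = -1$; the paper's optimized implementation is more involved): since a negacyclic rotation by $n$ positions negates the vector, the $n$ inner products of $v$ with all rotations of $w$ are a slice of one length-$2n$ circular cross-correlation, computable with three FFTs in $O(n \log n)$ instead of $O(n^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
v = rng.integers(-5, 6, n)
w = rng.integers(-5, 6, n)

def negacyclic_rotations(w):
    """All n negacyclic rotations of w: an entry shifted past the
    end re-enters negated (multiplication by x in Z[x]/(x^n + 1))."""
    rots, cur = [], w.copy()
    for _ in range(len(w)):
        rots.append(cur)
        cur = np.concatenate(([-cur[-1]], cur[:-1]))
    return np.array(rots)

# Direct computation: n inner products, O(n^2) work in total.
direct = negacyclic_rotations(w) @ v

# FFT computation: embed w as (w, -w), zero-pad v to length 2n, and
# take one circular cross-correlation.
wext = np.concatenate([w, -w])
vext = np.concatenate([v, np.zeros(n, dtype=int)])
corr = np.fft.ifft(np.conj(np.fft.fft(wext)) * np.fft.fft(vext)).real
fft_result = np.rint(corr[:n]).astype(int)

assert np.array_equal(direct, fft_result)
```

The embedding works because the $k$-th negacyclic rotation of $w$ is exactly the length-$2n$ vector $(w, -w)$ read cyclically with offset $k$, so one correlation covers all $n$ shifts at once.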