*13:17* [Pub][ePrint]
Towards Provably Secure Software Attestation, by Frederik Armknecht and Ahmad-Reza Sadeghi and Steffen Schulz and Christian Wachsmann
Software attestation has become a popular and challenging research topic at many established security conferences. It aims to verify the software integrity of (typically) resource-constrained embedded devices. However, for practical reasons, software attestation cannot rely on stored cryptographic secrets or dedicated trusted hardware. Instead, it exploits side-channel information, such as the time that the underlying device needs for a specific computation. Unfortunately, traditional cryptographic solutions and arguments are not applicable in this setting, making new approaches to design and analysis necessary. This is certainly one of the main reasons why the security properties and assumptions of software attestation have only been vaguely discussed and have never been formally treated, as is standard in modern cryptography. Thus, despite its popularity and its expected practical impact, a sound approach for designing secure software attestation schemes remains an important open problem.

We introduce the first formal security framework for software attestation and formalize various system and design parameters. Moreover, we present a generic software attestation scheme that captures most existing schemes in the literature. Finally, we analyze its security within our framework, yielding sufficient conditions for provably secure software attestation schemes. We regard these results as a first step towards putting software attestation on solid ground and as a starting point for further research.
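The timing side channel the abstract refers to can be pictured as a simple challenge-response protocol: the verifier sends a fresh nonce, the device checksums its own memory keyed by that nonce, and the verifier accepts only if the correct checksum arrives within a tight time bound. The sketch below is illustrative only (real schemes use carefully time-optimized checksum loops, not SHA-256), and all function names are hypothetical:

```python
import hashlib
import time

def attest(memory: bytes, nonce: bytes) -> bytes:
    """Device side: checksum over the device's memory, keyed by a fresh
    nonce so that precomputed answers are useless. (Toy stand-in: real
    schemes use time-optimal checksum loops, not a single hash call.)"""
    return hashlib.sha256(nonce + memory).digest()

def verify(expected_memory: bytes, nonce: bytes, response: bytes,
           elapsed: float, time_bound: float) -> bool:
    """Verifier side: accept only a correct checksum returned fast enough.
    An attacker that simulates the device or redirects memory reads is
    assumed to exceed the time bound."""
    reference = hashlib.sha256(nonce + expected_memory).digest()
    return response == reference and elapsed <= time_bound
```

Note that without the time bound the checksum alone proves nothing, since the device holds no secret: the timing measurement is what stands in for a cryptographic key.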

*13:17* [Pub][ePrint]
Security of Quantum-Readout PUFs against quadrature based challenge estimation attacks, by Boris Skoric and Allard P. Mosk and Pepijn W.H. Pinkse
The concept of quantum-secure readout of Physical Unclonable Functions (PUFs) has recently been realized experimentally in an optical PUF system. We analyze the security of this system under the strongest type of classical attack: the challenge estimation attack. The adversary performs a measurement on the challenge quantum state in order to learn as much about it as he can. Using this knowledge he then tries to reconstruct the challenge and to emulate the PUF.

We consider quadrature measurements, which are the most informative practical measurements known to us.

We prove that even under this attack the expected number of photons detected in the verification mechanism is approximately a factor $S+1$ too low; here $S$ is the Quantum Security Parameter, defined as the number of modes in the optical system divided by the number of photons in the challenge. The photon count allows for a reliable distinction between an authentic PUF and a challenge estimation attack.
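Numerically, the bound is easy to picture. Below is a toy calculation of the quantities the abstract defines (illustrative only; the function names are ours, not the paper's):

```python
def quantum_security_parameter(num_modes: int, num_photons: int) -> float:
    """S = (modes in the optical system) / (photons in the challenge)."""
    return num_modes / num_photons

def expected_detected_photons(num_photons: int, S: float, attack: bool) -> float:
    """An authentic PUF returns roughly all n challenge photons to the
    detector; under a quadrature challenge-estimation attack the expected
    count is lower by approximately a factor S+1 (the bound stated above)."""
    return num_photons / (S + 1) if attack else float(num_photons)
```

For, say, 1000 modes and 10 photons, S = 100, and an attacker yields an expected count of about 0.1 photons versus 10 for an authentic PUF, which is why a simple photon-count threshold separates the two cases reliably.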

*13:17* [Pub][ePrint]
Between a Rock and a Hard Place: Interpolating Between MPC and FHE, by Ashish Choudhury and Jake Loftus and Emmanuela Orsini and Arpita Patra and Nigel P. Smart
We present a computationally secure MPC protocol for threshold adversaries which is parametrized by a value L. When L=2 we obtain a classical form of MPC protocol in which interaction is required for multiplications; as L increases, interaction is reduced in that one requires interaction only after computing a higher-degree function. When L approaches infinity one obtains the FHE-based protocol of Gentry, which requires no interaction. Thus one can trade communication for computation in a simple way.

Our protocol is based on an interactive protocol for "bootstrapping" a somewhat homomorphic encryption scheme. The key contribution is that our protocol is highly communication efficient, enabling us to obtain reduced communication compared to traditional MPC protocols even for relatively small values of L.

*10:17* [Pub][ePrint]
Power Analysis of Hardware Implementations Protected with Secret Sharing, by Guido Bertoni and Joan Daemen and Nicolas Debande and Thanh-Ha Le and Michael Peeters and Gilles Van Assche
We analyze the security of three-share hardware implementations against differential power analysis and advanced variants such as mutual information analysis. We present dedicated distinguishers that allow the recovery of secret key bits from any cryptographic primitive that is implemented as a sequence of quadratic functions. Starting from the analytical treatment of such distinguishers and information-theoretic arguments, we derive the success probability and required number of traces in the presence of algorithmic noise. We show that attacks on three-share hardware implementations require a number of traces that scales with the third power of the algorithmic noise variance. Finally, we apply and test our model on Keccak in a keyed mode.
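The cubic scaling law has a direct practical reading: doubling the algorithmic noise variance multiplies the attack cost by eight. A one-line model (the constant is scheme-dependent and hypothetical here):

```python
def required_traces(noise_variance: float, c: float = 1.0) -> float:
    """Number of traces needed against a three-share implementation, per
    the scaling law above: cubic in the algorithmic noise variance. The
    constant c depends on the primitive and setup (hypothetical here)."""
    return c * noise_variance ** 3
```

So pushing more parallel, unrelated computation into each clock cycle (raising the algorithmic noise) buys security at a cubic rate against these distinguishers, compared to the linear rate one expects for unprotected implementations.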

*10:17* [Pub][ePrint]
Why Proving HIBE Systems Secure is Difficult, by Allison Lewko and Brent Waters
Proving security of Hierarchical Identity-Based Encryption (HIBE) and Attribute-Based Encryption (ABE) schemes is a challenging problem. There are multiple well-known schemes in the literature where the best known (adaptive) security proofs degrade exponentially in the maximum hierarchy depth. However, we do not have a rigorous understanding of why better proofs are not known. (For ABE, the analog of hierarchy depth is the maximum number of attributes used in a ciphertext.)

In this work, we define a certain commonly found checkability property on ciphertexts and private keys. Roughly, the property states that any two different private keys that are both "supposed to" decrypt a ciphertext will decrypt it to the same message. We show that any simple black-box reduction to a non-interactive assumption for a HIBE or ABE system that has this property will suffer an exponential degradation of security.

*10:17* [Pub][ePrint]
Hardness of SIS and LWE with Small Parameters, by Daniele Micciancio and Chris Peikert
The Short Integer Solution (SIS) and Learning With Errors (LWE) problems are the foundations for countless applications in lattice-based cryptography, and are provably as hard as approximate lattice problems in the worst case. An important question from both a practical and theoretical perspective is how small their parameters can be made while preserving their hardness. We prove two main results on SIS and LWE with small parameters. For SIS, we show that the problem retains its hardness for moduli $q \geq \beta \cdot n^{\delta}$ for any constant $\delta > 0$, where $\beta$ is the bound on the Euclidean norm of the solution. This improves upon prior results which required $q \geq \beta \cdot \sqrt{n \log n}$, and is essentially optimal since the problem is trivially easy for $q \leq \beta$. For LWE, we show that it remains hard even when the errors are small (e.g., uniformly random from $\{0,1\}$), provided that the number of samples is small enough (e.g., linear in the dimension $n$ of the LWE secret). Prior results required the errors to have magnitude at least $\sqrt{n}$ and to come from a Gaussian-like distribution.
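The small-error LWE regime described above is easy to instantiate concretely. A minimal sketch (parameter choices and names are ours; generating samples of course says nothing about hardness by itself):

```python
import random

def lwe_samples(n: int, q: int, m: int, seed: int = 0):
    """Generate m LWE samples (a, b = <a,s> + e mod q) with binary errors
    e drawn uniformly from {0,1}. In the regime analyzed above, hardness
    requires the number of samples m to be small, e.g. linear in n."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]       # the LWE secret
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]   # uniform public vector
        e = rng.randrange(2)                       # error from {0,1}
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples
```

The sample bound is essential here: given sufficiently many samples, LWE with such small errors is known to be solvable (e.g. by linearization-style attacks), which is exactly why the result above restricts m.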

*10:17* [Pub][ePrint]
Related-key Attacks Against Full Hummingbird-2, by Markku-Juhani O. Saarinen
We present attacks on full Hummingbird-2 which are able to recover the 128-bit secret keys of two black-box cipher instances that have a certain type of low-weight XOR difference in their keys. We call these highly correlated keys, as they produce the same ciphertext with a significant probability. The complexity of our main chosen-IV key-recovery attack is $2^{64}$. The first 64 bits of the key can be independently recovered with only $2^{36}$ effort. This is the first sub-exhaustive attack on the full cipher under two related keys. Our attacks use some novel tricks and techniques which are made possible by Hummingbird-2's unique word-based structure. We have verified the correctness and complexity of our attacks by fully implementing them.

We also discuss enabling factors of these attacks and describe an alternative design for the WD16 nonlinear keyed function which is resistant to attacks of this type. The new experimental function replaces S-boxes with simple $\chi$ functions.

*10:17* [Pub][ePrint]
Relation collection for the Function Field Sieve, by Jérémie Detrey and Pierrick Gaudry and Marion Videau
In this paper, we focus on the relation collection step of the Function Field Sieve (FFS), which is to date the best algorithm known for computing discrete logarithms in small-characteristic finite fields of cryptographic sizes. Denoting such a finite field by GF(p^n), where p is much smaller than n, the main idea behind this step is to find polynomials of the form a(t)-b(t)x in GF(p)[t][x] which, when considered as principal ideals in carefully selected function fields, can be factored into products of low-degree prime ideals. Such polynomials are called "relations", and current record-sized discrete-logarithm computations need billions of them. Collecting relations is therefore a crucial and extremely expensive step in FFS, and a practical implementation thereof requires heavy use of cache-aware sieving algorithms, along with efficient polynomial arithmetic over GF(p)[t]. This paper presents the algorithmic and arithmetic techniques which were put together as part of a new public implementation of FFS, aimed at medium- to record-sized computations.
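The smoothness condition at the heart of relation collection can be illustrated in miniature. The toy below works over GF(2)[t], packing a polynomial into a Python int (bit i = coefficient of t^i), and tests whether a polynomial factors entirely into irreducibles of degree at most a bound B. The real FFS applies this kind of test (via cache-aware sieving, not trial division) to the norms of candidate a(t)-b(t)x; everything here is our own toy, not the paper's implementation:

```python
def gf2_degree(p: int) -> int:
    """Degree of a GF(2)[t] polynomial packed into an int (-1 for zero)."""
    return p.bit_length() - 1

def gf2_divmod(a: int, m: int):
    """Carry-less polynomial division: return (quotient, remainder)."""
    q, dm = 0, gf2_degree(m)
    while a and gf2_degree(a) >= dm:
        shift = gf2_degree(a) - dm
        q |= 1 << shift
        a ^= m << shift
    return q, a

def irreducibles_up_to(max_deg: int):
    """All irreducibles over GF(2) of degree 1..max_deg, by trial division
    against previously found irreducibles (fine at toy sizes)."""
    irr = []
    for p in range(2, 1 << (max_deg + 1)):
        d = gf2_degree(p)
        if all(gf2_divmod(p, f)[1] != 0
               for f in irr if gf2_degree(f) <= d // 2):
            irr.append(p)
    return irr

def is_smooth(p: int, bound: int, irr) -> bool:
    """Does p factor entirely into irreducibles of degree <= bound?
    This is the (one-sided, toy) 'relation' condition of the FFS."""
    for f in irr:
        if gf2_degree(f) > bound:
            break
        while True:
            q, r = gf2_divmod(p, f)
            if r != 0:
                break
            p = q
    return gf2_degree(p) <= 0
```

For example, t^4 + t = t (t+1) (t^2+t+1) is 2-smooth, while the irreducible t^3 + t + 1 is not; relation collection amounts to finding billions of (a, b) pairs whose associated norms pass tests of exactly this kind on both sides.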