*13:17* [Pub][ePrint]
Simplification/complication of the basis of prime Boolean ideal, by Alexander Rostovtsev and Anna Shustrova
A prime Boolean ideal has a basis of the form (x1 + e1, ..., xn + en) consisting of linear binomials; its variety is the single point (e1, ..., en). Complicating the basis means replacing the simple linear binomials with non-linear polynomials in such a way that the variety of the ideal stays fixed. Simplifying the basis means recovering a basis of linear binomials from a complicated one while preserving its variety. Since any ideal is a module over the ring of Boolean polynomials, a change of basis is uniquely determined by an invertible matrix over the ring.

Algorithms for invertibly simplifying and complicating the basis of a Boolean ideal while keeping the size of the basis fixed are proposed. The simplification algorithm optimizes the choice of pairs of polynomials during the Groebner basis computation and eliminates variables without using resultants.
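As a toy illustration of the complication step (not the authors' algorithm), the following Python sketch represents Boolean polynomials over GF(2) as sets of monomials, complicates the basis of the point (1, 1) with a unimodular matrix over the ring, and checks by brute force that the variety is unchanged:

```python
from itertools import product

# A Boolean polynomial in GF(2)[x0..x_{n-1}]/(xi^2 + xi) is a set of monomials;
# each monomial is a frozenset of variable indices (frozenset() = the constant 1).

def padd(p, q):           # addition = symmetric difference (XOR of GF(2) coefficients)
    return p ^ q

def pmul(p, q):           # multiplication, using xi^2 = xi
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {m1 | m2}   # coefficients live in GF(2)
    return out

def peval(p, point):      # evaluate at a 0/1 point
    return sum(all(point[i] for i in m) for m in p) % 2

def variety(basis, n):    # brute-force common zero set over {0,1}^n
    return {pt for pt in product((0, 1), repeat=n)
            if all(peval(f, pt) == 0 for f in basis)}

ONE = frozenset()
x = lambda i: {frozenset({i})}

# simple basis for the point (1, 1): (x0 + 1, x1 + 1)
f1 = padd(x(0), {ONE})
f2 = padd(x(1), {ONE})

# complicate with the invertible matrix [[1, x0], [0, 1]] over the ring:
# g1 = f1 + x0*f2 = 1 + x0*x1, g2 = f2; the matrix is unimodular, so the
# ideal -- and hence the variety -- is unchanged
g1 = padd(f1, pmul(x(0), f2))
g2 = f2

assert variety([f1, f2], 2) == variety([g1, g2], 2) == {(1, 1)}
```

Here g1 is the non-linear polynomial 1 + x0*x1, yet the ideal still vanishes exactly at (1, 1); simplification would invert the matrix to recover the linear basis.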

*19:17* [Pub][ePrint]
Boosting Higher-Order Correlation Attacks by Dimensionality Reduction, by Nicolas Bruneau and Jean-Luc Danger and Sylvain Guilley and Annelie Heuser and Yannick Teglia
Multi-variate side-channel attacks make it possible to break higher-order masking protections by combining several leakage samples. But how can one optimally extract all the information contained in all possible $d$-tuples of points?

In this article, we introduce preprocessing tools that answer this question.

We first show that maximizing the higher-order CPA coefficient is equivalent to finding the maximum of the covariance.

We apply this equivalence to the problem of reducing a trace's dimensionality by taking linear combinations of its samples.

Then we establish the link between this problem and Principal Component Analysis. In a second step, we present the optimal solution to the problem of maximizing the covariance.

We also theoretically and empirically compare these methods.

We finally apply them to real measurements, publicly available from the DPA Contest v4, to evaluate how the proposed techniques improve the second-order CPA (2O-CPA).
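A minimal numpy sketch of the covariance-maximizing preprocessing (a simplification of the paper's setting; the trace data, leakage model, and leaky sample positions below are invented for illustration): with centered traces and model, cov(w . X, y) = w . c, where c_j = cov(X_j, y), so by Cauchy-Schwarz the optimal unit weight vector is w = c / ||c||:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy traces: q queries, D samples per trace; the model y leaks into two samples
q, D = 2000, 8
y = rng.integers(0, 2, size=q).astype(float)   # hypothetical leakage model values
X = rng.normal(size=(q, D))                    # measurement noise
X[:, 2] += 0.8 * y                             # leaky sample 1
X[:, 5] += 0.5 * y                             # leaky sample 2

# covariance of each trace sample with the model
Xc = X - X.mean(axis=0)
yc = y - y.mean()
c = Xc.T @ yc / q                              # vector of cov(X_j, y)

# unit weight vector maximizing cov(w . X, y): w = c / ||c|| (Cauchy-Schwarz)
w = c / np.linalg.norm(c)

def cov_with_model(v):
    return np.dot(Xc @ v, yc) / q

best = cov_with_model(w)
# no other unit direction does better
for _ in range(100):
    r = rng.normal(size=D)
    assert cov_with_model(r / np.linalg.norm(r)) <= best + 1e-12
```

The projection automatically weights the leaky samples in proportion to how strongly they covary with the model, which is the dimensionality-reduction effect the abstract describes.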

*19:17* [Pub][ePrint]
Trapdoor Computational Fuzzy Extractors, by Charles Herder and Ling Ren and Marten van Dijk and Meng-Day (Mandel) Yu and Srinivas Devadas
We describe a method of cryptographically secure key extraction from a noisy biometric source. The computational security of our method can be clearly argued through the hardness of Learning Parity With Noise (LPN). We use a fuzzy commitment scheme, so the extracted key is by definition chosen to have uniformly random bits. The biometric source is used as the noise term in the LPN problem. A key idea in our construction is to use additional ``confidence'' information produced by the source for polynomial-time key recovery even in high-noise settings, i.e., $\Theta(m)$ errors, where $m$ is the number of biometric bits. The confidence information is never exposed and is used as a noise-avoiding trapdoor to exponentially reduce key recovery complexity. Previous computational fuzzy extractors were unable to correct $\Theta(m)$ errors or would run in time exponential in $m$.

A second key result is that we relax the requirement on the noise in the LPN problem, which relaxes the requirement on the biometric source. Through a reduction argument, we show that in the LPN problem, correlation in the bits generated by the biometric source can be viewed as a bias on the bits, which affects the security parameter, but not the security of the overall construction.

Using a silicon Physical Unclonable Function (PUF) as a concrete example, we show how keys can be extracted securely and efficiently even under extreme environmental variation.
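A minimal sketch of the trapdoor idea under simplifying assumptions (the confidence threshold, the sizes, and the perfect reliability of confident bits are invented for illustration; this is not the authors' full scheme): publish an LPN sample b = As + e over GF(2) with the biometric as the noise e, keep the confidence values private, and at recovery time restrict to rows whose noise bit is confidently zero, so the instance degenerates to Gaussian elimination:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 256                                   # key length, biometric bits

A = rng.integers(0, 2, size=(m, n))              # public matrix
s = rng.integers(0, 2, size=n)                   # extracted key (secret)
conf = rng.random(m)                             # per-bit confidence (kept private)
# confident bits are assumed stable (zero noise); the rest flip at random
e = np.where(conf > 0.7, 0, rng.integers(0, 2, size=m))
b = (A @ s + e) % 2                              # public helper data (LPN sample)

def solve_gf2(M, v):
    """Gaussian elimination over GF(2); returns x with M x = v (mod 2)."""
    M = M.copy() % 2
    v = v.copy() % 2
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if M[i, c]), None)
        if pr is None:
            continue
        M[[r, pr]] = M[[pr, r]]
        v[[r, pr]] = v[[pr, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
                v[i] ^= v[r]
        pivots.append(c)
        r += 1
    x = np.zeros(cols, dtype=int)
    for i, c in enumerate(pivots):
        x[c] = v[i]
    return x

# trapdoor decoding: keep only rows whose noise bit is confidently zero,
# turning the hard LPN instance into plain linear algebra
idx = conf > 0.7
s_rec = solve_gf2(A[idx], b[idx])
assert np.array_equal(s_rec, s)
```

Without the confidence trapdoor, recovering s from (A, b) is exactly the LPN problem, which is what gives the construction its computational security argument.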

*19:17* [Pub][ePrint]
Fully Secure Self-Updatable Encryption in Prime Order Bilinear Groups, by Pratish Datta and Ratna Dutta and Sourav Mukhopadhyay
In CRYPTO 2012, Sahai et al. raised the concern that, in a cloud control system, revocation of past keys should also be accompanied by updating previously generated ciphertexts in order to prevent unread ciphertexts from being read by revoked users. Self-updatable encryption (SUE), introduced by Lee et al. in ASIACRYPT 2013, is a newly developed cryptographic primitive that realizes ciphertext update as an inbuilt functionality and thus improves the efficiency of key revocation and time evolution in cloud management. In SUE, a user can decrypt a ciphertext associated with a specific time if and only if the user possesses a private key corresponding to either the same time as that of the ciphertext or some future time. Furthermore, a ciphertext attached to a certain time can be updated to a new one attached to a future time using only public information.

The SUE schemes available in the literature are either (a) fully secure but developed in a composite order bilinear group setting under highly non-standard assumptions, or (b) designed in prime order bilinear groups but only selectively secure. This paper presents the first fully secure SUE scheme in prime order bilinear groups under standard assumptions, namely the Decisional Linear and the Decisional Bilinear Diffie-Hellman assumptions. As pointed out by Freeman (EUROCRYPT 2010) and Lewko (EUROCRYPT 2012), the communication, storage, and computational efficiency of prime order bilinear groups is much higher than that of composite order bilinear groups at an equivalent security level. Consequently, our SUE scheme is far more cost-effective than the existing fully secure SUE schemes.

*19:17* [Pub][ePrint]
Garbled RAM From One-Way Functions, by Sanjam Garg and Steve Lu and Rafail Ostrovsky and Alessandra Scafuro
Yao's garbled circuit construction is a fundamental construction in cryptography, and recent efficiency optimizations have brought it much closer to practice. However, these constructions work only for circuits, and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of ``garbled RAM'' as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits such that the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program rather than its circuit size. Known realizations of this primitive either need to rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014).

In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time, based only on the minimal and necessary assumption that one-way functions exist. Our scheme allows for garbling multiple programs executed on a persistent database, and has the additional feature that program garbling is decoupled from database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and only later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs themselves.

*19:17* [Pub][ePrint]
Public-Coin Differing-Inputs Obfuscation and Its Applications, by Yuval Ishai, Omkant Pandey, Amit Sahai
Differing inputs obfuscation (diO) is a strengthening of indistinguishability obfuscation (iO) that has recently found applications to improving the efficiency and generality of obfuscation, functional encryption, and related primitives. Roughly speaking, a diO scheme ensures that the obfuscations of two efficiently generated programs are indistinguishable not only if the two programs are equivalent, but also if it is hard to find an input on which their outputs differ. The above ``indistinguishability'' and ``hardness'' conditions should hold even in the presence of an auxiliary input that is generated together with the programs.

The recent works of Boyle and Pass (ePrint 2013) and Garg et al. (Crypto 2014) cast serious doubt on the plausibility of general-purpose diO with respect to general auxiliary inputs. This leaves open the existence of a variant of diO that is plausible, simple, and useful for applications.

We suggest such a diO variant that we call {\em public-coin} diO. A public-coin diO restricts the original definition of diO by requiring the auxiliary input to be a public random string which is given as input to all relevant algorithms. In contrast to standard diO, we argue that it remains very plausible that current candidate constructions of iO for circuits satisfy the public-coin diO requirement.

We demonstrate the usefulness of the new notion by showing that several applications of diO can be obtained by relying on the public-coin variant instead. These include constructions of {\em succinct} obfuscation and functional encryption schemes for Turing Machines, where the size of the obfuscated code or keys is essentially independent of the running time and space.

*19:17* [Pub][ePrint]
HaTCh: Hardware Trojan Catcher, by Syed Kamran Haider and Chenglu Jin and Masab Ahmad and Devu Manikantan Shila and Omer Khan and Marten van Dijk
With the growing trend of repeated reuse of existing design components, the use of third party IP cores has become common practice in the Electronic Design Automation (EDA) industry. However, third party IP cores are known to potentially contain malware or hardware trojans which could compromise the security of the whole system.

We provide a formal classification of possible hardware trojans according to their different properties and analyze in detail the class coined \textit{XX-St-D-F}, which is the collection of trojans that (1) use \textit{\underline{St}}andard I/O channels to deliver a malicious payload, (2) are embedded in an IP core with \textit{\underline{D}}eterministic functionality, and (3) are designed to violate the \textit{\underline{F}}unctional specification. We provide a hierarchy of subclasses $H_{t,1}\subseteq H_{t,2}\subseteq \ldots \subseteq H_{t,d} \subseteq \ldots \subseteq \underset{d \ge 1}{\cup} H_{t,d}=H_t \subseteq \underset{t \ge 0}{\cup} H_t=$\textit{XX-St-D-F}, where $t$ indicates the number of clock cycles between ``activating/triggering'' the trojan and the moment a malicious payload is delivered, and where $d$ stands for the number of wires involved in the ``trigger signal''. We show that all \textit{XX-St-D-F} trojans benchmarked by Trusthub~\cite{trust_hub} belong to $H_{t,1}$, and we design new trojans that belong to each of the classes $H_{t,d}$.

This demonstrates that the currently found/benchmarked trojans are very likely only the tip of the iceberg.

Using the synthesized netlist description of an IP core as input, we introduce a new tool called HaTCh which learns, in a precomputation or ``learning'' phase, how to add additional ``tagging'' circuitry to the IP core such that, as soon as an embedded \textit{XX-St-D-F} trojan is triggered, the tagging circuitry raises an exception to prevent the trojan from manifesting malicious behavior. We show that HaTCh parameterized for $H_{t,d}$ has zero false negatives among trojans in $H_{t,d}$, and that, depending on the duration of the precomputation (learning phase), the false positive rate can be made to approach zero. For a sample among the Trusthub~\cite{trust_hub} benchmarks in \textit{XX-St-D-F}, we show a false positive rate of $10^{-5}$ (for $d=1$).

The learning phase of HaTCh has complexity $O(d2^d \cdot |Core| \ln |Core|)$, where $|Core|$ is the number of wires in the IP core. We show how this complexity can potentially be reduced by considering ``interesting'' subsets of $H_{t,d}$, one of which leads to $O(|Core|^3)$ complexity independent of $d$. The tagging circuitry for our sample of benchmarks has an area overhead from $0.02\%$ to $7.63\%$ for non-pipelined tagging circuitry, and from $0.02\%$ to $15.25\%$ for pipelined tagging circuitry, which is useful for designs with strict timing constraints.