On a new fast public key cryptosystem, by Samir Bouftass.
This paper presents a new fast public key cryptosystem, namely a key exchange algorithm, a public key encryption algorithm, and a digital signature algorithm, based on the difficulty of inverting the following function:
$F(X) = ((A \times X) \bmod 2^r) \,\mathrm{div}\, 2^s$, where $\bmod$ denotes the modulo operation, $\mathrm{div}$ denotes integer division, and $A$, $r$, and $s$ are known natural numbers with $r > s$. The paper also proves that this problem is equivalent to the SAT problem, which is NP-complete.
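As a quick illustration of the underlying function, the sketch below evaluates $F$; the values of $A$, $r$, $s$ and the input are arbitrary demo choices, not parameters from the paper:

```python
# Illustrative evaluation of F(X) = ((A*X) mod 2^r) div 2^s.
# A, r, s and the input below are arbitrary demo values (with r > s).
A, r, s = 0xB7E151629AED2A6B, 64, 16

def F(x):
    # keep bits s..r-1 of A*x: reduce mod 2^r, then drop the s low bits
    return ((A * x) % (1 << r)) >> s

print(F(12345))  # some value in [0, 2^(r-s))
```

The output is always an $(r-s)$-bit number: the low $s$ bits and the bits at position $r$ and above of $A \times X$ are discarded, which is what makes inversion hard.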
The SIMON and SPECK Block Ciphers on AVR 8-bit Microcontrollers, by Ray Beaulieu and Douglas Shors and Jason Smith and Stefan Treatman-Clark and Bryan Weeks and Louis Wingers
The last several years have witnessed a surge of activity in
lightweight cryptographic design. Many lightweight block ciphers have
been proposed, targeted mostly at hardware applications. Typically software performance has not been a priority, and consequently software
performance for many of these algorithms is unexceptional. SIMON and
SPECK are lightweight block cipher families developed by the U.S. National Security Agency for high performance in constrained hardware and software environments. In this paper, we discuss software performance and demonstrate how to achieve high performance implementations of SIMON and SPECK on the AVR family of 8-bit microcontrollers. Both ciphers compare favorably to other lightweight block ciphers on this platform. Indeed, SPECK seems to have better overall performance than any existing block cipher --- lightweight or not.
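A rough feel for why SPECK is so software-friendly (one rotation-add-xor pair per round) is given by this compact, unofficial Python sketch of Speck64/128; the key and plaintext below are arbitrary, and the code is checked only for the encrypt/decrypt round trip, not against published test vectors:

```python
# Unofficial Speck64/128 sketch: 32-bit words, 4 key words, 27 rounds.
MASK = 0xFFFFFFFF

def ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def expand(key):  # key = (k0, l0, l1, l2)
    k, l = [key[0]], list(key[1:])
    for i in range(26):
        l.append(((ror(l[i], 8) + k[i]) & MASK) ^ i)
        k.append(rol(k[i], 3) ^ l[i + 3])
    return k

def encrypt(x, y, ks):
    for k in ks:                       # round: rotate, add, xor key, mix
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
    return x, y

def decrypt(x, y, ks):
    for k in reversed(ks):             # exact inverse of the round above
        y = ror(y ^ x, 3)
        x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

ks = expand((0x03020100, 0x0B0A0908, 0x13121110, 0x1B1A1918))
pt = (0x3B726574, 0x7475432D)
ct = encrypt(*pt, ks)
print(decrypt(*ct, ks) == pt)
```

On an 8-bit AVR the same structure maps to byte-wise rotations, adds with carry, and xors, which is why the cipher fits in very little flash and RAM.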
Simplification/complication of the basis of prime Boolean ideal, by Alexander Rostovtsev and Anna Shustrova
A prime Boolean ideal has a basis of the form (x1 + e1, ..., xn + en) consisting of linear binomials. Its variety is the single point (e1, ..., en). Complicating the basis means replacing the simple linear binomials with non-linear polynomials in such a way that the variety of the ideal stays fixed. Simplifying the basis means recovering a basis consisting of linear binomials from the complicated one while preserving its variety.
Since any ideal is a module over the ring of Boolean polynomials, a change of basis is uniquely determined by an invertible matrix over the ring.
Algorithms for invertibly simplifying and complicating the basis of a Boolean ideal that preserve the size of the basis are proposed. The simplification algorithm optimizes the choice of pairs of polynomials during the Gröbner basis computation and eliminates variables without using resultants.
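A brute-force toy over three variables illustrates the setup: the variety of the linear basis is a single point, and complicating the basis via an invertible change over the ring (here, adding $x_2 \cdot f_2$ to $f_1$) leaves the variety unchanged. All names and values below are illustrative:

```python
from itertools import product

# Basis f_i = x_i + e_i over GF(2); its variety should be the single
# point e.  The point e is an arbitrary illustration.
e = (1, 0, 1)
f = [lambda x, i=i: (x[i] + e[i]) % 2 for i in range(3)]

variety = [p for p in product((0, 1), repeat=3)
           if all(fi(p) == 0 for fi in f)]
print(variety)  # → [(1, 0, 1)]

# "Complication": g1 = f1 + x2*f2 comes from an invertible change of
# basis over the ring (unitriangular matrix [[1, x2, 0], [0, 1, 0],
# [0, 0, 1]]), so the variety must stay fixed.
g1 = lambda x: (f[0](x) + x[1] * f[1](x)) % 2
variety2 = [p for p in product((0, 1), repeat=3)
            if g1(p) == 0 and f[1](p) == 0 and f[2](p) == 0]
print(variety2 == variety)  # → True
```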
Boosting Higher-Order Correlation Attacks by Dimensionality Reduction, by Nicolas Bruneau and Jean-Luc Danger and Sylvain Guilley and Annelie Heuser and Yannick Teglia
Multivariate side-channel attacks can break higher-order masking protections by combining several leakage samples.
But how to optimally extract all the information contained in all possible $d$-tuples of points?
In this article, we introduce preprocessing tools that answer this question.
We first show that maximizing the higher-order CPA coefficient is equivalent to finding the maximum of the covariance.
We apply this equivalence to the problem of reducing trace dimensionality via a linear combination of its samples.
Then we establish the link between this problem and Principal Component Analysis (PCA). In a second step, we present the optimal solution to the problem of maximizing the covariance.
We also theoretically and empirically compare these methods.
We finally apply them to real measurements, publicly available from the DPA Contest v4, to evaluate how the proposed techniques improve second-order CPA (2O-CPA).
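The covariance-maximizing linear combination can be sketched on synthetic data. The example below is first-order (not higher-order) for brevity, and all dimensions and leakage positions are invented for illustration: the optimal projection direction is simply the empirical covariance between the trace samples and the model values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 2000, 50                         # traces, time samples per trace
secret = rng.integers(0, 2, N).astype(float)   # toy sensitive value
X = rng.normal(size=(N, D))             # measurement noise
X[:, 17] += 1.5 * secret                # strong leakage at sample 17
X[:, 33] += 0.5 * secret                # weaker leakage at sample 33

# Covariance-maximizing linear combination of samples: project each
# centered trace onto the empirical covariance with the model.
Xc = X - X.mean(axis=0)
yc = secret - secret.mean()
w = Xc.T @ yc / N                       # direction maximizing cov(X @ w, y)
z = Xc @ w                              # 1-D "reduced" trace

def corr(u, v):
    return abs(np.corrcoef(u, v)[0, 1])

# The combined trace correlates better than the weaker sample alone.
print(corr(z, secret) > corr(Xc[:, 33], secret))
```

The design point is that $\mathrm{cov}(Xw, y) = w \cdot \mathrm{cov}(X, y)$, so under a norm constraint the covariance is maximized by taking $w$ proportional to the covariance vector itself, which pools information from every leaky sample.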
Trapdoor Computational Fuzzy Extractors, by Charles Herder and Ling Ren and Marten van Dijk and Meng-Day (Mandel) Yu and Srinivas Devadas
We describe a method of cryptographically secure key extraction from a noisy biometric source. The computational security of our method can be clearly argued through the hardness of Learning Parity with Noise (LPN).
We use a fuzzy commitment scheme, so the extracted key is by construction uniformly random. The biometric source is used as the noise term in the LPN problem. A key idea in our construction is to use additional 'confidence' information produced by the source for polynomial-time key recovery even in high-noise settings, i.e., with $\Theta(m)$ errors, where $m$ is the number of biometric bits. The confidence information is never exposed and is used as a noise-avoiding trapdoor to exponentially reduce key recovery complexity. Previous computational fuzzy extractors were unable to correct $\Theta(m)$ errors or would run in time exponential in $m$.
A second key result is that we relax the requirement on the noise in the LPN problem, which relaxes the requirement on the biometric source. Through a reduction argument, we show that in the LPN problem, correlation in the bits generated by the biometric source can be viewed as a bias on the bits, which affects the security parameter, but not the security of the overall construction.
Using a silicon Physical Unclonable Function (PUF) as a concrete example, we show how keys can be extracted securely and efficiently even under extreme environmental variation.
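A toy simulation can illustrate the trapdoor idea: if the decoder knows which biometric bits are reliable (the confidence information), key recovery collapses to solving a noise-free linear system over GF(2). Everything below (dimensions, noise rate, and the direct modeling of confidence as the noise-free index set) is an illustrative assumption, not the paper's construction:

```python
import random

random.seed(1)
n, m = 16, 64                  # secret length, number of biometric bits

# Random public matrix A (m x n) over GF(2) and secret key s.
A = [[random.randrange(2) for _ in range(n)] for _ in range(m)]
s = [random.randrange(2) for _ in range(n)]

# Biometric noise: ~25% of bits flip.  The "confidence" trapdoor tells
# the decoder which equations are noise-free (simulated here directly);
# in the real scheme it comes from the physical source and is never exposed.
e = [1 if random.random() < 0.25 else 0 for _ in range(m)]
reliable = [i for i in range(m) if e[i] == 0]

# LPN samples: b = A*s + e over GF(2).
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % 2 for i in range(m)]

def solve_gf2(rows, rhs, n):
    """Gaussian elimination over GF(2); returns the solution or None."""
    M = [row[:] + [r] for row, r in zip(rows, rhs)]
    for c in range(n):
        p = next((i for i in range(c, len(M)) if M[i][c]), None)
        if p is None:
            return None        # rank deficient
        M[c], M[p] = M[p], M[c]
        for i in range(len(M)):
            if i != c and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[c])]
    return [M[i][n] for i in range(n)]

# With the trapdoor, recovery is plain linear algebra on reliable rows;
# without it, an attacker faces the full LPN problem.
recovered = solve_gf2([A[i] for i in reliable], [b[i] for i in reliable], n)
print(recovered == s)
```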
Fully Secure Self-Updatable Encryption in Prime Order Bilinear Groups, by Pratish Datta and Ratna Dutta and Sourav Mukhopadhyay
In CRYPTO 2012, Sahai et al. raised the concern that, in a cloud control system, revocation of past keys should also be accompanied by an update of previously generated ciphertexts in order to prevent unread ciphertexts from being read by revoked users. Self-updatable encryption (SUE), introduced by Lee et al. at ASIACRYPT 2013, is a newly developed cryptographic primitive that realizes ciphertext update as a built-in functionality and thus improves the efficiency of key revocation and time evolution in cloud management. In SUE, a user can decrypt a ciphertext associated with a specific time if and only if the user possesses a private key corresponding to either the same time as that of the ciphertext or some future time. Furthermore, a ciphertext attached to a certain time can be updated to a new one attached to a future time using only public information. The SUE schemes available in the literature are either (a) fully secure but developed in a composite order bilinear group setting under highly non-standard assumptions, or (b) designed in prime order bilinear groups but only selectively secure. This paper presents the first fully secure SUE scheme in prime order bilinear groups under standard assumptions, namely the Decisional Linear and Decisional Bilinear Diffie-Hellman assumptions. As pointed out by Freeman (EUROCRYPT 2010) and Lewko (EUROCRYPT 2012), the communication, storage, and computational efficiency of prime order bilinear groups is much higher than that of composite order bilinear groups at an equivalent security level. Consequently, our SUE scheme is significantly more cost-effective than the existing fully secure SUE schemes.
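The time hierarchy on keys (a key for time $t$ can open ciphertexts attached to time $t$ or earlier) can be mimicked with a simple hash chain, as in the toy below. Note that this toy deliberately does not capture the public ciphertext-update property, which is precisely what the bilinear-group construction provides; all parameters are illustrative:

```python
import hashlib, os

T = 10                                  # number of time periods
seed = os.urandom(32)

def H(x):
    return hashlib.sha256(x).digest()

# d[t] is the toy key for time t, with d[t-1] = H(d[t]): a holder of
# d[t] can derive d[t'] for every t' <= t, but not for any t' > t.
d = [None] * (T + 1)
d[T] = seed
for t in range(T - 1, -1, -1):
    d[t] = H(d[t + 1])

def derive(key_t, t, target):           # derive d[target], target <= t
    for _ in range(t - target):
        key_t = H(key_t)
    return key_t

print(derive(d[7], 7, 3) == d[3])       # a time-7 key opens time 3
```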
Garbled RAM From One-Way Functions, by Sanjam Garg and Steve Lu and Rafail Ostrovsky and Alessandra Scafuro
Yao's garbled circuit construction is fundamental in cryptography, and recent efficiency optimizations have brought it much closer to practice. However, these constructions work only for circuits, and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of ``garbled RAM'' as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits, such that the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program rather than its circuit size.
Known realizations of this primitive either rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014). In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time, based only on the minimal and necessary assumption that one-way functions exist. Our scheme allows garbling multiple programs executed on a persistent database, and has the additional feature that program garbling is decoupled from database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and only later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs themselves.
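For readers unfamiliar with the circuit-level starting point, here is a minimal toy version of garbling a single AND gate in Yao's style; the zero-tag trick for recognizing the valid row and all sizes are simplifications for illustration, not the construction in this paper:

```python
import hashlib, os, random

# One random 32-byte label per wire value (0 and 1) for wires a, b, out.
labels = {w: (os.urandom(32), os.urandom(32)) for w in ("a", "b", "out")}
TAG = bytes(16)          # zero-tag lets the evaluator spot the valid row

def pad(la, lb):
    # one-time pad derived from the pair of input labels
    return hashlib.sha512(la + lb).digest()[:48]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Garbled table: each row encrypts the correct output label (plus tag)
# under the pad for one combination of input labels.
table = [xor(pad(labels["a"][va], labels["b"][vb]),
             labels["out"][va & vb] + TAG)
         for va in (0, 1) for vb in (0, 1)]
random.shuffle(table)                   # hide which row is which

def evaluate(la, lb):
    p = pad(la, lb)
    for row in table:
        cand = xor(p, row)
        if cand[32:] == TAG:            # only the matching row decrypts
            return cand[:32]

ok = all(evaluate(labels["a"][va], labels["b"][vb]) == labels["out"][va & vb]
         for va in (0, 1) for vb in (0, 1))
print(ok)
```

Garbled RAM generalizes this idea so that the garbled object scales with the program's running time rather than with the size of an equivalent circuit.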