Tree-Structured Composition of Homomorphic Encryption: How to Weaken Underlying Assumptions, by Koji Nuida and Goichiro Hanaoka and Takahiro Matsuda
Cryptographic primitives based on infinite families of progressively weaker assumptions have been proposed by Hofheinz-Kiltz and by Shacham (the n-Linear assumptions) and by Escala et al. (the Matrix Diffie-Hellman assumptions). All of these assumptions are extensions of the decisional Diffie-Hellman (DDH) assumption. In contrast, in this paper we construct (additive) homomorphic encryption (HE) schemes based on a new infinite family of assumptions extending the Decisional Composite Residuosity (DCR) assumption. This is the first result on a primitive based on an infinite family of progressively weaker assumptions that does not originate from the DDH assumption. Our assumptions are indexed by rooted trees and provide a completely different structure compared to the previous extensions of the DDH assumption.
Our construction of an HE scheme is generic; based on a tree structure, we recursively combine copies of building-block HE schemes associated with each leaf of the tree (e.g., the Paillier cryptosystem, for our DCR-based result mentioned above). Our construction for depth-one trees utilizes the "share-then-encrypt" multiple encryption paradigm, modified appropriately to ensure security of the resulting HE schemes. We prove several separations between the CPA security of our HE schemes based on different trees; for example, the existence of an adversary capable of breaking all schemes based on depth-one trees does not imply an adversary against our scheme based on a depth-two tree (within a computational model analogous to the generic group model). Moreover, based on our results, we give an example which reveals a kind of "non-monotonicity" in the security of generic constructions of cryptographic schemes with respect to their building-block primitives: if the building-block primitives for a scheme are replaced with others secure under stronger assumptions, the resulting scheme may become secure under a weaker assumption than the original.
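The "share-then-encrypt" paradigm for a depth-one tree can be sketched in a few lines (a toy illustration under our own simplifying assumptions, not the paper's construction): the plaintext is additively shared, each share is encrypted under an independent additively homomorphic scheme (here a textbook Paillier variant with g = n + 1 and artificially tiny primes), and homomorphic addition is performed share-wise.

```python
import math, random

# Toy Paillier (g = n + 1 variant) with tiny primes -- illustrative only.
def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)            # valid because g = n + 1
    return (n,), (n, lam, mu)

def enc(pk, m):
    (n,) = pk
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (1 + m * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c):
    n, lam, mu = sk
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n

# "Share-then-encrypt": additively share m, encrypt the shares under
# two independent Paillier instances.
pk1, sk1 = keygen(5, 7)     # n = 35
pk2, sk2 = keygen(11, 13)   # n = 143
N = 35                      # shares live modulo the smaller plaintext space

def enc2(m):
    s = random.randrange(N)
    return enc(pk1, s), enc(pk2, (m - s) % N)

def add2(c, d):             # homomorphic addition, share-wise
    return c[0] * d[0] % (35 * 35), c[1] * d[1] % (143 * 143)

def dec2(c):
    return (dec(sk1, c[0]) + dec(sk2, c[1])) % N
```

With these toy parameters, `dec2(add2(enc2(10), enc2(20)))` recovers 30: ciphertext multiplication adds the underlying shares in each instance, and recombining the decrypted shares modulo N adds the plaintexts. (The paper additionally modifies the paradigm to ensure security of the composed scheme, which this sketch does not capture.)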
PhD students and Postdoctoral Fellowships in Post-Quantum Cryptography, University of Waterloo
The Institute for Quantum Computing and the Centre for Applied Cryptographic Research at the University of Waterloo seek qualified applicants for postdoctoral fellowships and graduate student positions in post-quantum cryptography, in particular in public-key cryptography based on computational assumptions believed to be secure against quantum computers (e.g., systems based on lattices, error-correcting codes, multivariate functions, and elliptic curve isogenies, as well as signature schemes based on hash functions).
Projects may include studying new attacks (classical or quantum) on proposed systems, improved implementation methods for such systems, and reductions or equivalences between candidate post-quantum systems.
Successful applicants will join a broad team of leading researchers in quantum computing and applied cryptography. They will also be able to take advantage of the CryptoWorks21 supplementary training program, which develops the technical and professional skills and knowledge needed to create cryptographic solutions that will be safe in a world with quantum computing technologies.
On a new fast public key cryptosystem, by Samir Bouftass.
This paper presents a new fast public key cryptosystem, namely: a key exchange algorithm, a public key encryption algorithm, and a digital signature algorithm, based on the difficulty of inverting the following function:
$F(X) = ((A \times X) \bmod 2^r) \,\mathrm{div}\, 2^s$,
where $\bmod$ denotes the modulo operation, $\mathrm{div}$ denotes integer division, and $A$, $r$, and $s$ are known natural numbers with $r > s$. It is also proven in this paper that this problem is equivalent to the SAT problem, which is NP-complete.
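Under the natural reading of Mod and Div as bit operations (our sketch, not the author's code), F reduces to a mask and a shift:

```python
def F(X, A, r, s):
    """F(X) = ((A * X) mod 2**r) div 2**s, with r > s."""
    # mod 2**r keeps the low r bits; div 2**s then drops the low s bits,
    # so only bits s .. r-1 of the product A*X survive.
    return ((A * X) % (1 << r)) >> s
```

For example, with A = 7, X = 5, r = 8, s = 2: the product 35 is already below 2^8, and 35 >> 2 = 8. Inverting F means recovering X from only a window of the middle bits of A·X, which is what the paper argues is hard.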
The SIMON and SPECK Block Ciphers on AVR 8-bit Microcontrollers, by Ray Beaulieu and Douglas Shors and Jason Smith and Stefan Treatman-Clark and Bryan Weeks and Louis Wingers
The last several years have witnessed a surge of activity in
lightweight cryptographic design. Many lightweight block ciphers have
been proposed, targeted mostly at hardware applications. Typically, software performance has not been a priority, and consequently software
performance for many of these algorithms is unexceptional. SIMON and
SPECK are lightweight block cipher families developed by the U.S. National Security Agency for high performance in constrained hardware and software environments. In this paper, we discuss software performance and demonstrate how to achieve high performance implementations of SIMON and SPECK on the AVR family of 8-bit microcontrollers. Both ciphers compare favorably to other lightweight block ciphers on this platform. Indeed, SPECK seems to have better overall performance than any existing block cipher --- lightweight or not.
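For reference, the SPECK64/128 variant (27 rounds, rotation amounts 8 and 3, four key words) can be sketched compactly in Python following the public specification; this is an illustrative reference implementation, not the optimized AVR assembly the paper discusses.

```python
MASK32 = 0xFFFFFFFF

def ror(x, r):  # rotate a 32-bit word right by r
    return ((x >> r) | (x << (32 - r))) & MASK32

def rol(x, r):  # rotate a 32-bit word left by r
    return ((x << r) | (x >> (32 - r))) & MASK32

def speck64_128_key_schedule(key):
    """key = (k0, l0, l1, l2), four 32-bit words; returns 27 round keys."""
    k, l = [key[0]], list(key[1:])
    for i in range(26):
        # The key schedule reuses the round function with round constant i.
        l.append(((k[i] + ror(l[i], 8)) & MASK32) ^ i)
        k.append(rol(k[i], 3) ^ l[i + 3])
    return k

def speck64_128_encrypt(block, ks):
    x, y = block
    for rk in ks:
        x = ((ror(x, 8) + y) & MASK32) ^ rk
        y = rol(y, 3) ^ x
    return x, y

def speck64_128_decrypt(block, ks):
    x, y = block
    for rk in reversed(ks):
        y = ror(x ^ y, 3)
        x = rol(((x ^ rk) - y) & MASK32, 8)
    return x, y
```

The round function uses only modular addition, rotation, and XOR on machine words, which is exactly why the cipher maps so well onto small microcontrollers: on 8-bit AVR the rotations decompose into byte moves and single-bit shifts.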
Simplification/complication of the basis of prime Boolean ideal, by Alexander Rostovtsev and Anna Shustrova
A prime Boolean ideal has a basis of the form (x1 + e1, ..., xn + en) consisting of linear binomials; its variety is the single point (e1, ..., en). Complicating the basis means replacing the linear binomials with non-linear polynomials in such a way that the variety of the ideal stays fixed. Simplifying the basis means recovering a basis of linear binomials from the complicated one while keeping its variety.
Since any ideal is a module over the ring of Boolean polynomials, a change of basis is uniquely determined by an invertible matrix over the ring.
Algorithms are proposed for invertibly simplifying and complicating the basis of a Boolean ideal while preserving the size of the basis. The simplification algorithm optimizes the choice of polynomial pairs during the Groebner basis computation and eliminates variables without using resultants.
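A toy example of such a variety-preserving change of basis (our illustration, not from the paper): over the Boolean ring (where x^2 = x and 2x = 0), multiplying the basis (x1 + 1, x2) by the self-inverse matrix [[1, x1], [0, 1]] yields the complicated basis (x1 + 1 + x1*x2, x2), which has the same variety {(1, 0)}. A brute-force check:

```python
from itertools import product

# Boolean polynomials as functions over GF(2): + is XOR (^), * is AND (&).
f1 = lambda x1, x2: x1 ^ 1                      # x1 + e1 with e1 = 1
f2 = lambda x1, x2: x2                          # x2 + e2 with e2 = 0

# Complicate via the invertible (self-inverse) matrix [[1, x1], [0, 1]]:
g1 = lambda x1, x2: f1(x1, x2) ^ (x1 & f2(x1, x2))   # x1 + 1 + x1*x2
g2 = lambda x1, x2: f2(x1, x2)

def variety(p, q):
    """All points of {0,1}^2 where both polynomials vanish."""
    return {v for v in product((0, 1), (0, 1)) if p(*v) == 0 and q(*v) == 0}
```

Here `variety(f1, f2) == variety(g1, g2) == {(1, 0)}`: the matrix is its own inverse in characteristic 2, so the two bases generate the same ideal.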
Boosting Higher-Order Correlation Attacks by Dimensionality Reduction, by Nicolas Bruneau and Jean-Luc Danger and Sylvain Guilley and Annelie Heuser and Yannick Teglia
Multivariate side-channel attacks make it possible to break higher-order masking protections by combining several leakage samples.
But how can one optimally extract all the information contained in all possible $d$-tuples of points?
In this article, we introduce preprocessing tools that answer this question.
We first show that maximizing the higher-order CPA coefficient is equivalent to finding the maximum of the covariance.
We apply this equivalence to the problem of reducing the dimensionality of a trace by taking a linear combination of its samples.
Then we establish the link between this problem and Principal Component Analysis. In a second step, we present the optimal solution to the problem of maximizing the covariance.
We also theoretically and empirically compare these methods.
We finally apply them to real measurements, publicly available from the DPA Contest v4, to evaluate how the proposed techniques improve second-order CPA (2O-CPA).
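The covariance-maximization step can be sketched as follows (an illustrative reconstruction on synthetic data, not the authors' code or the DPA Contest traces): among unit-norm weight vectors, the linear combination of trace samples maximizing the covariance with the leakage model is the normalized cross-covariance vector, by Cauchy-Schwarz.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 6
m = rng.standard_normal(n)                   # hypothetical leakage model values
T = np.outer(m, rng.standard_normal(d))      # traces: d samples per measurement
T += 0.5 * rng.standard_normal((n, d))       # plus measurement noise

# Cross-covariance between each sample and the model.
c = (T - T.mean(0)).T @ (m - m.mean()) / n
# Optimal unit-norm combination: the normalized cross-covariance vector.
alpha = c / np.linalg.norm(c)

# The combined "sample" T @ alpha covaries with m at least as strongly
# as any single original sample.
best = np.cov(T @ alpha, m)[0, 1]
singles = [abs(np.cov(T[:, j], m)[0, 1]) for j in range(d)]
```

The PCA connection is visible here: both procedures reduce dimensionality by projecting onto a single direction, but this one targets covariance with the model rather than the variance of the traces themselves.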