turning VIL-ROM schemes into FIL-ROM ones. The benefits we offer over
indifferentiability, the current leading method for this task, are the ability
to handle multi-stage games and greater efficiency. The paradigm consists of
(1) Showing that a VIL UCE function can instantiate the VIL RO in the scheme,
and (2) Constructing the VIL UCE function given a FIL random oracle. The main
technical contributions of the paper are domain extension transforms that
implement the second step. Leveraging known results for the first step, we
automatically obtain FIL-ROM constructions for several primitives whose
security notions are underlain by multi-stage games. Our first domain extender
exploits indifferentiability, showing that although the latter does not work
directly for multi-stage games, it can be used indirectly, through UCE, as a
tool to this end. Our second domain extender targets performance. It is
parallelizable and shown through implementation to provide significant
performance gains over indifferentiable domain extenders.
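To make the general shape of step (2) concrete, the following is a minimal, illustrative sketch of a FIL-to-VIL domain extender via Merkle-Damgård-style chaining. It is not the paper's construction: the `fil` stand-in (SHA-256 over a fixed 64-byte block), the 32-byte block size, and the use of the key as the initial chaining value are all assumptions made for the example.

```python
import hashlib

BLOCK = 32  # fixed block size in bytes (assumed for this sketch)

def fil(x: bytes) -> bytes:
    """Stand-in for a fixed-input-length (FIL) primitive: maps 2*BLOCK bytes to BLOCK bytes."""
    assert len(x) == 2 * BLOCK
    return hashlib.sha256(x).digest()

def vil_hash(key: bytes, msg: bytes) -> bytes:
    """Chain the FIL primitive over padded message blocks to get a VIL function.

    `key` (BLOCK bytes) plays the role of the hash-function key and seeds the
    chaining value. Length-strengthened padding prevents trivial collisions
    between messages of different lengths.
    """
    assert len(key) == BLOCK
    pad_len = (-len(msg) - 9) % BLOCK
    padded = msg + b"\x80" + b"\x00" * pad_len + len(msg).to_bytes(8, "big")
    state = key
    for i in range(0, len(padded), BLOCK):
        state = fil(state + padded[i:i + BLOCK])
    return state
```

The serial chaining shown here is the simplest choice; the paper's second, performance-oriented extender is parallelizable, which this sketch does not attempt to capture.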
Recent progress on ideal lattices has significantly improved efficiency and made it possible to implement practical lattice-based cryptography on constrained devices. However, to the best of our knowledge, no previous attempt has been made to implement lattice-based schemes on smart cards.
In this paper, we report the results of our implementation of several state-of-the-art lattice-based authentication protocols on smart cards and on a microcontroller widely used in smart cards. Our results show that only a few of the proposed lattice-based authentication protocols can be implemented within the limited resources of such constrained devices; however, the cutting-edge ones are efficient enough to be used practically on smart cards.
Moreover, we have implemented the fast Fourier transform (FFT) and discrete Gaussian sampling with several typical parameter sets, as well as versatile lattice-based public-key encryption schemes. These results highlight considerations that help in designing or optimizing lattice-based schemes for constrained devices.
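Discrete Gaussian sampling, one of the implemented building blocks mentioned above, can be sketched with simple rejection sampling. This is an illustrative textbook approach, not necessarily the method used in the paper; the parameters (tail cut `tau = 6.0`, sigma) are assumptions for the example, and real implementations on constrained devices typically use faster, constant-time methods.

```python
import math
import random

def sample_discrete_gaussian(sigma: float, tau: float = 6.0, rng=random) -> int:
    """Rejection-sample from the discrete Gaussian D_{Z,sigma}.

    Proposes an integer uniformly in [-tau*sigma, tau*sigma] and accepts it
    with probability exp(-x^2 / (2*sigma^2)); the tail cut tau bounds the
    support, introducing only a negligible statistical error for tau >= 6.
    """
    bound = int(math.ceil(tau * sigma))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x
```

Rejection sampling is easy to get right but costly (several random draws per accepted sample) and data-dependent in timing, which is one reason sampler choice matters so much on smart cards.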
defined by Bitansky, Canetti and Halevi (TCC 2012). Our contributions
can be summarized as follows:
For the purpose of secure message transmission, any encryption
protocol with message space $\cM$ and secret key space $\cSK$
tolerating poly-logarithmic leakage on the secret state of the
receiver must satisfy $|\cSK| \ge (1-\epsilon)|\cM|$, for every $0
Interested candidates are invited to submit their application by email to lacs.application AT gmail.com. The application material should contain a cover letter explaining the candidate's expertise, motivation and research interests, and a CV (including a photo, information about obtained degrees, overall GPA in the B.Sc. and M.Sc., and a transcript of grades for relevant courses). We expect proven expertise in your area of research, demonstrated by publications at top conferences, successful participation in competitions and challenges, etc.
obtained through physical attacks such as cold boot and side channel
attacks. Many studies have focused on recovering correct secret keys
from noisy binary data. Obtaining noisy binary keys typically involves
first observing the analog data and then obtaining the binary data
through a quantization process that discards much of the information
pertaining to the correct keys. In this paper, we propose two algorithms
for recovering correct secret keys from noisy analog data, which are
generalized variants of Paterson et al.'s algorithm. Our algorithms
fully exploit the analog information. More precisely, consider observed
data which follows the Gaussian distribution
with mean $(-1)^b$ and variance $\sigma^2$ for a secret key bit $b$.
We propose a polynomial time algorithm based on
the maximum likelihood approach and show that it can recover secret keys
if $\sigma < 1.767$. The first algorithm works only if the noise
distribution is explicitly known. The second algorithm does not need to
know the explicit form of the noise distribution. We implement the first
algorithm and verify its effectiveness.
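For the Gaussian leakage model above, the per-bit maximum-likelihood decision reduces to a sign test: given an observation $x \sim N((-1)^b, \sigma^2)$, the likelihood of $b = 0$ exceeds that of $b = 1$ exactly when $x > 0$. The sketch below illustrates only this per-bit rule on simulated leakage; it is not the paper's algorithms, which additionally exploit redundancy in the key material to tolerate much larger $\sigma$ (up to $1.767$). The key length and noise level are assumptions for the example.

```python
import random

def ml_decode(observations):
    """Per-bit maximum-likelihood decision for x ~ N((-1)^b, sigma^2).

    Since the two conditional densities are symmetric around 0, the
    likelihood-ratio test reduces to the sign of the observation:
    guess b = 0 iff x > 0.
    """
    return [0 if x > 0 else 1 for x in observations]

# Simulate noisy analog leakage of a random 64-bit key and recover it.
rng = random.Random(1)
sigma = 0.1  # small noise: per-bit error probability is negligible here
key = [rng.randint(0, 1) for _ in range(64)]
leak = [(-1) ** b + rng.gauss(0, sigma) for b in key]
recovered = ml_decode(leak)
```

At this noise level the sign test alone recovers every bit; the point of the paper's algorithms is that, by using the key's redundancy, recovery remains possible well past the regime where per-bit decisions start to fail.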