Increasing the complexity of evaluating a hash function, both for the honest users as well as for an
adversary, is a useful technique employed for example in
password-based cryptographic schemes to impede brute-force
attacks, and also in so-called proofs of work (used in
protocols like Bitcoin) to show that a certain amount of
computation was performed by a legitimate user. A natural
approach to adjust the complexity of a hash function is to
iterate it $c$~times, for some parameter
$c$, in the hope that any query to the scheme
requires $c$ evaluations of the underlying
hash function. However, results by Dodis et al. (Crypto
2012) imply that plain iteration falls short of achieving
this goal, and designing schemes which provably have such a
desirable property remained an open problem.
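For concreteness, here is a minimal sketch of the plain $c$-fold iteration discussed above, using SHA-256 as a stand-in for the underlying hash function (the paper reasons about an abstract random oracle, so the concrete function is purely illustrative):

    import hashlib

    def iterated_hash(x: bytes, c: int) -> bytes:
        # Plain c-fold iteration h(h(...h(x)...)) -- the naive amplifier.
        y = x
        for _ in range(c):
            y = hashlib.sha256(y).digest()
        return y

Intuitively, one shortcoming of this construction is that chains started from distinct inputs merge forever as soon as they collide, so an adversary evaluating many inputs can share work across them.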
First, this paper formalizes explicitly what it means for a given
scheme to amplify the query complexity of a hash function.
In the random oracle model, the goal of a secure
query-complexity amplifier (QCA) scheme is captured as
transforming, in the sense of indifferentiability, a random
oracle allowing $R$ queries (for the adversary)
into one provably allowing only $r < R$
queries. Turned around, this means that making
$r$ queries to the scheme requires at least
$R$ queries to the actual random oracle. Second,
a new scheme, called collision-free iteration, is proposed and
proven to achieve $c$-fold QCA for both the
honest parties and the adversary, for any fixed parameter $c$.
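The abstract does not spell out the collision-free iteration scheme itself; the sketch below shows one natural way to realize the idea, domain-separating every round with the original input and a round counter so that chains started from distinct inputs can never merge. This encoding is an assumption for illustration and may differ from the paper's actual construction:

    import hashlib

    def collision_free_iteration(x: bytes, c: int) -> bytes:
        # Hypothetical c-fold iteration with per-round domain separation.
        # Assumes fixed-length x (or a length-prefixed encoding) so that
        # each round's input is an injective encoding of (i, x, y).
        y = b""
        for i in range(1, c + 1):
            y = hashlib.sha256(i.to_bytes(4, "big") + x + y).digest()
        return y

Because every call to the underlying function now depends on the full original input, evaluating the scheme on $r$ distinct inputs costs $r \cdot c$ calls, which is the behavior a $c$-fold QCA is meant to enforce.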
Implementations of cryptosystems are vulnerable to physical attacks and thus need to be protected against them. Of course, malfunctioning protections are useless. Formal methods help to develop systems while assessing their conformity to a rigorous specification. The first goal of my thesis, and its innovative aspect, is to show that formal methods can be used to prove not only the principle of the countermeasures according to a model, but also their implementation, as it is at that level that the physical vulnerabilities are actually exploited. My second goal is the proof and the automation of the protection techniques themselves, because handwritten security code is error-prone.
Physical attacks can be classified into two distinct categories: passive attacks, where the attacker only reads information that leaks through side channels, and active attacks, where the attacker tampers with the system to make it reveal secrets through its ``normal'' output channel. I have therefore pursued my goals in both settings: on a countermeasure that diminishes side-channel leakage (such as power consumption or electromagnetic emanations), and on countermeasures against fault-injection attacks.
As a rigorous security property already exists for protections against side-channel leakage, my contributions concentrate on formal methods for the design and verification of protected implementations of algorithms.
I have developed a methodology to protect an implementation by generating an improved version of it with a zero side-channel signal-to-noise ratio: its leakage is made constant and, in particular, does not depend on the secret values.
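As a toy illustration of the constant-leakage principle (not the thesis's actual transformation, which operates on assembly code), a secret bit can be stored in dual-rail form so that its Hamming weight, a common leakage model, is the same whatever the bit's value:

    # Toy dual-rail encoding: bit b is stored as (b, 1 - b), so every
    # encoded bit has Hamming weight exactly 1 regardless of its value.
    # Illustration only: the actual methodology targets assembly code.

    def encode(b: int) -> tuple:
        return (b, 1 - b)

    def dual_rail_and(x: tuple, y: tuple) -> tuple:
        # Both rails are always computed, so the operations performed
        # (and, in this model, the leakage) do not depend on the secrets.
        return (x[0] & y[0], x[1] | y[1])

    assert dual_rail_and(encode(1), encode(1)) == encode(1)
    assert dual_rail_and(encode(1), encode(0)) == encode(0)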
For the sake of demonstration, I have also written a tool which automates the application of this methodology to insecure input code written in assembly language.
Independently, the tool is able to prove that this constant-leakage property holds for a given implementation, which can be use[...]
We consider several ``provably secure'' hash functions that compute simple sums in a well-chosen group $(G, \star)$. Security properties of such functions provably translate in a natural way to computational problems in $G$ that are simple to define and possibly also hard to solve. Given $k$ disjoint lists $L_i$ of group elements, the $k$-sum problem asks for $g_i \in L_i$ such that $g_1 \star g_2 \star \cdots \star g_k = 1_G$. Hardness of the problem in the respective groups follows from some ``standard'' assumptions used in public-key cryptology, such as hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the $k$-sum problem may even be harder than the above problems.
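To make the problem concrete, here is a brute-force instance of the $k$-sum problem in the illustrative group $(\{0,1\}^n, \oplus)$, i.e., $n$-bit strings under XOR with the all-zero string as $1_G$; its cost is the product of the list sizes, which is what the non-trivial algorithms mentioned below improve upon:

    from itertools import product

    def ksum_bruteforce(lists):
        # Exhaustive search for one element per list XOR-ing to zero.
        for choice in product(*lists):
            acc = 0
            for g in choice:
                acc ^= g
            if acc == 0:
                return choice
        return None

    # Tiny 4-list example over 8-bit values; 0x12 ^ 0x56 ^ 0x9a ^ 0xde == 0.
    print(ksum_bruteforce([[0x12, 0x34], [0x56, 0x78], [0x9a, 0xbc], [0xde, 0xf0]]))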
Two hash functions based on the group $k$-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of a security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included. The main claims on security are supported exclusively by considerations about available attacks. By introducing ``second-order'' bounds, i.e., bounds on the bounds on security, we expose the limits of such an approach to provable security.
A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two functions above have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the $k$-sum problem in some groups may be considered hard on its own and could possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness.
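The tree algorithm referred to here is presumably Wagner's generalized-birthday technique; the following is a hedged sketch of its $k = 4$ case over $n$-bit XOR, merging pairs of lists on the low $n/2$ bits first so that the final join works over a much smaller search space (all names and parameters are illustrative):

    from collections import defaultdict
    import random

    def merge(L1, L2, mask):
        # Hash join: all pairs (a, b) with (a ^ b) & mask == 0,
        # returned as (partial sum, preimages).
        index = defaultdict(list)
        for a in L1:
            index[a & mask].append(a)
        return [(a ^ b, (a, b)) for b in L2 for a in index[b & mask]]

    def four_sum_xor(L1, L2, L3, L4, n):
        low = (1 << (n // 2)) - 1
        A = merge(L1, L2, low)   # partial sums vanishing on the low half
        B = merge(L3, L4, low)
        index = defaultdict(list)
        for v, pre in A:
            index[v].append(pre)
        for v, pre in B:
            if index[v]:          # a ^ b == c ^ d, hence a^b^c^d == 0
                return index[v][0] + pre
        return None

    # Probabilistic demo: with 512 random 16-bit values per list, a
    # solution exists (and is found) with high probability.
    random.seed(0)
    lists = [[random.getrandbits(16) for _ in range(512)] for _ in range(4)]
    print(four_sum_xor(*lists, 16))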
We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative $k$-sums in contra[...]