After formally defining constrained VRFs, we derive instantiations from the multilinear-maps-based constrained PRFs by Boneh and Waters, yielding a VRF with constrained keys for any set that can be decided by a polynomial-size circuit. Our VRFs have the same function values as the Boneh-Waters PRFs and are proved secure under the same hardness assumption, showing that verifiability comes at no cost. Constrained (functional) VRFs were stated as an open problem by Boyle et al.
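To make the object concrete, here is a minimal Python sketch of the constrained-VRF syntax described above (the class and method names are our own, not the paper's; only the interface and its intended guarantees are shown, not an instantiation):

```python
from abc import ABC, abstractmethod
from typing import Callable, Tuple

class ConstrainedVRF(ABC):
    """Syntax of a constrained VRF: a VRF whose secret key can be
    restricted to a set S, so that the constrained key evaluates
    (and proves) the function only on inputs x in S."""

    @abstractmethod
    def keygen(self) -> Tuple[bytes, bytes]:
        """Return (sk, vk): secret evaluation key and public verification key."""

    @abstractmethod
    def evaluate(self, sk: bytes, x: bytes) -> Tuple[bytes, bytes]:
        """Return (y, pi): the unique function value at x and a proof."""

    @abstractmethod
    def verify(self, vk: bytes, x: bytes, y: bytes, pi: bytes) -> bool:
        """Publicly check that y is the function value at x, using proof pi."""

    @abstractmethod
    def constrain(self, sk: bytes, S: Callable[[bytes], bool]) -> bytes:
        """Return sk_S, a key that evaluates and proves only on x with S(x) True.
        In the paper, S can be any set decided by a polynomial-size circuit."""
```

Correctness requires that a constrained key for $S$ evaluates and proves exactly the values of the unconstrained key on every $x \in S$.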
We investigate models and constructions that allow leakage-resilient signatures secure against existential forgeries where the signature is much shorter than the leakage bound.
Current models of leakage-resilient signatures against existential forgeries demand that the adversary cannot produce a new valid message/signature pair $(m, \sigma)$ even after receiving some $\lambda$ bits of leakage on the signing key. If $\vert \sigma \vert \le \lambda$, then the adversary can just choose to leak a valid signature $\sigma$, and hence signatures must be larger than the allowed leakage, which is impractical as the goal often is to have large signing keys to allow a lot of leakage.
We propose a new notion of leakage-resilient signatures against existential forgeries where we demand that the adversary cannot produce $n = \lfloor \lambda / \vert \sigma \vert \rfloor + 1$ distinct valid message/signature pairs $(m_1, \sigma_1), \ldots, (m_n, \sigma_n)$ after receiving $\lambda$ bits of leakage. If $\lambda = 0$, this is the usual notion of existential unforgeability. If $1
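Spelled out in code, the relaxed winning condition is simple; the sketch below (Python, with names of our own choosing) computes the forgery threshold and checks it:

```python
def forgery_threshold(leak_bits: int, sig_len_bits: int) -> int:
    """n = floor(lambda / |sigma|) + 1: how many distinct valid
    message/signature pairs the adversary must produce to win."""
    return leak_bits // sig_len_bits + 1

def adversary_wins(pairs, leak_bits, sig_len_bits, verify) -> bool:
    """Winning condition of the relaxed existential-forgery game:
    at least n *distinct* valid pairs after lambda bits of leakage."""
    valid = {(m, s) for (m, s) in pairs if verify(m, s)}
    return len(valid) >= forgery_threshold(leak_bits, sig_len_bits)
```

With $\lambda = 0$ the threshold is one pair, recovering standard existential unforgeability; and since each leaked signature costs $\vert \sigma \vert$ of the $\lambda$ leakage bits, leaking signatures alone yields at most $\lfloor \lambda / \vert \sigma \vert \rfloor < n$ pairs and can never reach the threshold.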
However, PUFs have been shown to be vulnerable to model-building attacks if the attacker has access to challenge and response pairs. In these model-building attacks, machine learning is used to determine the internal parameters of the PUF and build an accurate software model. Nevertheless, PUFs are still a promising building block, and several protocols and designs have been proposed that are believed to be resistant against machine learning attacks. In this paper we take a closer look at two such protocols, one based on reverse fuzzy extractors and one based on pattern matching [15,17]. We show that it is possible to attack these protocols using machine learning despite the fact that an attacker does not have access to direct challenge and response pairs. The introduced attacks demonstrate that even highly obfuscated responses or helper data can be used to attack PUF protocols.
Hence, our work shows that even protocols in which it would be computationally infeasible to compute enough challenge and response pairs for a direct machine learning attack can be attacked using machine learning.
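For background, the baseline model-building attack (on raw challenge/response pairs, the setting the above protocols try to rule out) fits the classic additive delay model of an arbiter PUF with off-the-shelf machine learning. The sketch below is our own illustration, not the paper's attack code; it assumes numpy and scikit-learn, and all parameters are illustrative:

```python
# Model-building attack on a *simulated* arbiter PUF: learn the secret
# delay parameters from challenge/response pairs via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K, N_TRAIN, N_TEST = 64, 5000, 1000       # stages, CRPs for training/testing

def features(challenges: np.ndarray) -> np.ndarray:
    """Map 0/1 challenges to the +/-1 parity features of the additive
    delay model: Phi_i = prod_{j >= i} (1 - 2*c_j), plus a bias term."""
    signs = 1 - 2 * challenges                       # 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]  # suffix products
    return np.hstack([phi, np.ones((len(challenges), 1))])

w_true = rng.normal(size=K + 1)                      # the PUF's secret delays
C = rng.integers(0, 2, size=(N_TRAIN + N_TEST, K))   # random challenges
r = (features(C) @ w_true > 0).astype(int)           # observed responses

model = LogisticRegression(max_iter=1000)
model.fit(features(C[:N_TRAIN]), r[:N_TRAIN])
acc = model.score(features(C[N_TRAIN:]), r[N_TRAIN:])
print(f"model accuracy on unseen challenges: {acc:.3f}")  # typically near 1.0
```

The paper's contribution is showing that variants of this approach still succeed when the protocol exposes only obfuscated responses or helper data rather than raw pairs.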
We show how to build puncturable PRFs with adaptive security proofs in the standard model that involve only polynomial loss to the underlying assumptions. Prior work had either super-polynomial loss or applied the random oracle heuristic. Our construction uses indistinguishability obfuscation and DDH-hard algebraic groups of composite order.
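For context, a puncturable PRF allows deriving a key that evaluates the PRF everywhere except at one punctured point, while revealing nothing about the value there. Below is a sketch of the folklore GGM-style construction (Python, with SHA-256 standing in, insecurely but simply, for a length-doubling PRG; names are ours). This toy construction is only selectively secure; the paper's contribution is precisely achieving adaptive security in the standard model with polynomial loss:

```python
import hashlib

def prg(seed: bytes, bit: int) -> bytes:
    """One half of a length-doubling PRG G(s) = G_0(s) || G_1(s)."""
    return hashlib.sha256(bytes([bit]) + seed).digest()

def evaluate(key: bytes, x: str) -> bytes:
    """GGM evaluation: walk the binary tree along the bits of x."""
    s = key
    for b in x:
        s = prg(s, int(b))
    return s

def puncture(key: bytes, x_star: str):
    """Punctured key: the sibling seed at each level of the path to x_star.
    It determines F(key, x) for every x != x_star, but nothing about
    F(key, x_star)."""
    siblings, s = [], key
    for b in x_star:
        siblings.append(prg(s, 1 - int(b)))   # seed just off the path
        s = prg(s, int(b))
    return x_star, siblings

def eval_punctured(pkey, x: str) -> bytes:
    """Evaluate with a punctured key at any x != x_star."""
    x_star, siblings = pkey
    if x == x_star:
        raise ValueError("cannot evaluate at the punctured point")
    i = next(j for j in range(len(x)) if x[j] != x_star[j])
    s = siblings[i]                            # root of the subtree holding x
    for b in x[i + 1:]:
        s = prg(s, int(b))
    return s

key = b"\x00" * 32
assert eval_punctured(puncture(key, "0110"), "0111") == evaluate(key, "0111")
```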