International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web pages here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for updates are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2013-05-28
05:22 [Pub][ePrint]

Software attestation has become a popular and challenging research topic at many established security conferences with an expected strong impact in practice. It aims at verifying the software integrity of (typically) resource-constrained embedded devices. However, for practical reasons, software attestation cannot rely on stored cryptographic secrets or dedicated trusted hardware. Instead, it exploits side-channel information, such as the time that the underlying device needs for a specific computation. As traditional cryptographic solutions and arguments are not applicable, novel approaches for the design and analysis are necessary. This is certainly one of the main reasons why the security goals, properties and underlying assumptions of existing software attestation schemes have been only vaguely discussed so far, limiting the confidence in their security claims. Thus, putting software attestation on a solid ground and having a founded approach for designing secure software attestation schemes is still an important open problem.

We provide the first steps towards closing this gap. Our first contribution is a security framework that formally captures security goals, attacker models, and various system and design parameters. Moreover, we present a generic software attestation scheme that covers most existing schemes in the literature. Finally, we analyze its security within our framework, yielding sufficient conditions for provably secure software attestation schemes. We expect that such a consolidating work allows for a meaningful security analysis of existing schemes and supports the design of arguably secure software attestation schemes and will inspire new research in this area.
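The generic challenge-response structure underlying such schemes can be sketched in a few lines. This is a minimal illustration (all names hypothetical), using SHA-256 as a stand-in for the lightweight, timing-critical checksum routines that real attestation schemes employ; the verifier accepts only a correct checksum that also arrives within a time bound:

```python
import hashlib
import os
import time

def checksum(memory: bytes, nonce: bytes) -> bytes:
    # Stand-in checksum; real schemes use lightweight, timing-optimal
    # checksum routines rather than a cryptographic hash.
    return hashlib.sha256(nonce + memory).digest()

def attest(device_memory: bytes, reference_memory: bytes,
           time_bound: float = 1.0) -> bool:
    """Verifier side: send a fresh nonce, time the device's response,
    and accept only a correct checksum that arrives quickly enough."""
    nonce = os.urandom(16)                      # fresh challenge
    start = time.monotonic()
    response = checksum(device_memory, nonce)   # computed on the device
    elapsed = time.monotonic() - start
    expected = checksum(reference_memory, nonce)
    return response == expected and elapsed <= time_bound
```

The timing check is what substitutes for stored secrets: a compromised device that must hide malicious code is assumed to need measurably more time to produce a correct response.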

05:22 [Pub][ePrint]

We present several software and hardware implementations of QUAD, a recently introduced stream cipher designed to be provably secure and practical to implement. The software implementations target both a personal computer and an ARM microprocessor. The hardware implementations target field programmable gate arrays. The purpose of our work was to first find the baseline performance of QUAD implementations, then to optimize our implementations for throughput. Our software implementations perform comparably to prior work. Our hardware implementations are the first known implementations to use random coefficients, in agreement with QUAD's security argument, and achieve much higher throughput than prior implementations.
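For intuition, the keystream generation of a QUAD-style cipher can be sketched as follows. This toy version (tiny state, illustrative only, none of the real parameter sizes or security) iterates a system of 2n random quadratic polynomials over GF(2): half of the outputs become the keystream, the other half the next state:

```python
import random

def quad_keystream(state, nbits, seed=0):
    """Toy QUAD-style keystream generator over GF(2) (illustrative only)."""
    rng = random.Random(seed)
    n = len(state)
    # 2n random quadratic polynomials in n variables; each polynomial is a
    # list of monomials (i, j) with i <= j (i == j gives a linear term,
    # since x_i^2 = x_i over GF(2)) plus a constant term.
    polys = []
    for _ in range(2 * n):
        monos = [(i, j) for i in range(n) for j in range(i, n)
                 if rng.random() < 0.5]
        polys.append((monos, rng.randrange(2)))
    out = []
    x = list(state)
    while len(out) < nbits:
        vals = [(sum(x[i] & x[j] for i, j in monos) + c) & 1
                for monos, c in polys]
        out.extend(vals[n:])   # output half of the evaluations
        x = vals[:n]           # the other half becomes the next state
    return out[:nbits]
```

The "random coefficients" mentioned above correspond to the random monomial sets: QUAD's security argument requires the quadratic system to be random, which is what makes hardware implementations with random coefficients notable.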

05:22 [Pub][ePrint]

Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message or a completely unrelated value. Although such codes do not exist if the family of "tampering functions" \cF is completely unrestricted, they are known to exist for many broad tampering families \cF. One such natural family is the family of tampering functions in the so-called {\em split-state} model. Here the message m is encoded into two shares L and R, and the attacker is allowed to arbitrarily tamper with L and R {\em individually}. Split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model.

Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but were either (1) constructed in the random oracle model [DPW10], or (2) relied on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [LL12], or (3) could only encode 1-bit messages [DKO13]. As our main result, we build the first efficient, multi-bit, information-theoretically-secure non-malleable code in the split-state model.

The heart of our construction uses the following new property of the inner-product function over the vector space F_p^n (for any prime p and large enough dimension n): if L and R are uniformly random over F_p^n, and f,g: F_p^n \rightarrow F_p^n are two arbitrary functions on L and R, the joint distribution (<L,R>, <f(L),g(R)>) is "close" to the convex combination of "affine distributions" {(U, cU+d) | c,d \in F_p}, where U is uniformly random in F_p. In turn, the proof of this surprising property of the inner-product function critically relies on some results from additive combinatorics, including the so-called {\em Quasi-polynomial Freiman-Ruzsa Theorem} (which was recently established by Sanders [San12] as a step towards resolving the Polynomial Freiman-Ruzsa conjecture [Gre05]).
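The affine structure in this claim can be checked directly in the special case of linear tampering: if f and g merely scale their shares, the tampered inner product is an exact linear function of the original one, i.e. <aL, bR> = ab<L, R> mod p, matching the affine distribution (U, cU + d) with c = ab and d = 0. A small sketch (toy dimensions, hypothetical names):

```python
import random

def inner(L, R, p):
    # inner product over the vector space F_p^n
    return sum(l * r for l, r in zip(L, R)) % p

p, n = 101, 8
rng = random.Random(0)
L = [rng.randrange(p) for _ in range(n)]
R = [rng.randrange(p) for _ in range(n)]

# Linear tampering f(L) = a*L, g(R) = b*R acts affinely on the
# inner product: <aL, bR> = a*b*<L, R> (mod p).
a, b = 7, 13
fL = [(a * x) % p for x in L]
gR = [(b * x) % p for x in R]
assert inner(fL, gR, p) == (a * b * inner(L, R, p)) % p
```

The content of the theorem is that this affine behavior persists, up to statistical closeness, for *arbitrary* tampering functions f and g, which is what makes the inner product usable as a non-malleable encoding.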

05:22 [Pub][ePrint]

We revisit optical physical unclonable functions (PUFs), which were proposed by Pappu et al. in their seminal first publication on PUFs [40, 41]. The first part of the paper treats non-integrated optical PUFs. Their security against modeling attacks is analyzed, and we discuss new image transformations that maximize the PUF's output entropy while possessing similar error correction capacities as previous approaches [40, 41]. Furthermore, the influence of using more than one laser beam, varying laser diameters, and smaller scatterer sizes is systematically studied. Our findings enable the simple enhancement of an optical PUF's security without additional hardware costs. Next, we discuss the novel application of non-integrated optical PUFs as so-called "Certifiable PUFs". The latter are useful to achieve practical security in advanced PUF-protocols, as recently observed by Rührmair and van Dijk at Oakland 2013 [48]. Our technique is the first mechanism for Certifiable PUFs in the literature, answering an open problem posed in [48].

In the second part of the paper, we turn to integrated optical PUFs. We build the first prototype of an integrated optical PUF that functions without moving components and investigate its security. We show that these PUFs can surprisingly be attacked by machine learning techniques if the employed scattering structure is linear, and if the raw interference images of the PUF are available to the adversary. Our result enforces the use of non-linear scattering structures within integrated PUFs. The quest for suitable materials is identified as a central, but currently open research problem.

Our work makes intensive use of two prototypes of optical PUFs. The presented integratable optical PUF prototype is, to our knowledge, the first of its kind in the literature.
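The machine-learning vulnerability of linear scattering structures can be illustrated with a toy simulation (all parameters hypothetical): if the recorded image is a linear function y = Wx of the challenge x, ordinary least squares on enough challenge-response pairs recovers an equivalent model and predicts the responses to fresh challenges:

```python
import numpy as np

rng = np.random.default_rng(0)
n_challenge, n_pixels = 16, 32

# Unknown linear scattering structure of the simulated PUF
W = rng.normal(size=(n_pixels, n_challenge))

# Adversary collects challenge-response pairs (raw linear images)
X = rng.normal(size=(100, n_challenge))
Y = X @ W.T

# Least-squares model of the PUF learned from the observed pairs
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned model predicts the response to a fresh challenge
x_new = rng.normal(size=n_challenge)
assert np.allclose(x_new @ W_hat, W @ x_new)
```

This is exactly why the paper argues for non-linear scattering structures: the attack above collapses once y depends non-linearly on x, so the adversary can no longer fit the structure with a linear solver.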

05:22 [Pub][ePrint]

A draft call for the CAESAR authenticated-encryption competition adopts an interface that is not aligned with existing definitions in the literature. It is the purpose of this brief note to formalize what we believe to be the intended definitions.

05:22 [Pub][ePrint]

We prove a security theorem without collision-resistance for a class of 1-key hash-function-based MAC schemes that includes HMAC and Envelope MAC. The proof has some advantages over earlier proofs: it is in the uniform model, it uses a weaker related-key assumption, and it covers a broad class of MACs in a single theorem. However, we also explain why our theorem is of doubtful value in assessing the real-world security of these MAC schemes. In addition, we prove a theorem assuming collision-resistance. From these two theorems we conclude that from a provable security standpoint there is little reason to prefer HMAC to Envelope MAC or similar schemes.
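For reference, the two constructions compared above differ only in how the key is combined with the hash. A minimal sketch with SHA-256 (the HMAC form follows RFC 2104; the envelope padding details vary between variants, so the envelope form here is an illustrative simplification):

```python
import hashlib

BLOCK = 64  # SHA-256 block size in bytes

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    # HMAC: H((k ^ opad) || H((k ^ ipad) || m))
    if len(key) > BLOCK:
        key = hashlib.sha256(key).digest()
    key = key.ljust(BLOCK, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5c for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

def envelope_mac_sha256(key: bytes, msg: bytes) -> bytes:
    # Envelope ("sandwich") MAC: H(k || m || k); real variants pad the
    # key and message to block boundaries, omitted here for brevity.
    return hashlib.sha256(key + msg + key).digest()
```

Both are 1-key hash-function-based MACs in the sense of the theorem: the key enters the hash computation at both the beginning and the end, which is the structural property the related-key analysis relies on.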

05:22 [Pub][ePrint]

Let $N_1=p_1q_1$ and $N_2=p_2q_2$ be two different RSA moduli. Suppose that $p_1 \equiv p_2 \pmod{2^t}$ for some $t$, and that $q_1$ and $q_2$ are $\alpha$-bit primes. May and Ritzenhofen showed that $N_1$ and $N_2$ can be factored in quadratic time if

$t \geq 2\alpha+3$.

In this paper, we improve this lower bound on $t$: namely, we prove that $N_1$ and $N_2$ can be factored in quadratic time if

$t \geq 2\alpha+1$.

Further, our simulation results show that our bound is tight.
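The quadratic-time factoring behind both bounds reduces to a shortest-vector computation in a two-dimensional lattice, which Lagrange/Gauss reduction solves exactly. Since $N_2 q_1 \equiv N_1 q_2 \pmod{2^t}$ (both equal $p_1 q_1 q_2$ modulo $2^t$), the vector $(q_1, q_2)$ lies in the lattice spanned by $(1, N_2 N_1^{-1} \bmod 2^t)$ and $(0, 2^t)$, and for large enough $t$ it is the shortest vector. A toy sketch (small hypothetical parameters, using the rigorous May-Ritzenhofen bound $t = 2\alpha+3$ so that recovery is guaranteed):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime_in_class(start, step):
    # smallest prime >= start in the residue class start mod step
    n = start
    while not is_prime(n):
        n += step
    return n

def lagrange_reduce(u, v):
    # exact shortest-vector computation in a 2D lattice (Gauss reduction)
    n2 = lambda w: w[0] * w[0] + w[1] * w[1]
    if n2(u) > n2(v):
        u, v = v, u
    while True:
        dot = u[0] * v[0] + u[1] * v[1]
        m = (2 * dot + n2(u)) // (2 * n2(u))  # nearest integer to dot/|u|^2
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if n2(v) >= n2(u):
            return u, v
        u, v = v, u

# Toy instance: alpha-bit q's, p's sharing their t low-order bits,
# with t = 2*alpha + 3 (the paper shows 2*alpha + 1 already suffices).
alpha = 12
t = 2 * alpha + 3
q1 = next_prime_in_class(2 ** (alpha - 1) + 1, 2)
q2 = next_prime_in_class(q1 + 2, 2)
p_low = 2 ** (t - 1) + 1                 # shared low-order bits (odd)
p1 = next_prime_in_class(p_low, 2 ** t)
p2 = next_prime_in_class(p1 + 2 ** t, 2 ** t)
N1, N2 = p1 * q1, p2 * q2

# (q1, q2) is the shortest vector of {(x, y): N2*x = N1*y mod 2^t}
c = (N2 * pow(N1, -1, 2 ** t)) % 2 ** t
u, _ = lagrange_reduce((1, c), (0, 2 ** t))
q1_found, q2_found = abs(u[0]), abs(u[1])
assert (N1 // q1_found, N2 // q2_found) == (p1, p2)
```

Improving the bound to $t \geq 2\alpha+1$ means the same reduction already succeeds when two fewer low-order bits of the $p_i$ are shared.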

05:22 [Pub][ePrint]

We give an introduction to Fully Homomorphic Encryption for mathematicians. Fully Homomorphic Encryption allows untrusted parties to take encrypted data Enc(m_1),...,Enc(m_t) and any efficiently computable function f, and compute an encryption of f(m_1,...,m_t), without knowing or learning the decryption key or the raw data m_1,...,m_t. The problem of how to do this was recently solved by Craig Gentry, using ideas from algebraic number theory and the geometry of numbers. In this paper we discuss some of the history and background, give examples of Fully Homomorphic Encryption schemes, and discuss the hard mathematical problems on which the cryptographic security is based.
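To make the "compute on ciphertexts" idea concrete, here is a toy symmetric somewhat-homomorphic scheme in the style of the integer-based schemes (van Dijk et al.); the parameters are illustrative and completely insecure. A ciphertext is c = pq + 2r + m for a secret odd p, and adding or multiplying ciphertexts adds or multiplies the plaintext bits, as long as the noise 2r + m stays far below p:

```python
import random

rng = random.Random(0)
p = (1 << 40) | 1            # secret key: a large odd integer (toy size)

def enc(m):
    # c = p*q + 2*r + m ; the "noise" 2*r + m must stay far below p
    q = rng.randrange(1, 1 << 20)
    r = rng.randrange(1 << 8)
    return p * q + 2 * r + m

def dec(c):
    return (c % p) % 2

# Homomorphic operations are plain integer + and * on ciphertexts:
c0, c1 = enc(0), enc(1)
assert dec(c0 + c1) == 1     # XOR of the plaintext bits
assert dec(c0 * c1) == 0     # AND of the plaintext bits
```

Each multiplication multiplies the noise terms, so only shallow circuits decrypt correctly; Gentry's bootstrapping idea, discussed in the paper, is what turns such a "somewhat" homomorphic scheme into a fully homomorphic one.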

05:22 [Pub][ePrint]

DNSSEC deployment is long overdue; however, it seems to be finally taking off. Recent cache poisoning attacks motivate protecting DNS with strong cryptography, rather than with challenge-response 'defenses'.

Our goal is to motivate and help correct DNSSEC deployment. We discuss the state of DNSSEC deployment and the obstacles to it, and present a comprehensive overview of challenges and potential pitfalls of DNSSEC, well known and less known, including:

Vulnerable configurations: we present several DNSSEC configurations which are natural and, based on the limited deployment so far, expected to be popular, yet are vulnerable to attack. This includes NSEC3 opt-out records and inter-domain referrals (in NS, MX and CNAME records).

Incremental deployment: we discuss the potential for increased vulnerability due to popular practices of incremental deployment, and recommend secure practices.

Super-sized response challenges: DNSSEC responses include cryptographic keys and hence are relatively long; we explain how these extra-long responses cause interoperability challenges and can be abused for DoS and even DNS poisoning. We discuss potential solutions.

05:22 [Pub][ePrint]

Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x,y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x,y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors.

If Alice and Bob are only interested in computing f(x,y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors.

We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors.

05:22 [Pub][ePrint]

The notion of \emph{zero-knowledge} \cite{GMR85} is formalized by requiring that for every malicious efficient verifier $V^*$, there exists an efficient simulator $S$ that can reconstruct the view of $V^*$ in a true interaction with the prover, in a way that is indistinguishable to \emph{every} polynomial-time distinguisher. \emph{Weak zero-knowledge} weakens this notion by switching the order of the quantifiers: it only requires that for every distinguisher $D$, there exists a (potentially different) simulator $S_D$.

In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumptions) that for the standard notion of zero-knowledge, its weak and strong counterparts are not equivalent, for meaningful variants of the standard notion, the weak and strong counterparts are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson \cite{GMW91} satisfies a "distributional" variant of zero-knowledge.

Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the \emph{dense model theorem} of Reingold et al. (STOC '08) and the leakage lemma of Gentry-Wichs (STOC '11), and provides a modular and arguably simpler proof of these results (while at the same time recasting them in the language of zero-knowledge).
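The quantifier switch described above can be written out side by side; this merely restates the definitions from the text:

```latex
% Standard zero-knowledge: one simulator fools every distinguisher
\forall V^* \;\exists S \;\forall D :\quad
  D\bigl(S(x)\bigr) \approx D\bigl(\mathrm{view}_{V^*}\langle P, V^*\rangle(x)\bigr)

% Weak zero-knowledge: the simulator may depend on the distinguisher
\forall V^* \;\forall D \;\exists S_D :\quad
  D\bigl(S_D(x)\bigr) \approx D\bigl(\mathrm{view}_{V^*}\langle P, V^*\rangle(x)\bigr)
```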