International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2013-05-28
05:22 [Pub][ePrint]

We prove a security theorem without collision-resistance for a class of 1-key hash-function-based MAC schemes that includes HMAC and Envelope MAC. The proof has some advantages over earlier proofs: it is in the uniform model, it uses a weaker related-key assumption, and it covers a broad class of MACs in a single theorem. However, we also explain why our theorem is of doubtful value in assessing the real-world security of these MAC schemes. In addition, we prove a theorem assuming collision-resistance. From these two theorems we conclude that from a provable security standpoint there is little reason to prefer HMAC to Envelope MAC or similar schemes.
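As a concrete illustration of the two constructions being compared, here is a minimal sketch in Python. The Envelope MAC shown is the generic H(k || m || k) "sandwich" form; padding and key-placement details vary between concrete proposals, so treat it as illustrative rather than as any specific scheme from the paper.

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC(k, m) = H((k xor opad) || H((k xor ipad) || m)); key <= one block."""
    block = 64                                # SHA-256 block size in bytes
    k = key.ljust(block, b"\x00")             # zero-pad the key to a full block
    ipad = bytes(b ^ 0x36 for b in k)
    opad = bytes(b ^ 0x5c for b in k)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

def envelope_mac(key: bytes, msg: bytes) -> bytes:
    """Generic 'envelope' / sandwich MAC: hash the key on both sides of the message."""
    return hashlib.sha256(key + msg + key).digest()

# Sanity check: the hand-rolled HMAC matches the standard library's.
tag = hmac_sha256(b"secret key", b"message")
assert tag == hmac.new(b"secret key", b"message", hashlib.sha256).digest()
```

Both are 1-key MACs built from a single hash-function invocation pattern, which is exactly the class the theorem covers.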

05:22 [Pub][ePrint]

Let $N_1=p_1q_1$ and $N_2=p_2q_2$ be two different RSA moduli. Suppose that $p_1 \equiv p_2 \pmod{2^t}$ for some $t$, and that $q_1$ and $q_2$ are $\alpha$-bit primes. May and Ritzenhofen showed that $N_1$ and $N_2$ can be factored in quadratic time if

$t \geq 2\alpha+3.$

In this paper, we improve this lower bound on $t$. Namely, we prove that $N_1$ and $N_2$ can be factored in quadratic time if

$t \geq 2\alpha+1.$

Furthermore, our simulation results show that our bound is tight.
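The lattice behind this attack is two-dimensional, which makes it small enough to demonstrate directly. The sketch below (toy parameter sizes and helper names are mine, not the paper's) builds two moduli whose $p$-factors share their $t$ low bits and recovers $q_1$ via Lagrange-Gauss reduction, which runs in quadratic time; it uses the original May-Ritzenhofen bound $t = 2\alpha+3$, under which $(q_1, q_2)$ is provably the shortest lattice vector.

```python
import random

def is_prime(n):
    # Deterministic Miller-Rabin, valid for n < 3.3 * 10**24.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2; s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def rand_prime(bits):
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(n):
            return n

def prime_with_low_bits(low, t, extra_bits=12):
    # A prime whose t low bits equal `low` (low must be odd).
    while True:
        k = random.getrandbits(extra_bits) | 1
        p = low + (k << t)
        if is_prime(p):
            return p

def gauss_reduce(b1, b2):
    # Lagrange-Gauss reduction: shortest vector of a 2-dim lattice.
    while True:
        if b1[0]**2 + b1[1]**2 > b2[0]**2 + b2[1]**2:
            b1, b2 = b2, b1
        n1 = b1[0]**2 + b1[1]**2
        dot = b1[0]*b2[0] + b1[1]*b2[1]
        m = (2*dot + n1) // (2*n1)        # nearest integer to dot/n1
        if m == 0:
            return b1
        b2 = (b2[0] - m*b1[0], b2[1] - m*b1[1])

alpha = 20
t = 2*alpha + 3                           # the May-Ritzenhofen bound
q1 = rand_prime(alpha)
q2 = rand_prime(alpha)
while q2 == q1:
    q2 = rand_prime(alpha)
low = random.getrandbits(t) | 1           # shared low t bits of p1, p2
p1 = prime_with_low_bits(low, t)
p2 = prime_with_low_bits(low, t)
N1, N2 = p1*q1, p2*q2

# q1 = c*q2 mod 2^t with c = N1 * N2^{-1} mod 2^t, so (q1, q2) is a short
# vector in the lattice spanned by (2^t, 0) and (c, 1).
c = N1 * pow(N2, -1, 1 << t) % (1 << t)
v = gauss_reduce((1 << t, 0), (c, 1))
q = abs(v[0])
assert N1 % q == 0 and q == q1
```

Since $|(q_1,q_2)| < \sqrt{2}\cdot 2^{\alpha}$ while the lattice determinant is $2^t \ge 2^{2\alpha+3}$, every lattice vector independent of $(q_1,q_2)$ is strictly longer, so the reduction is guaranteed to output $\pm(q_1,q_2)$.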

05:22 [Pub][ePrint]

We give an introduction to Fully Homomorphic Encryption for mathematicians. Fully Homomorphic Encryption allows untrusted parties to take encrypted data Enc(m_1),...,Enc(m_t) and any efficiently computable function f, and compute an encryption of f(m_1,...,m_t), without knowing or learning the decryption key or the raw data m_1,...,m_t. The problem of how to do this was recently solved by Craig Gentry, using ideas from algebraic number theory and the geometry of numbers. In this paper we discuss some of the history and background, give examples of Fully Homomorphic Encryption schemes, and discuss the hard mathematical problems on which the cryptographic security is based.
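To make the Enc/Dec interface above concrete, here is a toy, utterly insecure symmetric scheme over the integers in the spirit of the van Dijk-Gentry-Halevi-Vaikuntanathan construction (the parameter sizes and names are illustrative, not drawn from this survey): a ciphertext hides a bit in its parity after removing a multiple of the secret key.

```python
import random

SK = 1000003  # odd secret key; toy size, completely insecure

def enc(m):
    # ciphertext = bit + 2*(small noise) + (secret key)*(random multiple)
    q = random.randrange(1, 1 << 20)
    r = random.randrange(-15, 16)
    return m + 2*r + SK*q

def dec(c):
    z = c % SK
    if z > SK // 2:       # centre the remainder so negative noise survives
        z -= SK
    return z % 2          # the plaintext bit is the parity of the noise term

c0, c1 = enc(0), enc(1)
assert dec(c0) == 0 and dec(c1) == 1
assert dec(c0 + c1) == 1      # adding ciphertexts XORs the hidden bits
assert dec(c0 * c1) == 0      # multiplying ciphertexts ANDs the hidden bits
```

Addition and multiplication of ciphertexts add and multiply the hidden noise terms, so only a limited number of operations stay decryptable; keeping the noise under control is precisely the problem Gentry's bootstrapping idea solves.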

05:22 [Pub][ePrint]

DNSSEC deployment is long overdue; however, it finally seems to be taking off. Recent cache poisoning attacks motivate protecting DNS with strong cryptography, rather than with challenge-response 'defenses'.

Our goal is to motivate and help correct DNSSEC deployment. We discuss the state of DNSSEC deployment, obstacles to adoption, and potential ways to increase adoption. We then present a comprehensive overview of challenges and potential pitfalls of DNSSEC, both well known and less known, including:

Vulnerable configurations: we present several DNSSEC configurations which are natural and, based on the limited deployment so far, expected to be popular, yet are vulnerable to attack. This includes NSEC3 opt-out records and inter-domain referrals (in NS, MX and CNAME records).

Incremental deployment: we discuss the potential for increased vulnerability due to popular practices of incremental deployment, and recommend secure practices.

Super-sized response challenges: DNSSEC responses include cryptographic keys and hence are relatively long; we explain how these extra-long responses cause interoperability challenges and can be abused for DoS and even DNS poisoning. We discuss potential solutions.

05:22 [Pub][ePrint]

Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x,y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x,y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors.

If Alice and Bob are only interested in computing f(x,y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors.

We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors.

05:22 [Pub][ePrint]

The notion of \emph{zero-knowledge} \cite{GMR85} is formalized by requiring that for every efficient malicious verifier $V^*$, there exists an efficient simulator $S$ that can reconstruct the view of $V^*$ in a true interaction with the prover, in a way that is indistinguishable to \emph{every} polynomial-time distinguisher. \emph{Weak zero-knowledge} weakens this notion by switching the order of the quantifiers: it only requires that for every distinguisher $D$, there exists a (potentially different) simulator $S_D$.
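Written out with explicit quantifiers (in standard notation, which I am assuming rather than quoting from the paper), the two definitions differ only in where $\exists S$ sits relative to $\forall D$:

$$\text{ZK:}\quad \forall V^*\ \exists S\ \forall D:\ \bigl|\Pr[D(\mathrm{view}_{V^*}(x))=1]-\Pr[D(S(x))=1]\bigr| \le \mathrm{negl}(|x|)$$

$$\text{weak ZK:}\quad \forall V^*\ \forall D\ \exists S_D:\ \bigl|\Pr[D(\mathrm{view}_{V^*}(x))=1]-\Pr[D(S_D(x))=1]\bigr| \le \mathrm{negl}(|x|)$$

In the weak variant the simulator may depend on the distinguisher it must fool, which is what makes the equivalence question below non-trivial.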

In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumptions) that for the standard notion of zero-knowledge, the weak and strong counterparts are not equivalent, for meaningful variants of the standard notion, the weak and strong counterparts are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson \cite{GMW91} satisfies a "distributional" variant of zero-knowledge.

Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the \emph{dense model theorem} of Reingold et al. (STOC '08) and the leakage lemma of Gentry-Wichs (STOC '11), and provides a modular and arguably simpler proof of these results (while at the same time recasting them in the language of zero-knowledge).

05:22 [Pub][ePrint]

We employ physical properties of the real world to design a protocol for secure information transmission in which one party can transmit secret information to another over an insecure channel, without any prior secret arrangement between the parties. The distinctive feature of this protocol, compared to all known public-key cryptographic protocols, is that neither party uses a one-way function. In particular, our protocol is secure against a (passive) computationally unbounded adversary.

05:22 [Pub][ePrint]

We propose a general framework for developing fully homomorphic encryption (FHE) schemes without using Gentry's technique. The security relies on the difficulty of solving systems of non-linear equations (an $\mathcal{NP}$-complete problem). While the security of our scheme has not been reduced to a provably hard instance of this problem, its security is investigated globally.

05:22 [Pub][ePrint]

QUAD is a provably secure stream cipher based on multivariate polynomials, proposed in 2006 by Berbain, Gilbert and Patarin \cite{BG06}. In this paper we show how to speed up QUAD over GF(256) by a factor of up to 5.8. We achieve this by using structured systems of polynomials, in particular partially circulant polynomials and polynomials generated by a linear recurring sequence (LRS), instead of random ones. By using this strategy, we can also reduce the system parameter of QUAD by about 99%. We furthermore present experiments which indicate that using structured polynomials of this special form does not affect the security of QUAD.
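The storage saving from circulant structure is easy to see in miniature (toy parameters, not QUAD's actual ones): every row of a circulant coefficient matrix is a rotation of a single stored row, so an n x n block of coefficients is described by n field elements instead of n^2.

```python
import random

n = 8
# One row of coefficients, standing in for GF(256) field elements.
row = [random.randrange(256) for _ in range(n)]
# Row i of the circulant matrix is `row` rotated right by i positions.
circulant = [row[-i:] + row[:-i] for i in range(n)]

assert circulant[0] == row
# Every row is a permutation (rotation) of the single stored row.
assert all(sorted(r) == sorted(row) for r in circulant)
```

Storing n elements instead of n*n is a (1 - 1/n) reduction, which for realistic QUAD dimensions accounts for savings on the order of the 99% quoted above.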

05:22 [Pub][ePrint]

In this paper we consider the problem of secret sharing where shares are encrypted using a public-key encryption (PKE) scheme and ciphertexts are publicly available. While intuition tells us that the secret should be protected if the PKE is secure against chosen-ciphertext attacks (i.e., CCA-secure), formally proving this reveals some subtle and non-trivial challenges. We isolate the problems that this raises, and devise a new analysis technique called "plaintext randomization" that can successfully overcome these challenges, resulting in the desired proof. The encryption of different shares can use one key or multiple keys, with natural applications in both scenarios.
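For concreteness, the underlying (unencrypted) secret-sharing step can be sketched with Shamir's scheme; in the paper's setting each share would then be encrypted under the PKE before being published. The field modulus and helper names here are illustrative choices, not taken from the paper.

```python
import random

P = 2**127 - 1  # prime field modulus (a Mersenne prime; illustrative choice)

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    eval_at = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:5]) == 123456789
```

The question the abstract studies is precisely what happens when each pair (x, y) above is published only as a PKE ciphertext: whether CCA security of the encryption suffices to keep the shared secret hidden.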

05:22 [Pub][ePrint]

Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption.

This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without system-wide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of X, where X is the number of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.