International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2012-08-06
15:17 [Pub][ePrint]

We investigate a new class of authentication codes (A-codes) that support verification by a group of message recipients in the network coding setting. That is, a sender generates an A-code over a message such that any intermediate node or recipient can check the authenticity of the message, typically to detect pollution attacks. We call such an A-code a multi-receiver homomorphic A-code (MRHA-code). In this paper, we first formally define an MRHA-code. We then derive lower bounds on the security parameters and key sizes associated with our MRHA-codes. Moreover, we give efficient constructions of MRHA-code schemes that can be used to mitigate pollution attacks on network codes. Unlike prior work on computationally secure homomorphic signatures and MACs for network coding, our MRHA-codes achieve unconditional security.
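The paper's constructions are not reproduced here; as a minimal sketch of the homomorphic property such a code relies on (a toy illustration, not the paper's scheme; the field size and key layout are assumptions made for the example), a one-time linear tag over a prime field combines under exactly the same coefficients an intermediate node uses to mix packets:

```python
import random

P = 2**61 - 1  # a large prime field; toy parameter, not from the paper


def keygen(n, rng):
    # Secret key: one field element per packet coordinate.
    return [rng.randrange(P) for _ in range(n)]


def tag(key, m):
    # Linear tag: inner product <key, m> mod P.
    return sum(a * x for a, x in zip(key, m)) % P


def verify(key, m, t):
    return tag(key, m) == t


def combine(packets, coeffs):
    # An intermediate node mixes packets and mixes the tags with the
    # SAME coefficients; linearity makes the combined tag verify.
    n = len(packets[0][0])
    m = [sum(c * p[i] for c, (p, _) in zip(coeffs, packets)) % P
         for i in range(n)]
    t = sum(c * t_ for c, (_, t_) in zip(coeffs, packets)) % P
    return m, t
```

Any packet in the linear span of the sent packets carries a verifiable tag, while a polluted packet outside that span verifies only with probability about 1/P; the real constructions achieve this unconditionally for multiple receivers with per-receiver keys.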

05:51 [Conf][CHES]

CHES 2012 early registration deadline Aug. 5.

2012-08-05
18:38 [News]

The IACR Board is seeking volunteers to assist with the maintenance of the IACR website and to serve as general chair for a future Crypto. If you are interested in these tasks, send a message to president@iacr.org.

18:17 [Pub][ePrint]

In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC '07) showed that if a source of randomness R is "good enough" to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an "extractable" source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific "non-extractable" sources of randomness, such as the gamma-Santha-Vazirani (SV) source, where each next bit has fresh entropy but is allowed to have a small bias gamma < 1 (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC '06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary "low sensitivity" functions that works even with randomness coming from a gamma-Santha-Vazirani source, for any gamma < 1.

18:17 [Pub][ePrint]

In this work we focus on a simple database commitment functionality where, besides the standard security properties, one would like to hide the size of the input of the sender. Hiding the size of the input of a player is a critical requirement in some applications, and relatively few works have considered it. Notable exceptions are the work on zero-knowledge sets introduced in [MRK03] and recent work on size-hiding private set intersection [ADT11]. However, neither of these achieves a secure computation (i.e., a reduction of a real-world attack of a malicious adversary into an ideal-world attack) of the proposed functionality.
The first result of this submission consists in defining "secure" database commitment and in observing that previous constructions do not satisfy this definition. This leaves open the question of whether this functionality can be achieved at all. We then provide an affirmative answer to this question by using new techniques that, combined together, achieve "secure" database commitment. Our construction is in particular optimized to require only a constant number of rounds, to provide non-interactive proofs on the content of the database, and to rely only on the existence of a family of CRHFs. This is the first result where input-size hiding secure computation is achieved for an interesting functionality, and moreover we obtain this result with standard security (i.e., simulation in expected polynomial time against fully malicious adversaries, without random oracles, non-black-box extraction assumptions, hardness assumptions against super-polynomial time adversaries, or other controversial/strong assumptions). A key building block in our construction is a universal argument enjoying an improved proof of knowledge property, which we call quasi-knowledge. This property is significantly closer to the standard proof of knowledge property than the weak proof of knowledge property satisfied by previous constructions.

18:17 [Pub][ePrint]

Motivated by the question of access control in cloud storage, we consider the problem of using Attribute-Based Encryption (ABE) in a setting where users' credentials may change and ciphertexts may be stored by a third party. We find that a comprehensive solution to our problem must simultaneously allow for the revocation of ABE private keys as well as the ability to update ciphertexts to reflect the most recent updates. Our main result is obtained by pairing two contributions:

- Revocable storage.
We ask how a third party can process a ciphertext to disqualify revoked users from accessing data that was encrypted in the past, while the user still had access. In applications, such storage may be with an untrusted entity, and as such we require that the ciphertext management operations can be done without access to any sensitive data (which rules out decryption and re-encryption). We define the problem of revocable storage and provide a fully secure construction. Our core tool is a new procedure that we call ciphertext delegation. One can apply ciphertext delegation to a ciphertext encrypted under a certain access policy to `re-encrypt' it to a more restrictive policy using only public information. We provide a full analysis of the types of delegation possible in a number of existing ABE schemes.

- Protecting newly encrypted data. We consider the problem of ensuring that newly encrypted data is not decryptable by a user's key if that user's access has been revoked. We give the first method for obtaining this revocation property in a fully secure ABE scheme. We provide a new and simpler approach to this problem that makes minimal modifications to standard ABE. We identify and define a simple property called piecewise key generation which gives rise to efficient revocation. We build such solutions for Key-Policy and Ciphertext-Policy Attribute-Based Encryption by modifying an existing ABE scheme due to Lewko et al. to satisfy our piecewise property, and we prove security in the standard model.

It is the combination of our two results that gives an approach for revocation. A storage server can update stored ciphertexts to disqualify revoked users from accessing data that was encrypted before the user's access was revoked. This is the full version of the Crypto 2012 paper.

18:17 [Pub][ePrint]

In this paper, we study the security proofs of GCM (Galois/Counter Mode of Operation).
We first point out that a lemma, which is related to the upper bound on the probability of a counter collision, is invalid. Both the original privacy and authenticity proofs by the designers are based on this lemma. We further show that the observation can be translated into a distinguishing attack that invalidates the main part of the privacy proof. It turns out that the original security proofs of GCM contain a flaw, and hence the claimed security bounds are not justified. A very natural question is then whether the proofs can be repaired. We give an affirmative answer by presenting new security bounds, both for privacy and authenticity. As a result, although the security bounds are larger than what was previously claimed, GCM maintains its provable security. We also show that, when the nonce length is restricted to 96 bits, GCM has better security bounds than in the general case of variable-length nonces.

18:17 [Pub][ePrint]

As the most prevalent two-factor authentication mechanism, smart card based password authentication has been a subject of intensive research in the past decade, and hundreds of schemes of this type have been proposed. However, most of them were found severely flawed, especially prone to the smart card loss problem, shortly after they were first put forward, no matter whether the security was heuristically analyzed or formally proved. In SEC '12, Wang pointed out that the main cause of this issue is the lack of an appropriate security model that fully identifies the practical threats. To address the issue, Wang presented three kinds of security models, namely Type I, II and III, and further proposed four concrete schemes, only two of which, i.e. PSCAV and PSCAb, are claimed to be secure under the harshest model, i.e. the Type III security model.
However, in this paper, we demonstrate that PSCAV still cannot achieve the claimed security goals and is vulnerable to an offline password guessing attack and other attacks in the Type III security model, while PSCAb has several practical pitfalls. As our main contribution, a robust scheme is presented to cope with the aforementioned defects, and it is proven secure in the random oracle model. Moreover, the analysis demonstrates that our scheme meets all the proposed criteria and eliminates several hard security threats that have been difficult to tackle at the same time in previous scholarship.

18:17 [Pub][ePrint]

This paper shows preimage attacks against reduced SHA-1 up to 57 steps. The best previous attack, presented at CRYPTO 2009, was for 48 steps, finding a two-block preimage with incorrect padding at the cost of 2^159.3 evaluations of the compression function. For the same variant, our attacks find a one-block preimage at 2^150.6 and a correctly padded two-block preimage at 2^151.1 evaluations of the compression function. The improved results come out of a differential view on the meet-in-the-middle technique originally developed by Aoki and Sasaki. The new framework closely relates meet-in-the-middle attacks to differential cryptanalysis, which turns out to be particularly useful for hash functions with linear message expansion and weak diffusion properties.

18:17 [Pub][ePrint]

Adaptively secure multiparty computation is an essential and fundamental notion in cryptography. In this work we focus on the basic question of constructing a multiparty computation protocol secure against a *malicious*, *adaptive* adversary in the *stand-alone* setting, without assuming an honest majority, in the plain model. It has been believed that this question can be resolved by composing known protocols from the literature. We show that, in fact, this belief is fundamentally mistaken.
In particular, we show:

- **Round inefficiency is unavoidable when using black-box simulation:** There does not exist any $o(n/\log n)$-round protocol that adaptively securely realizes a (natural) $n$-party functionality with a black-box simulator. Note that most previously known protocols in the adaptive security setting relied on black-box simulators.

- **A constant round protocol using non-black-box simulation:** We construct a *constant round* adaptively secure multiparty computation protocol in a setting *without honest majority* that makes crucial use of non-black-box techniques.

Taken together, these results give the first resolution to the question of adaptively secure multiparty computation against a malicious adversary with a dishonest majority in the plain model, open since the first formal treatment of adaptive security for multiparty computation in 1996.

18:17 [Pub][ePrint]

Group signatures are a central cryptographic primitive whereby users can anonymously and accountably sign messages in the name of a group they belong to. Several efficient constructions with security proofs in the standard model (i.e., without the random oracle idealization) have appeared in recent years. However, like standard PKIs, group signatures need an efficient revocation system to be practical. Despite years of research, membership revocation remains a non-trivial problem: many existing solutions do not scale well due to either high overhead or constraining operational requirements (such as the need for all users to update their keys after each revocation). Only recently, Libert, Peters and Yung (Eurocrypt '12) suggested a new scalable revocation method, based on the Naor-Naor-Lotspiech (NNL) broadcast encryption framework, that interacts nicely with techniques for building group signatures in the standard model. While promising, their mechanism introduces significant storage requirements for group members. Namely, membership certificates, which used to have constant size in existing standard-model constructions, now have polylog size in the maximal cardinality of the group (NNL, after all, is a tree-based technique, and such a dependency is naturally expected).

In this paper we show how to obtain private keys of *constant* size. To this end, we introduce a new technique to leverage the NNL subset cover framework in the context of group signatures but, perhaps surprisingly, without a logarithmic relationship between the size of private keys and the group cardinality. Namely, we provide a way for users to efficiently prove their membership in one of the generic subsets of the NNL subset cover framework. This technique makes our revocable group signatures competitive with ordinary group signatures (i.e., without revocation) in the standard model. Moreover, unrevoked members (as in PKIs) still do not need to update their keys after each revocation.
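The abstract presupposes familiarity with the NNL subset cover framework; as a small illustration of one instance of it (the complete-subtree method, shown here for background and not as this paper's contribution), the cover of the unrevoked users is read off the Steiner tree of the revoked leaves:

```python
def complete_subtree_cover(height, revoked):
    """Complete-subtree cover: full subtrees whose leaves are exactly the
    unrevoked users in a binary tree with 2**height leaves.

    Nodes use heap indexing: root = 1, children of v are 2v and 2v + 1,
    and leaf i (0-based) is node 2**height + i.
    """
    n_leaves = 1 << height
    # Steiner tree: union of root-to-leaf paths of all revoked leaves.
    steiner = set()
    for leaf in revoked:
        v = n_leaves + leaf
        while v >= 1:
            steiner.add(v)
            v //= 2
    if not steiner:
        return [1]  # nobody revoked: the root's subtree covers everyone
    cover = []
    for v in steiner:
        if v < n_leaves:  # internal Steiner node
            for child in (2 * v, 2 * v + 1):
                if child not in steiner:
                    # A subtree hanging off the Steiner tree contains only
                    # unrevoked leaves, so it joins the cover.
                    cover.append(child)
    return sorted(cover)
```

With 8 leaves and user 0 revoked, the cover is the subtrees rooted at nodes 3, 5 and 9 (users 4-7, users 2-3, and user 1); no revoked leaf lies in any cover subtree, which is what a member must prove, and the paper's point is that this membership proof can be made without keys growing logarithmically in the group size.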