International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources for staying up to date are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2013-05-28
05:22 [Pub][ePrint]

We put forward a new notion, function privacy, in identity-based encryption and, more generally, in functional encryption. Intuitively, our notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. This is motivated by the need to provide predicate privacy in public-key searchable encryption. Formalizing such a notion, however, is not straightforward: given a decryption key, it is always possible to learn some information on its corresponding identity by testing whether it correctly decrypts ciphertexts encrypted for specific identities.

In light of such an inherent difficulty, any meaningful notion of function privacy must be based on the minimal assumption that, from the adversary's point of view, identities that correspond to its given decryption keys are sampled from somewhat unpredictable distributions. We show that this assumption is in fact sufficient for obtaining a strong and realistic notion of function privacy. Loosely speaking, our framework requires that a decryption key corresponding to an identity sampled from any sufficiently unpredictable distribution is indistinguishable from a decryption key corresponding to an independently and uniformly sampled identity.

Within our framework we develop an approach for designing function-private identity-based encryption schemes, leading to constructions that are based on standard assumptions in bilinear groups (DBDH, DLIN) and lattices (LWE). In addition to function privacy, our schemes are also anonymous, and thus yield the first public-key searchable encryption schemes that are provably keyword private: a search key sk_w makes it possible to identify encryptions of an underlying keyword w without revealing any additional information about w beyond the minimum necessary, as long as w is sufficiently unpredictable.

05:22 [Pub][ePrint]

Abstract: We present a paper-based voting method that aims to achieve voter privacy, universal verifiability, and election integrity using only paper ballots and no cryptographic methods. The voting procedure is easy: the voter need only select their choice on the screen of an electronic device, and the device carries out the rest of the procedure. The voter receives a receipt that can be used to verify that their vote was counted in the final tally as intended; however, the receipt cannot be used to reveal whom they voted for, and vote selling or coercion is impossible even with the voter's cooperation. Each ballot in our voting method has two kinds of bubbles, positive and negative. Ballots are prepared for voting in prepackaged form (e.g., 5 ballots per package), with some bubbles of each ballot prefilled at random. Within a package, the numbers of prefilled positive and negative bubbles are equal, both overall and for each candidate. For example, if every package has 30 filled bubbles and there are three candidates, there are 10 filled bubbles per candidate in the package, half of them positive and half negative. The OneBallot voting procedure is as follows: the voter inserts the ballot into an electronic device and chooses a candidate on the device's screen. The device then prints another ballot identical to the original except for one difference: it either fills one positive bubble or unfills one negative bubble for the selected candidate. The first action could be performed on the original ballot, but the second inevitably requires printing a new ballot. The device then makes a copy of the new ballot as the voter's receipt and transfers the new ballot to the ballot box. After the election, copies of all ballots are posted on a public board (e.g., a website).
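The bubble mechanics can be sketched in a few lines. The package parameters below are assumptions for illustration; the key invariant is that a fresh ballot has a net score of zero for every candidate, and casting a vote shifts the chosen candidate's net score by exactly +1 whether the device fills a positive bubble or unfills a negative one.

```python
import random

CANDIDATES = ["A", "B", "C"]
PREFILLED = 5  # prefilled bubbles per candidate and per sign (assumed parameter)

def fresh_ballot():
    # ballot[c] = [# filled positive bubbles, # filled negative bubbles]
    return {c: [PREFILLED, PREFILLED] for c in CANDIDATES}

def cast(ballot, choice):
    # The device either fills one positive bubble or unfills one negative
    # bubble for the chosen candidate; the net effect is +1 either way.
    pos, neg = ballot[choice]
    if random.random() < 0.5 and neg > 0:
        ballot[choice] = [pos, neg - 1]
    else:
        ballot[choice] = [pos + 1, neg]
    return ballot

def tally(ballots):
    # positive minus negative bubbles; a fresh ballot contributes zero
    return {c: sum(b[c][0] - b[c][1] for b in ballots) for c in CANDIDATES}

votes = ["A", "A", "B"]
assert tally([cast(fresh_ballot(), v) for v in votes]) == {"A": 2, "B": 1, "C": 0}
```

Because the receipt shows only one of many equally likely filled-bubble patterns, it does not reveal which bubble the device changed.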

05:22 [Pub][ePrint]

In this paper we prove a basic property of the Weil pairing which helps in evaluating its value. We show that the Weil pairing value as computed from the definition is equivalent to the ratio formula based on the Miller function. We prove a novel theorem (Theorem 2) and use it to establish the equivalence. We further validate our claims with actual random examples.
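For reference, one common form of the Miller-function ratio formula reads as follows (conventions differ by normalization across texts): for $P, Q \in E[n]$ and a suitable auxiliary point $S$,

$$ e_n(P, Q) = \frac{f_P(Q+S)}{f_P(S)} \Big/ \frac{f_Q(P-S)}{f_Q(-S)}, $$

where $f_P$ denotes a Miller function with divisor $n(P) - n(\mathcal{O})$, and $S$ is chosen so that all evaluations are defined and nonzero.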

05:22 [Pub][ePrint]

Ristenpart, Shacham and Shrimpton (Eurocrypt 2011) recently presented schemes which are provably secure in the random-oracle model (ROM), but easily broken if the random oracle is replaced by typical indifferentiable hash constructions such as chop-MD or prefix-free-MD. They found that the indifferentiability framework, due to Maurer, Renner and Holenstein (TCC 2004), does not necessarily allow composition in multi-stage settings, that is, settings consisting of multiple disjoint adversarial stages. On the positive side, they prove that the non-adaptive chosen distribution attack (CDA) game of Bellare et al. (Asiacrypt 2009), a multi-stage game capturing the security of deterministic encryption schemes, remains secure if the random oracle is implemented by an NMAC-like hash function.

In this paper we introduce a framework to work with the indifferentiability notion in multi-stage scenarios. For this we provide a model for iterative hash functions which is general enough to cover not only NMAC-like functions, but also functions such as chop-MD or even hash trees. We go on to define a property on multi-stage games called *unsplittability* which intuitively captures that adversaries cannot split the computation of a single hash value over several stages. We present a composition theorem for unsplittable multi-stage games which generalizes the single-stage composition theorem for indifferentiable hash functions. We then show that the CDA game (adaptive or non-adaptive) is unsplittable for *any* iterative hash function (thereby extending the preliminary results by Ristenpart et al.). Finally, we prove that the *proof-of-storage* game presented by Ristenpart et al. as a counterexample to the general applicability of the indifferentiability framework is unsplittable for any multi-round iterative hash function, such as Liskov's Zipper Hash (SAC 2006).
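To make the repeatedly mentioned chop-MD construction concrete, here is a minimal sketch of its shape: iterate a compression function over length-padded message blocks, then truncate (chop) the final chaining value. The compression function and parameters below are stand-ins, not any standardized design.

```python
import hashlib

BLOCK = 64  # bytes per message block (assumed toy parameter)

def f(state, block):
    # toy compression function built from SHA-256 (not a real MD design)
    return hashlib.sha256(state + block).digest()

def md_pad(msg):
    # minimal MD-style padding: 0x80, zeros, 8-byte big-endian bit length
    length = (8 * len(msg)).to_bytes(8, "big")
    zeros = b"\x00" * ((BLOCK - (len(msg) + 9) % BLOCK) % BLOCK)
    return msg + b"\x80" + zeros + length

def chop_md(msg, chop=16):
    state = b"\x00" * 32  # fixed IV
    padded = md_pad(msg)
    for i in range(0, len(padded), BLOCK):
        state = f(state, padded[i:i + BLOCK])
    return state[:-chop]  # "chop": drop trailing bytes of the final state

assert len(chop_md(b"hello")) == 16
```

Chopping hides part of the internal state from the output, which defeats length-extension and makes the construction indifferentiable in the single-stage sense; the point of the abstract above is that such single-stage guarantees need not survive multi-stage games.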

05:22 [Pub][ePrint]

This paper describes a new algorithm for breaking the McEliece cryptosystem built on the binary Reed-Muller code $RM(r, m)$; the algorithm recovers the private key from the public key. It has complexity $O(n^d + n^4 \log_2 n)$ bit operations, where $n = 2^m$ and $d = \gcd(r, m-1)$. When $\gcd(r, m-1)$ is bounded, the attack has polynomial complexity. Practical results of our implementation show that McEliece cryptosystems based on codes of length $n = 65536$ bits can be broken in less than 7 hours on a personal computer.
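A quick sketch of how the parameter $d$ governs the stated complexity (illustrative only; the attack itself is not reproduced here):

```python
from math import gcd, log2

def attack_bitops(m, r):
    # evaluates the stated bound O(n^d + n^4 log2 n), n = 2^m, d = gcd(r, m-1)
    n = 2 ** m
    d = gcd(r, m - 1)
    return n ** d + n ** 4 * log2(n)

# For RM(r, 16) the code length is n = 2^16 = 65536; whenever gcd(r, 15) = 1
# the first term collapses to n and the n^4 log2 n term dominates.
assert gcd(2, 15) == 1 and gcd(4, 15) == 1
```

The cost is polynomial in $n$ exactly when $\gcd(r, m-1)$ stays bounded, which is the limitation stated in the abstract.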

05:22 [Pub][ePrint]

In this paper, a security analysis of block ciphers whose key length exceeds the block length is proposed. For a well-designed block cipher with key length k and block length n such that k > n, and for any plaintext-ciphertext pair (P, C), there are about 2^{k-n} keys that map P to C. For a given block cipher, if there is an efficient algorithm that can classify such keys, we propose an algorithm that recovers the secret key with complexity O(max{2^n, 2^{k-n}}). We apply this method to the 2-round block cipher KASUMI.
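The $2^{k-n}$ counting claim is easy to verify on a deliberately trivial toy cipher with $k = 8$ key bits and $n = 4$ block bits (a real, well-designed cipher only matches this count on average, but the toy makes it exact):

```python
def toy_encrypt(key, pt):
    # toy cipher: 8 key bits, 4 block bits; the key acts only via key mod 16
    return (pt + key) % 16

P, C = 3, 9
keys = [k for k in range(2 ** 8) if toy_encrypt(k, P) == C]
assert len(keys) == 2 ** (8 - 4)  # exactly 2^{k-n} keys map P to C
```

With one known pair, 2^{k-n} candidate keys survive; classifying or enumerating them efficiently is what drives the O(max{2^n, 2^{k-n}}) recovery complexity.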

05:22 [Pub][ePrint]

We present novel security requirements for second price auctions and a simple, efficient and practical protocol that provably maintains these requirements. Novel requirements are needed because commonly used requirements, such as the indistinguishability-based secrecy requirement for encryption schemes presented by Goldwasser and Micali (1982), do not fit properly in the second price auctions context. Additionally, the presented protocol uses a trustworthy supervisor that checks whether the auctioneer deviated from the protocol and fines him accordingly. By making sure the auctioneer's expected utility when deviating from the protocol is lower than his expected utility when abiding by it, we ascertain that a *rational* auctioneer will abide by the protocol. This allows the supervisor to optimize by performing (computationally intensive) inspections of the auctioneer with only low probability.
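The rationality argument can be made concrete with a back-of-the-envelope calculation; the numbers below are hypothetical, not taken from the paper.

```python
def min_inspection_prob(cheat_gain, fine):
    # A rational auctioneer abides once the expected utility of deviating,
    # cheat_gain - p * fine, is negative, i.e. for any p above this threshold.
    return cheat_gain / fine

# With a possible gain of 100 and a fine of 10000, inspecting just over
# 1% of auctions already makes deviation unprofitable in expectation.
assert min_inspection_prob(100.0, 10_000.0) == 0.01
```

This is why the supervisor can afford to run its computationally intensive inspections only with low probability.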

05:22 [Pub][ePrint]

We present and implement schemes for authenticating messages from a group of users to a recipient, with revocable anonymity and a massive (very high) message rate. Our implementations present a trade-off between efficiency and the security required: from online group managers that participate in every message sent to offline managers, and from assuming a trusted group manager and a trusted recipient to securing against both entities. All implementations have the *traceability* feature, which allows distributively and efficiently tracing all messages that originated from a specific group member without violating the anonymity of other members. In addition, our schemes are efficient and practical.

05:22 [Pub][ePrint]

Over the past decade bilinear maps have been used to build a large variety of cryptosystems. In parallel to new functionalities, we have also seen the emergence of many security assumptions. This leads to the general question of how to compare two such assumptions. Boneh, Boyen and Goh introduced the Uber assumption as an attempt to offer a general framework for security assessment. Their idea is to propose a generic security assumption that can be specialized to suit the needs of any proof of a protocol involving bilinear pairings. Even though the Uber assumption has only been stated in the bilinear setting, it can easily be restated to deal with ordinary Diffie-Hellman groups and to assess other types of protocols.

In this article, we explore some particular cases of the Uber assumption, namely the n-CDH-assumption, the nth-CDH-assumption and the Q-CDH-assumption. We analyse the relationships between these cases, specifically from a security point of view. Our analysis does not rely on any special property of the considered group(s) and does not use the generic group model.

05:22 [Pub][ePrint]

We put forward a message authentication code (MAC) for which we claim a high degree of resilience against a key-recovering attacker exploiting practical side channels. We achieve this by blending the lessons learned from many years of engineering with the scientific approach provided by leakage resilience. This highlights how the two often disparate fields can benefit from each other.

Our MAC is relatively simple and intuitive: we essentially base our construction on bilinear groups and secret share out our key. The shares are then refreshed before each time they are used and the algebraic properties of the bilinear pairing are used to compute the tag without the need to reconstruct the key.

This approach allows us to prove (in the random oracle model) existential unforgeability of the MAC under chosen message attacks in the presence of (continuous) leakage, based on two novel assumptions: a bilinear Diffie-Hellman variant and an assumption about how leaky performing a group operation is.

In practice we envision our scheme would be implemented using pairings on a pairing-friendly elliptic curve, where the leakiness of the group operation can be experimentally estimated. This allows us to argue about practical implementation aspects and security considerations of our scheme.

We compare our scheme against other leakage-resilient MACs (and related schemes) that have appeared in the literature and conclude that ours is both the most efficient and by far the most practical.
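The share-and-refresh idea described above can be sketched with plain additive secret sharing modulo a prime. This toy only shows the refresh invariant; the actual scheme shares exponents in a bilinear group and uses the pairing to compute tags without ever recombining the shares.

```python
import secrets

Q = (1 << 127) - 1  # toy prime modulus (2^127 - 1 is a Mersenne prime)

def share(key):
    # split key into two additive shares mod Q
    s1 = secrets.randbelow(Q)
    return s1, (key - s1) % Q

def refresh(s1, s2):
    # re-randomize both shares; the shared secret is unchanged
    r = secrets.randbelow(Q)
    return (s1 + r) % Q, (s2 - r) % Q

key = 123456789
s1, s2 = share(key)
for _ in range(5):
    s1, s2 = refresh(s1, s2)
# recombined here only to check the invariant; the real scheme never does this
assert (s1 + s2) % Q == key
```

Because each share is fresh and uniformly random at every use, a side-channel trace of one invocation reveals nothing reusable about the long-term key.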

05:21 [Pub][ePrint]

Recent advances in lattice cryptography, mainly stemming from the development of ring-based primitives such as ring-LWE, have made it possible to design cryptographic schemes whose efficiency is competitive with that of more traditional number-theoretic ones, along with entirely new applications like fully homomorphic encryption. Unfortunately, realizing the full potential of ring-based cryptography has so far been hindered by a lack of practical algorithms and analytical tools for working in this context. As a result, most previous works have focused on very special classes of rings such as power-of-two cyclotomics, which significantly restricts the possible applications.

We bridge this gap by introducing a toolkit of fast, modular algorithms and analytical techniques that can be used in a wide variety of ring-based cryptographic applications, particularly those built around ring-LWE. Our techniques yield applications that work in *arbitrary* cyclotomic rings, with *no loss* in their underlying worst-case hardness guarantees, and very little loss in computational efficiency, relative to power-of-two cyclotomics. To demonstrate the toolkit's applicability, we develop two illustrative applications: a public-key cryptosystem and a "somewhat homomorphic" symmetric encryption scheme. Both apply to arbitrary cyclotomics, have tight parameters, and admit very efficient implementations.
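For context, arithmetic in the special power-of-two case that earlier works relied on is particularly simple: the ring is $\mathbb{Z}_q[x]/(x^n + 1)$ with $n$ a power of two, and schoolbook multiplication there looks as follows. This is only a sketch of why that case is convenient; the toolkit's actual algorithms are far more sophisticated and cover arbitrary cyclotomics.

```python
def negacyclic_mul(a, b, q):
    # multiply coefficient vectors in Z_q[x]/(x^n + 1); wrap-around picks
    # up a sign because x^n = -1 in this ring
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c

# (x) * (x^3) = x^4 = -1 in Z_17[x]/(x^4 + 1)
assert negacyclic_mul([0, 1, 0, 0], [0, 0, 0, 1], 17) == [16, 0, 0, 0]
```

The sign-flipping wrap-around is exactly what fast negacyclic FFT/NTT techniques exploit in the power-of-two case, and what becomes nontrivial in arbitrary cyclotomic rings.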