International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2012-08-22
00:17 [Pub][ePrint]

A $t$-round key-alternating cipher can be viewed as an abstraction of AES. It defines a cipher $E$ from $t$ fixed public permutations $P_1, \ldots, P_t : \{0,1\}^n \to \{0,1\}^n$ and a key $k = k_0 \Vert \cdots \Vert k_t \in \{0,1\}^{n(t+1)}$ by setting $E_{k}(x) = k_t \oplus P_t(k_{t-1} \oplus P_{t-1}(\cdots k_1 \oplus P_1(k_0 \oplus x) \cdots))$. The indistinguishability of $E_k$ from a truly random permutation by an adversary who also has oracle access to the (public) random permutations $P_1, \ldots, P_t$ was investigated for $t = 1$ by Even and Mansour and, much later, by Bogdanov et al. The former proved indistinguishability up to $2^{n/2}$ queries for $t = 1$, while the latter proved indistinguishability up to $2^{2n/3}$ queries for $t \geq 2$ (ignoring low-order terms). Our contribution is to improve the analysis of Bogdanov et al. by showing security up to $2^{3n/4}$ queries for $t \geq 3$. Given that security cannot exceed $2^{\frac{t}{t+1}n}$ queries, this in particular achieves a tight bound for the case $t = 3$, whereas, previously, tight bounds had only been achieved for $t = 1$ (by Even and Mansour) and for $t = 2$ (by Bogdanov et al.). Our main technique is an improved analysis of the elegant \emph{sample distinguishability} game introduced by Bogdanov et al. More specifically, we succeed in eliminating adaptivity by considering the Hellinger advantage of an adversary, a notion that we introduce here. To our knowledge, our result constitutes the first time Hellinger distance (a standard measure of "distance" between random variables, and a cousin of statistical distance) is used in a cryptographic indistinguishability proof.
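As a concrete illustration, the construction can be sketched on toy block sizes, with the public permutations modeled as random lookup tables (a hypothetical toy, not the paper's analysis; real instantiations such as AES use structured round functions):

```python
import random

def key_alternating_cipher(n, t, seed=0):
    """Toy t-round key-alternating cipher on n-bit blocks.

    The t fixed public permutations P_1..P_t are modeled here as
    random lookup tables over {0,1}^n.
    """
    rng = random.Random(seed)
    size = 1 << n
    perms, invs = [], []
    for _ in range(t):
        p = list(range(size))
        rng.shuffle(p)
        inv = [0] * size
        for x, y in enumerate(p):
            inv[y] = x
        perms.append(p)
        invs.append(inv)

    def encrypt(keys, x):            # keys = (k_0, ..., k_t)
        y = x ^ keys[0]
        for i in range(t):
            y = perms[i][y] ^ keys[i + 1]
        return y

    def decrypt(keys, y):
        for i in reversed(range(t)):
            y = invs[i][y ^ keys[i + 1]]
        return y ^ keys[0]

    return encrypt, decrypt
```

In the security model of the abstract, the adversary additionally gets oracle access to the public tables `perms` and `invs`; only the key material `k_0, ..., k_t` is secret.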

00:17 [Pub][ePrint]

This paper considers several approaches for increasing the performance of software implementations of the integer multiplication algorithm on 32-bit and 64-bit platforms via parallelization. The main idea of the parallelization is the delayed-carry mechanism that the authors proposed earlier [11]. The delayed carry removes the dependency between the loop iterations that accumulate sums of products, which allows those iterations to execute in parallel in separate threads. Once the accumulation threads complete, the final result must be corrected by assimilating the carries. The first approach optimizes the parallelization for two execution threads; the second approach is an evolution of the first and targets three or more execution threads. The proposed approaches increase the total computational complexity of the algorithm relative to a single execution thread, but decrease the total execution time on a multi-core CPU.
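The delayed-carry idea can be sketched as follows (a single-threaded illustration of the principle; in the paper's setting the independent column accumulations would be split across threads, each writing to its own accumulator array, merged by addition before the final pass):

```python
def mul_delayed_carry(a, b, base=1 << 32):
    """Multiply two multi-precision integers, given as little-endian
    limb lists, postponing all carry propagation to one final pass."""
    acc = [0] * (len(a) + len(b))
    # Accumulation phase: partial products are summed into wide
    # accumulators with no carry propagation, so the iterations over i
    # carry no loop dependency and could run in separate threads.
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            acc[i + j] += ai * bj
    # Carry assimilation: a single sequential pass normalizes limbs.
    carry, out = 0, []
    for v in acc:
        v += carry
        out.append(v % base)
        carry = v // base
    while carry:
        out.append(carry % base)
        carry //= base
    return out
```

In C the accumulators would be 64-bit (or wider) registers holding sums of 32-bit limb products; Python's unbounded integers stand in for them here.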

00:17 [Pub][ePrint]

Confidentiality and authenticity are two fundamental security requirements of public-key cryptography; they are achieved by encryption schemes and digital signatures, respectively. Tan [1] proved that the signcryption scheme of Libert et al. [2] is not secure against non-adaptive chosen-ciphertext attacks, and also that the semantically secure symmetric encryption scheme used in Libert et al.'s scheme is not sufficient to guarantee security against adaptive chosen-ciphertext attacks. Here we propose a modified version of Libert et al.'s scheme: a provably secure signcryption scheme in the random oracle model that is more efficient and secure than Libert et al.'s scheme. Its security is proven under two assumptions, namely the Strong Diffie-Hellman (SDH) and Diffie-Hellman Inversion (DHI) assumptions, in the random oracle model.

2012-08-20
00:17 [Pub][JoC]

Abstract: We devise a notion of polynomial runtime suitable for the simulation-based security analysis of multi-party cryptographic protocols. Somewhat surprisingly, straightforward notions of polynomial runtime lack expressivity for reactive tasks and/or lead to an unnatural simulation-based security notion. Indeed, the problem has been recognized in previous works, and several notions of polynomial runtime have already been proposed. However, our new notion, dubbed reactive polynomial time, is the first to combine the following properties:

• it is simple enough to support simple security/runtime analyses,
• it is intuitive in the sense that all intuitively feasible protocols and attacks (and only those) are considered polynomial-time,
• it supports secure composition of protocols in the sense of a universal composition theorem.

We work in the Universal Composability (UC) protocol framework. We remark that while the UC framework already features a universal composition theorem, we develop new techniques to prove secure composition in the case of reactively polynomial-time protocols and attacks.

• Content Type Journal Article
• Pages 1-67
• DOI 10.1007/s00145-012-9127-4
• Authors

• Dennis Hofheinz, Karlsruhe Institute of Technology, Karlsruhe, Germany
• Dominique Unruh, University of Tartu, Tartu, Estonia
• Jörn Müller-Quade, Karlsruhe Institute of Technology, Karlsruhe, Germany

• Journal Journal of Cryptology
• Online ISSN 1432-1378
• Print ISSN 0933-2790

From: Thu, 16 Aug 2012 16:03:17 GMT

2012-08-19
17:48 [Conf][Crypto]

CRYPTO 2012 starts today!

If you have any conference-related stories or photos to share, please email crypto2012@iacr.org.

2012-08-18
06:17 [Pub][ePrint]

Optimizing the maximum, or average, length of the shares in relation to the length of the secret for every given access structure is a difficult and long-standing open problem in cryptology. Most of the known lower bounds on these parameters have been obtained by implicitly or explicitly using that every secret sharing scheme defines a polymatroid related to the access structure. The best bounds that can be obtained by this combinatorial method can be determined by using linear programming, and this can be effectively done for access structures on a small number of participants.

By applying this linear programming approach, we improve some of the known lower bounds for the access structures on five participants and the graph access structures on six participants for which these parameters were still undetermined. Nevertheless, the lower bounds that are obtained by this combinatorial method are not tight in general. For some access structures, they can be improved by adding non-Shannon information inequalities to the linear program as new constraints. We obtain in this way new separation results for some graph access structures on eight participants and for some ports of non-representable matroids. Finally, we prove that, for two access structures on five participants, the combinatorial lower bound cannot be attained by any linear secret sharing scheme.
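The linear programming method can be sketched on a small example. The toy LP below (our illustration, not the paper's code) computes the Shannon/polymatroid lower bound on the maximum share length, normalized to the secret length, for the path access structure on four participants; for this graph the optimal information ratio is known to be 3/2:

```python
from itertools import combinations
from scipy.optimize import linprog

P = frozenset({1, 2, 3, 4})        # participants on the path 1-2-3-4
S = 0                              # the secret
U = P | {S}
MIN_QUAL = [frozenset(e) for e in ((1, 2), (2, 3), (3, 4))]

def qualified(A):
    return any(q <= A for q in MIN_QUAL)

subsets = [frozenset(c) for k in range(len(U) + 1)
           for c in combinations(sorted(U), k)]
idx = {A: i for i, A in enumerate(subsets)}
n = len(subsets) + 1               # one LP variable f(A) per subset, plus v
V = n - 1                          # index of the objective variable v

def row(terms):                    # terms: list of (subset, coefficient)
    r = [0.0] * n
    for A, c in terms:
        r[idx[A]] += c
    return r

A_eq, b_eq = [], []
A_eq.append(row([(frozenset(), 1)])); b_eq.append(0)      # f(empty) = 0
A_eq.append(row([(frozenset({S}), 1)])); b_eq.append(1)   # f(s) = 1
for A in subsets:
    if A and A <= P:
        if qualified(A):           # qualified sets determine the secret
            A_eq.append(row([(A | {S}, 1), (A, -1)])); b_eq.append(0)
        else:                      # unqualified sets are independent of it
            A_eq.append(row([(A | {S}, 1), (A, -1)])); b_eq.append(1)

A_ub, b_ub = [], []
for A in subsets:
    rest = U - A
    for i in rest:                 # monotonicity: f(A) <= f(A + i)
        A_ub.append(row([(A, 1), (A | {i}, -1)])); b_ub.append(0)
    for i, j in combinations(sorted(rest), 2):  # elemental submodularity
        A_ub.append(row([(A | {i, j}, 1), (A, 1),
                         (A | {i}, -1), (A | {j}, -1)])); b_ub.append(0)
for i in P:                        # v >= f({i}) for every participant
    r = row([(frozenset({i}), 1)]); r[V] = -1.0
    A_ub.append(r); b_ub.append(0)

c = [0.0] * n; c[V] = 1.0          # minimize the maximum share entropy v
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              method="highs")
print(round(res.fun, 6))           # 1.5: the Shannon bound is tight here
```

The same generator scales to all access structures on five participants; adding non-Shannon inequalities means appending further rows to `A_ub`.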

06:17 [Pub][ePrint]

RFID-based tag matching allows a reader Rk to determine whether two tags Ti and Tj store some attributes that jointly fulfill a boolean constraint. The challenge in designing a matching mechanism is tag privacy. While cheap tags are unable to perform any computation, matching has to be achieved without revealing the tags' attributes. In this paper, we present T-MATCH, a protocol for secure and privacy-preserving RFID tag matching. T-MATCH involves a pair of tags Ti and Tj, a reader Rk, and a backend server S. To ensure tag privacy against Rk and S, T-MATCH employs a new technique based on secure two-party computation that prevents Rk and S from disclosing tag attributes. For tag privacy against eavesdroppers, each tag Ti in T-MATCH stores an IND-CPA encryption of its attribute. Such an encryption allows Rk to update the state of Ti by merely re-encrypting Ti's ciphertext. T-MATCH targets cheap tags that cannot perform any computation, but are only required to store 150 bytes.
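The re-encryption step relies on a standard property of IND-CPA schemes such as ElGamal: anyone holding only the public key can re-randomize a ciphertext into a fresh, unlinkable encryption of the same plaintext. A toy sketch over a small group (our illustration; the paper's concrete instantiation may differ):

```python
import random

# Toy ElGamal in the order-11 subgroup of Z_23^* generated by g = 2.
p, q, g = 23, 11, 2

def keygen(rng):
    x = rng.randrange(1, q)          # secret key
    return x, pow(g, x, p)           # (sk, pk = g^x mod p)

def encrypt(pk, m, rng):
    r = rng.randrange(1, q)
    return pow(g, r, p), m * pow(pk, r, p) % p

def rerandomize(pk, ct, rng):
    """Re-encrypt using only the public key: componentwise-multiply
    in a fresh encryption of the identity element."""
    c1, c2 = ct
    s = rng.randrange(1, q)
    return c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(c1, -sk, p) % p  # c2 / c1^x (Python 3.8+ modular inverse)
```

The tag only stores the current ciphertext; the reader performs `rerandomize` on each interaction, so an eavesdropper sees unlinkable ciphertexts while the tag itself computes nothing.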

06:17 [Pub][ePrint]

We investigate how information leakage reduces computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z).

We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log 1/p (note that this means the reduction in quantity and quality is the same, because the quantity of entropy is measured on a logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the \emph{quantity} of entropy was related to its original quality. The use of metric entropy simplifies the analogous result of Reingold et al. (FOCS 2008) for HILL entropy.

Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of $\lambda$ bits, metric entropy gets reduced by a factor $2^\lambda$ in quality and by $\lambda$ in quantity.
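The information-theoretic analogue of this quantity loss is easy to check numerically: for true min-entropy, conditioning on an event E of probability p costs at most log 1/p bits. A toy check of that classical fact (the paper's contribution is extending it to the computational, metric-entropy setting):

```python
import math

def min_entropy(dist):
    """Min-entropy H_inf of a distribution given as {value: prob}."""
    return -math.log2(max(dist.values()))

X = {i: 1 / 16 for i in range(16)}   # uniform on 16 values: H_inf(X) = 4
E = set(range(4))                    # an event of probability p = 1/4
p = sum(X[x] for x in E)
X_given_E = {x: X[x] / p for x in E}

loss = min_entropy(X) - min_entropy(X_given_E)
print(loss, math.log2(1 / p))        # loss = 2.0, bounded by log(1/p) = 2.0
```

Here the bound is met with equality because the event is uniform over its support; in general `loss` can only be smaller.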

06:17 [Pub][ePrint]

We obtain two kinds of new results on the nonexistence of generalized bent functions. The first is based on Feng's results, using Schmidt's field descent method. For the second, by considering a special property of the field $\mathbb{Q}(\zeta_{23^e})$, we obtain new nonexistence results for generalized bent functions of type $[3, 2\cdot 23^e]$.

06:17 [Pub][ePrint]

Functional encryption is an emerging paradigm for public-key encryption that enables fine-grained control of access to encrypted data. In this work, we present new perspectives on security definitions for functional encryption, as well as new lower bounds on what can be achieved. Our main contributions are as follows:

* We show a lower bound for functional encryption that satisfies a weak (non-adaptive) simulation-based security notion, via pseudo-random functions. This is the first lower bound that exploits unbounded collusions in an essential way.

* We put forth and discuss a simulation-based notion of security for functional encryption, with an unbounded simulator (called USIM). We show that this notion interpolates indistinguishability and simulation-based security notions, and has strong correlations to results and barriers in the zero-knowledge and multi-party computation literature.

06:17 [Pub][ePrint]

This paper presents a new security notion, called \emph{perfect keyword privacy (PKP)}, for non-interactive public-key encryption with keyword search (PEKS) \cite{bcop04}. Although the conventional security notion for PEKS guarantees that a searchable ciphertext leaks no information about keywords, it gives no guarantee concerning leakage of a keyword from the trapdoor. PKP is a notion for overcoming this fatal deficiency. Since the trapdoor has verification functionality, the popular concept of "indistinguishability" is inadequate for capturing the notion of keyword privacy from the trapdoor. Hence, our formalization of PKP depends on the idea of formalizing a perfectly one-way hash function \cite{can97,cmr98}. We also present \emph{IND-PKP security} as a useful notion for showing that a given PEKS scheme has PKP. Furthermore, we present PKP+ and IND-PKP+ as enhanced notions of PKP and IND-PKP, respectively. Finally, we present several instances of an IND-PKP or IND-PKP+ secure PEKS scheme, in either the random oracle model or the standard model.