International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-04-20
03:17 [Pub][ePrint]

At Asiacrypt 2010, Knellwolf, Meier and Naya-Plasencia proposed distinguishing attacks on Grain v1 when (i) the key scheduling process is reduced to 97 rounds, using $2^{27}$ chosen IVs, and (ii) the key scheduling process is reduced to 104 rounds, using $2^{35}$ chosen IVs. Using a similar idea, Banik obtained a new distinguisher for 105 rounds.

In this paper, we show that a similar approach works for 106 rounds. We present a new distinguisher on Grain v1 for 106 rounds with success probability 63%.

03:17 [Pub][ePrint]

Recent breakthroughs in cryptography have positioned indistinguishability obfuscation as a "central hub" for almost all known cryptographic tasks, and as an extremely powerful building block for new cryptographic tasks resolving long-standing and foundational open problems. In this paper we prove the first negative results on the power of indistinguishability obfuscation and of the tightly related notion of functional encryption. Our results are as follows:

-- There is no fully black-box construction with a polynomial security loss of a collision-resistant function family from a general-purpose indistinguishability obfuscator.

-- There is no fully black-box construction with a polynomial security loss of a key-agreement protocol with perfect completeness from a general-purpose private-key functional encryption scheme.

-- There is no fully black-box construction with a polynomial security loss of an indistinguishability obfuscator for oracle-aided circuits from a private-key functional encryption scheme for oracle-aided circuits.

Specifically, we prove that any such potential construction must suffer from at least a sub-exponential security loss. Our results are obtained within a subtle framework capturing constructions that may rely on a wide variety of primitives in a non-black-box manner (e.g., obfuscating or generating a functional key for a function that uses the evaluation circuit of a puncturable pseudorandom function), and we only assume that the underlying indistinguishability obfuscator or functional encryption scheme is itself used in a black-box manner.

03:17 [Pub][ePrint]

In this paper we present an identity-set-based broadcast encryption scheme with three working modes over the user identity set simultaneously: positive membership (Select-mode), all members (All-mode), and negative membership (Cut-mode). The core of our scheme is a cryptographic representation of subsets using two aggregation functions: zeros-based aggregation and poles-based aggregation. These two aggregation functions can compress any subset into one element of a bilinear map group, which is then used to decide membership between an element and a subset. Our scheme achieves the optimal O(1) size bound for both the ciphertext (just two elements) and the decryption key (one element), even for an identity set of large size. We prove that our scheme is secure under the General Diffie-Hellman Exponent (GDHE) assumption.
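The zeros-based idea can be illustrated without any pairing machinery: encode a set S as the polynomial with zeros exactly at the members of S, so that evaluating it at an identity decides membership. The sketch below works over a toy prime field (the modulus and all names are illustrative choices, not from the paper); the actual scheme compresses this polynomial into a single bilinear-group element.

```python
# Toy "zeros-based" set aggregation: P_S(x) = prod_{s in S} (x - s) mod P.
# P_S(a) == 0 exactly when a is a member of S (ids distinct and < P).
P = 2**61 - 1  # toy prime modulus, an illustrative choice

def zeros_poly(ids):
    """Coefficients (lowest degree first) of prod (x - i) mod P."""
    coeffs = [1]
    for a in ids:
        nxt = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            nxt[k + 1] = (nxt[k + 1] + c) % P  # contribution of x * c_k x^k
            nxt[k] = (nxt[k] - a * c) % P      # contribution of -a * c_k x^k
        coeffs = nxt
    return coeffs

def is_member(coeffs, a):
    """Horner evaluation: a is in S iff P_S(a) == 0."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * a + c) % P
    return acc == 0
```

In the real scheme the polynomial is evaluated in the exponent, so the whole set collapses to one group element and the pairing performs the membership check.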

03:17 [Pub][ePrint]

This paper presents new speed records for 128-bit secure elliptic-curve Diffie-Hellman key-exchange software on three different popular microcontroller architectures. We consider a 255-bit curve proposed by Bernstein known as Curve25519, which has also been adopted by the IETF. We optimize the X25519 key-exchange protocol proposed by Bernstein in 2006 for AVR ATmega 8-bit microcontrollers, MSP430X 16-bit microcontrollers, and ARM Cortex-M0 32-bit microcontrollers. Our software for the AVR takes only 13,900,397 cycles to compute a Diffie-Hellman shared secret, and is the first to perform this computation in less than a second when clocked at 16 MHz at a security level of 128 bits. Our MSP430X software computes a shared secret in 5,301,792 cycles on MSP430X microcontrollers with a 32-bit hardware multiplier and in 7,933,296 cycles on MSP430X microcontrollers with a 16-bit multiplier; it thus outperforms previous constant-time ECDH software at the 128-bit security level on the MSP430X by factors of more than 1.2 and 1.15, respectively. Our implementation on the Cortex-M0 runs in only 3,589,850 cycles and outperforms previous 128-bit secure ECDH software by a factor of 3.
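For readers unfamiliar with the protocol being optimized, the X25519 function itself is small: an x-coordinate-only Montgomery ladder, standardized in RFC 7748. The sketch below is a plain big-integer Python rendition of that ladder; it is functionally correct but deliberately not constant-time, unlike the microcontroller implementations the paper describes.

```python
# Minimal X25519 sketch following RFC 7748 (NOT constant-time; for clarity only).
p = 2**255 - 19
A24 = 121665  # (486662 - 2) / 4 for Curve25519

def x25519(k: bytes, u: bytes) -> bytes:
    # Decode and clamp the 32-byte little-endian scalar.
    sk = bytearray(k)
    sk[0] &= 248; sk[31] &= 127; sk[31] |= 64
    scalar = int.from_bytes(sk, "little")
    x1 = int.from_bytes(u, "little") & ((1 << 255) - 1)
    # Montgomery ladder on the x-coordinate only.
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in range(254, -1, -1):
        bit = (scalar >> t) & 1
        swap ^= bit
        if swap:
            x2, x3 = x3, x2
            z2, z3 = z3, z2
        swap = bit
        a, b = (x2 + z2) % p, (x2 - z2) % p
        aa, bb = a * a % p, b * b % p
        e = (aa - bb) % p
        c, d = (x3 + z3) % p, (x3 - z3) % p
        da, cb = d * a % p, c * b % p
        x3 = (da + cb) ** 2 % p
        z3 = x1 * (da - cb) ** 2 % p
        x2 = aa * bb % p
        z2 = e * (aa + A24 * e) % p
    if swap:
        x2, x3 = x3, x2
        z2, z3 = z3, z2
    return (x2 * pow(z2, p - 2, p) % p).to_bytes(32, "little")

BASEPOINT = (9).to_bytes(32, "little")
```

A shared secret is then `x25519(alice_priv, x25519(bob_priv, BASEPOINT))`, and both parties arrive at the same value. The cycle counts in the abstract are for exactly this computation, done in constant time in optimized assembly.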

2015-04-19
21:17 [Pub][ePrint]

PAGES is a block cipher family based on the design of Speck, see [1]. However, some intriguing design details of Speck were not used in the design of PAGES. PAGES has a block size of 256 bits and comes in three versions: PAGES-512, PAGES-768 and PAGES-1024, where the number denotes the key length. The number of rounds is 64, 96, and 128, respectively. PAGES uses 128-bit variables, i.e., half of the block size.
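Speck's round function, which PAGES builds on, is a compact ARX (add-rotate-xor) step on a pair of words. The sketch below instantiates it on 128-bit words (the PAGES word size); the rotation constants 8 and 3 are Speck's for large word sizes and are an assumption here, not taken from the PAGES specification.

```python
# Speck-style ARX round on 128-bit words. Rotation amounts (8, 3) are
# Speck's large-word constants, assumed here for illustration.
W = 128
MASK = (1 << W) - 1

def ror(x, r):
    return ((x >> r) | (x << (W - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (W - r))) & MASK

def round_fn(x, y, k, alpha=8, beta=3):
    x = ((ror(x, alpha) + y) & MASK) ^ k  # rotate, add, xor round key
    y = rol(y, beta) ^ x                  # rotate, xor
    return x, y

def round_inv(x, y, k, alpha=8, beta=3):
    y = ror(y ^ x, beta)
    x = rol(((x ^ k) - y) & MASK, alpha)
    return x, y
```

Each (x, y) pair covers 256 bits, matching the PAGES block size; decryption simply applies `round_inv` with the round keys in reverse order.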

21:17 [Pub][ePrint]

OAEP and other similar schemes proven secure in the Random Oracle Model require one or more hash functions with output sizes larger than those of standard hash functions. In this paper, we show that by using the popular Sponge construction in the OAEP framework, we can eliminate the need for such hash functions. We provide a new scheme in the OAEP framework based on the Sponge construction, which we call *Sponge-based asymmetric encryption padding* (SpAEP). SpAEP is based on two functions, Sponge and SpongeWrap, and requires only the standard output sizes proposed and standardized for Sponge functions. Our scheme is CCA2 secure for any trapdoor one-way permutation in the ideal permutation model, for arbitrary-length messages. It utilizes the versatile Sponge function to enhance the capability and efficiency of the OAEP framework. SpAEP with any trapdoor one-way permutation can also be used as a key encapsulation mechanism and as a tag-based key encapsulation mechanism for hybrid encryption. Our scheme uses the permutation model efficiently in the public-key encryption setting in a novel manner.
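The property the abstract leans on is that a sponge acts as an extendable-output function: one standardized primitive produces output of any length, so OAEP-style padding never needs a hash with a nonstandard output size. A minimal sketch of that idea using SHAKE256 from Python's `hashlib` (a standardized sponge XOF; this is an illustration of the principle, not the paper's SpAEP construction):

```python
import hashlib

def xof_mask(seed: bytes, n: int) -> bytes:
    """Arbitrary-length mask from a sponge XOF (SHAKE256): one primitive
    covers every output size an OAEP-style padding might need."""
    return hashlib.shake_256(seed).digest(n)

def pad_mask(message: bytes, seed: bytes) -> bytes:
    """OAEP-style one-time masking of a message of any length."""
    mask = xof_mask(seed, len(message))
    return bytes(m ^ s for m, s in zip(message, mask))
```

Masking is an involution, so applying `pad_mask` twice with the same seed recovers the message; and because SHAKE outputs are prefix-consistent, a single function serves all lengths.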

21:17 [Pub][ePrint]

In authenticated encryption schemes, there are two techniques for handling long ciphertexts while working within the constraints of a low buffer size: releasing unverified plaintext (RUP) or producing intermediate tags (PIT). In this paper, we propose a third way to handle a long ciphertext with a low buffer size: storing and releasing only one (or, more generally, only a few) intermediate state, without releasing or storing any part of an unverified plaintext and without generating any intermediate tags.

We explain this generalized technique using our new construction sp-AELM, a sponge-based authenticated encryption scheme that provides support for limited-memory devices. We also prove its privacy and authenticity in the ideal permutation model, using a code-based game-playing framework. Furthermore, we present two more variants of sp-AELM that serve the same purpose and are more efficient than sp-AELM.

The ongoing CAESAR competition has 9 submissions based on the Sponge construction. We apply our generalized technique of storing a single intermediate state to all of them, to determine their suitability for a crypto module with limited memory. Our findings show that only ASCON and one of the PRIMATEs modes (namely GIBBON) satisfy the limited-memory constraint using this technique, while the remaining schemes (namely Artemia, ICEPOLE, Ketje, Keyak, NORX, π-Cipher, STRIBOB, and the other two PRIMATEs modes, APE & HANUMAN) are not directly suitable for this scenario.

21:17 [Pub][ePrint]

Broadcast or multicast is one of the most fundamental concepts in data communication and distributed cryptography. A central entity wishes to broadcast a secret data stream to a dynamically changing privileged subset of recipients in such a way that non-members of the privileged class cannot learn the secret. This work presents an Adaptively Secure Broadcast Algorithm (ASBA) based on threat analytics and case-based reasoning. It defines the security intelligence of an adaptively secure broadcast comprehensively with a novel concept, and recommends a set of intelligent model-checking moves for the verification of the security intelligence of the broadcasting mechanism. The algorithm is analyzed from the perspectives of security intelligence, communication complexity, computational intelligence, and efficiency of mechanism. The computational intelligence is associated with the complexity of broadcast scheduling, verification of the security intelligence of the broadcasting system, key management strategies, and payment function computation. The cost of communication depends on the number of agents and subgroups in the broadcasting group and on the complexity of the data. The algorithm is applicable to the analysis of intelligent mechanisms in static and dynamic networks, auctions or combinatorial auctions for e-markets, and digital content distribution.

21:17 [Pub][ePrint]

The problem of verifiable data streaming (VDS) considers a client with limited computational and storage capacities that streams an a priori unknown number of elements to an untrusted server. The client may retrieve and update any outsourced element, and other parties may verify each outsourced element's integrity using the client's public key. All previous VDS constructions incur a bandwidth and computational overhead, on both the client and the server side, that is at least logarithmic in the number of transmitted elements.

We propose two novel, fundamentally different approaches to constructing VDS. The first scheme is based on a new cryptographic primitive called a Chameleon Vector Commitment (CVC). A CVC is a trapdoor commitment scheme to a vector of messages in which both commitments and openings have constant size. Using CVCs, we construct a tree-based VDS protocol with constant computational and bandwidth overhead on the client side. The second scheme shifts the workload to the server side by combining signature schemes with cryptographic accumulators. Here, all overheads are constant, except for queries, where the computational cost of the server is linear in the total number of updates.
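To see where the logarithmic overhead of prior constructions comes from, consider the standard hash-tree baseline: authenticating one element of an outsourced vector with a Merkle tree requires an authentication path of one sibling hash per level, i.e. log(n) hashes per query. A toy version (illustrative code, not the paper's CVC-based protocol, which removes this client-side cost):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    """Full binary Merkle tree over 2^k leaves; returns levels, leaves first."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Authentication path: one sibling hash per level -> log(n) overhead."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root
```

The path length (and thus bandwidth and verification work) grows as log(n); a CVC-based tree keeps the client's overhead constant instead.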

21:17 [Pub][ePrint]

A family of hash functions is called "correlation intractable" if it is hard to find, given a random function in the family, an input-output pair that satisfies any "sparse" relation, namely any relation that is hard to satisfy for truly random functions. Correlation intractability captures a strong and natural Random Oracle-like property. However, it is widely considered to be unobtainable. Indeed, it was shown that correlation intractable functions do not exist for some length parameters [Canetti, Goldreich and Halevi, J. ACM 2004]. Furthermore, no candidate constructions have been proposed in the literature for any setting of the parameters.

We construct a correlation intractable function ensemble that withstands all relations with a priori bounded polynomial complexity. We assume the existence of sub-exponentially secure indistinguishability obfuscators, puncturable pseudorandom functions, and input-hiding obfuscators for evasive circuits. The existence of the latter is implied by Virtual-Grey-Box obfuscation for evasive circuits [Bitansky et al., CRYPTO 2014].

21:17 [Pub][ePrint]

Security models for two-party authenticated key exchange (AKE) protocols have developed over time to capture the security of AKE protocols even when the adversary learns certain secret values. Increased granularity of security can be modelled by considering partial leakage of secrets, in the manner of models for leakage-resilient cryptography designed to capture side-channel attacks. In this work, we use the strongest known partial-leakage-based security model for key exchange protocols, namely the continuous after-the-fact leakage eCK (CAFL-eCK) model. We resolve an open problem by constructing the first concrete two-pass leakage-resilient key exchange protocol that is secure in the CAFL-eCK model.