International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2014-05-01
18:17 [Pub][ePrint]

We investigate structure-preserving signatures in asymmetric bilinear groups with an efficiently computable homomorphism from one source group to the other, i.e., the Type II setting. It has been shown that in the Type I and Type III settings (with maximal symmetry and maximal asymmetry respectively), structure-preserving signatures need at least 2 verification equations and 3 group elements. It is therefore natural to conjecture that this would also be required in the intermediate Type II setting, but surprisingly this turns out not to be the case. We construct structure-preserving signatures in the Type II setting that only require a single verification equation and consist of only 2 group elements. This shows that the Type II setting with partial asymmetry is different from the other two settings in a way that permits the construction of cryptographic schemes with unique properties.

We also investigate lower bounds on the size of the public verification key in the Type II setting. Previous work in structure-preserving signatures has explored lower bounds on the number of verification equations and the number of group elements in a signature but the size of the verification key has not been investigated before. We show that in the Type II setting it is necessary to have at least 2 group elements in the public verification key in a signature scheme with a single verification equation.

Our constructions match the lower bounds so they are optimal with respect to verification complexity, signature sizes and verification key sizes. In fact, in terms of verification complexity, they are the most efficient structure-preserving signature schemes to date. Depending on the context in which a scheme is deployed, it is sometimes desirable to have strong existential unforgeability, and in other cases full randomizability. We give two structure-preserving signature schemes with a single verification equation where both the signatures and the public verification keys consist of two group elements each. One signature scheme is strongly existentially unforgeable, the other is fully randomizable. Having such simple and elegant structure-preserving signatures may make the Type II setting the easiest to use when designing new structure-preserving cryptographic schemes, and lead to schemes with the greatest conceptual simplicity.

03:17 [Pub][ePrint]

Identity-based non-interactive key exchange (IB-NIKE) is a powerful but somewhat overlooked primitive in identity-based cryptography. While identity-based encryption and signatures have been extensively investigated over the past three decades, IB-NIKE has remained largely unstudied. Currently, there are only a few IB-NIKE schemes in the literature. Among them, the Sakai-Ohgishi-Kasahara (SOK) scheme is the first efficient and secure IB-NIKE scheme, and it has had great influence on follow-up works. However, the SOK scheme requires its identity mapping function to be modeled as a random oracle to prove security. Moreover, the existing security proof heavily relies on the ability to program the random oracle. It is unknown whether such reliance is inherent.

In this work, we intensively revisit the SOK IB-NIKE scheme, and present a series of possible and impossible results in the random oracle model and the standard model. In the random oracle model, we first improve the previous security analysis of the SOK IB-NIKE scheme by giving a tighter reduction. We then use a meta-reduction technique to show that the SOK scheme is unlikely to be proven secure based on the computational bilinear Diffie-Hellman (CBDH) assumption without programming the random oracle. In the standard model, we show how to instantiate the random oracle in the SOK scheme with a concrete hash function built from admissible hash functions (AHFs) and indistinguishability obfuscation.

The resulting scheme is fully adaptively secure based on the decisional bilinear Diffie-Hellman inversion (DBDHI) assumption. To the best of our knowledge, this is the first fully adaptively secure IB-NIKE scheme in the standard model that does not explicitly require multilinear maps. Previous schemes in the standard model either achieve merely selective security or use multilinear maps as a key ingredient. Of particular interest, we generalize the definition of AHFs, and propose a generic construction which enables AHFs with previously unachieved parameters.
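The non-interactivity at the heart of SOK can be sketched in a toy model where group elements are represented by their discrete logs, so the pairing $e(g^a, g^b) = g_T^{ab}$ becomes multiplication mod $q$. All parameters below are made up for illustration and are far from secure:

```python
import hashlib

# Toy model of the SOK IB-NIKE shared key (illustrative only): group elements
# are represented by their discrete logs, so the pairing collapses to
# multiplication mod q.  Do not use these parameters for anything real.
q = 2**61 - 1            # toy prime "group order"

def H(identity):
    # Random-oracle-style hash of an identity to a group element (its log).
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % q

s = 271828182845              # master secret of the key generation center
d_alice = s * H("alice") % q  # alice's private key: H(alice)^s, in log form
d_bob = s * H("bob") % q      # bob's private key
# Non-interactive shared key: e(d_A, H(B)) = e(H(A), H(B))^s on both sides,
# so alice and bob derive the same key without exchanging any messages.
k_alice = d_alice * H("bob") % q
k_bob = d_bob * H("alice") % q
```

Both parties compute $e(H(\text{alice}), H(\text{bob}))^s$, which is exactly the identity mapping function `H` that the security proof models as a (programmable) random oracle.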

03:17 [Pub][ePrint]

Boneh-Boyen signatures are widely used in many advanced cryptosystems. They have an "inversion in the exponent" structure, and their unforgeability against $q$ chosen-message attacks is proven under the non-static $q$-Strong Diffie-Hellman assumption. It has been an open problem whether the exponent-inversion signature, and its various applications, can be proven secure based on a weaker static assumption.

We propose a dual-form Boneh-Boyen signature and demonstrate how to prove the security of the exponent-inversion signature structure in the standard model under static assumptions. We apply our proof technique to a number of related cryptosystems employing a similar structure, including anonymous credentials, identity-based encryption (IBE) and accountable authority IBE. Our results give the first exponent-inversion IBE in the standard model under a static assumption. Our anonymous credentials and accountable authority IBE are also better than existing schemes in terms of both security and efficiency.
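The "inversion in the exponent" structure can be sketched over a toy prime-order subgroup of $\mathbb{Z}_p^*$. The parameters below are illustrative only, and the final check stands in for the real pairing verification $e(\sigma, g^x g^m) = e(g, g)$:

```python
# Toy sketch of the Boneh-Boyen exponent-inversion structure (not secure
# parameters; the real scheme verifies with a pairing, not by exponentiating
# the signature with the secret key).
p, q = 2579, 1289        # toy safe-prime pair: q prime, p = 2*q + 1
g = 4                    # 2^2 mod p, an element of order q
x = 1000                 # secret key in Z_q
m = 42                   # message (hashed into Z_q in the real scheme)

# Sign: sigma = g^{1/(x + m) mod q}; the inverse exists since q is prime
e = (x + m) % q
sigma = pow(g, pow(e, q - 2, q), p)  # e^{q-2} = e^{-1} mod q by Fermat
```

Raising `sigma` to `x + m` recovers `g`, which is exactly the relation the pairing equation checks without revealing `x`.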

00:17 [Pub][ePrint]

We revisit the question of constructing secure general-purpose indistinguishability obfuscation (iO), with a security reduction based on explicit computational assumptions. Prior to our work, such reductions were only known to exist based on instance-dependent assumptions and/or ad-hoc assumptions: In the original constructive work of Garg et al. (FOCS 2013), the underlying explicit computational assumption encapsulated an exponential family of assumptions for each pair of circuits to be obfuscated. In the more recent work of Pass et al. (ePrint 2013), the underlying assumption is a meta-assumption that also encapsulates an exponential family of assumptions, and this meta-assumption is invoked in a manner that captures the specific pair of circuits to be obfuscated. The assumptions underlying both these works substantially capture (either explicitly or implicitly) the actual structure of the obfuscation mechanism itself.

In our work, we provide the first construction of general-purpose indistinguishability obfuscation proven secure via a reduction to an instance-independent computational assumption over multilinear maps, namely, the Multilinear Subgroup Elimination Assumption. Our assumption does not depend on the circuits to be obfuscated (except for their size), and does not correspond to the underlying structure of our obfuscator. The technical heart of our paper is our reduction, which gives a new way to argue about the security of indistinguishability obfuscation.

2014-04-30
21:17 [Pub][ePrint]

In this work, we present practical semi-free-start collisions for SHA-512 on up to 38 (out of 80) steps with complexity $2^{40.5}$. The best previously published result was on 24 steps. The attack is based on extending local collisions as proposed by Mendel et al. in their Eurocrypt 2013 attack on SHA-256. However, for SHA-512, the search space is too large for direct application of these techniques. We achieve our result by improving the branching heuristic of the guess-and-determine approach to find differential characteristics and conforming message pairs. Experiments show that for smaller problems like 27 steps of SHA-512, the heuristic can also speed up the collision search by a factor of $2^{20}$.
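The role of a branching heuristic in guess-and-determine search can be illustrated with a toy backtracking solver over XOR (parity) constraints, standing in for the characteristic search above (this is a hypothetical miniature, not the authors' tool). The heuristic guesses the variable occurring in the most constraints first, so contradictions, and therefore pruning, happen early:

```python
from collections import Counter

# Toy guess-and-determine search: guess bits one at a time, propagate the
# constraints, and backtrack on contradiction.  The branching heuristic is
# the variable ordering chosen before the search starts.

def solve(constraints, n):
    # constraints: list of (set_of_variable_indices, parity_bit)
    counts = Counter(v for vars_, _ in constraints for v in vars_)
    order = sorted(range(n), key=lambda v: -counts[v])  # heuristic ordering
    assign = {}

    def consistent():
        # Check every constraint whose variables are all assigned.
        return all(sum(assign[v] for v in vars_) % 2 == parity
                   for vars_, parity in constraints
                   if all(v in assign for v in vars_))

    def rec(i):
        if i == n:
            return dict(assign)
        v = order[i]
        for bit in (0, 1):        # guess ...
            assign[v] = bit
            if consistent():      # ... and prune inconsistent branches
                found = rec(i + 1)
                if found is not None:
                    return found
        del assign[v]
        return None

    return rec(0)
```

A better ordering shrinks the subtree explored before each contradiction, which is the same effect, at a vastly larger scale, that yields the $2^{20}$ speed-up reported above.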

21:17 [Pub][ePrint]

In 2014, Xu et al. proposed a two-factor mutual authentication and key agreement scheme for the telecare medicine information system (TMIS) based on elliptic curve cryptography (ECC). However, it has been shown that Xu et al.'s scheme is not suitable for practical use as it has many problems. As a remedy, an improved scheme with better security and functionality attributes is proposed.

21:17 [Pub][ePrint]

Recently, several efforts to implement and use an unconditionally secure multi-party computation (MPC) scheme have been put into practice. These implementations are {\\em passively} secure MPC schemes, in which the adversary must follow the protocol. Although passively secure MPC schemes are efficient, passive security places a strong restriction on the behavior of the adversary. We investigate how much security we can achieve while maintaining efficiency comparable to the passive case, and propose a construction of an {\\em actively} secure MPC scheme from passively secure ones. Our construction is secure in the $t < n/2$ setting, the same as the passively secure one. Our construction supports not only the theoretical minimal set of operations for computing arbitrary circuits, that is, addition and multiplication, but also high-level operations such as shuffling and sorting. We do not use a broadcast channel in the construction. Therefore, privacy and correctness are achieved but {\\em robustness} is absent; if the adversary cheats, the protocol may not finish, but anyone can detect the cheat (and may stop the protocol) without leaking secret information. In exchange, our construction requires $O((c_B n + n^2)\\kappa)$ communication, which is comparable to that of one of the best known passively secure MPC schemes, $O((c_M n + n^2)\\log n)$, where $\\kappa$ denotes the security parameter, $c_B$ denotes the sum of multiplication gates and high-level operations, and $c_M$ denotes the number of multiplication gates. Furthermore, we implemented our construction and confirmed that its efficiency is comparable to that of the current fastest passively secure implementation.
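As background (a generic sketch, not the paper's construction), passively secure MPC in the $t < n/2$ setting is typically built on Shamir secret sharing, where addition is a purely local, share-wise operation:

```python
import secrets

P = 2**61 - 1  # toy prime field (far too small for real deployments)

def share(secret, t, n):
    # Shamir sharing: sample a random degree-t polynomial f with f(0) = secret
    # and hand party i the point (i, f(i)).  Any t shares reveal nothing.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 from any t+1 points.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Adding two shared secrets needs no interaction: each party adds its two shares locally, and the sum of the polynomials still interpolates to the sum of the secrets. Multiplication, by contrast, requires communication, which is why $c_M$ (and, in the construction above, $c_B$) dominates the communication cost.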

21:17 [Pub][ePrint]

In this article, we describe a novel collision attack for up to 5 rounds of the Grøstl hash function. This significantly improves upon the best previously published results on 3 rounds. By using a new type of differential trail spanning over more than one message block we are able to construct collisions for Grøstl on 4 and 5 rounds with complexity of $2^{67}$ and $2^{120}$, respectively. Both attacks need $2^{64}$ memory. Due to the generic nature of our attack we can even construct meaningful collisions in the chosen-prefix setting with the same attack complexity.

21:17 [Pub][ePrint]

We put forth the notion of publicly evaluable pseudorandom functions (PEPRFs), which can be viewed as a non-trivial extension of standard pseudorandom functions (PRFs). Briefly, PEPRFs are defined over a domain $X$ for which there exists an average-case hard NP language $L$, and each secret key $sk$ is associated with a public key $pk$. For any $x \\in L$, in addition to evaluating $\\mathsf{F}_{sk}(x)$ using $sk$ as in standard PRFs, one is also able to evaluate $\\mathsf{F}_{sk}(x)$ given $pk$, $x$ and a witness $w$ for $x \\in L$. We consider two security notions for PEPRFs. The basic one is weak pseudorandomness, which stipulates that a PEPRF cannot be distinguished from a truly random function at randomly chosen inputs. The strengthened one is adaptive weak pseudorandomness.
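A toy Diffie-Hellman-flavored instance (an illustration of the syntax, not a construction taken from the paper) shows how the two evaluation paths can agree on $L$: take $L = \{g^w \bmod p\}$, $sk = s$, $pk = g^s$, and $\mathsf{F}_{sk}(x) = x^s$; with a witness $w$ for $x = g^w$, the public evaluation $pk^w = g^{sw} = x^s$ gives the same value.

```python
# Toy DH-style PEPRF sketch (illustrative parameters, not secure ones).
p = 2**127 - 1           # toy prime modulus
g = 3
s = 123456789            # secret key sk
pk = pow(g, s, p)        # public key

w = 987654321            # witness for membership of x in L = {g^w mod p}
x = pow(g, w, p)         # instance x = g^w

secret_eval = pow(x, s, p)    # evaluation using sk, as in a standard PRF
public_eval = pow(pk, w, p)   # public evaluation using pk, x and witness w
```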

We conduct a formal study of PEPRFs, focusing on applications, constructions, and extensions.

We show how to construct chosen-plaintext secure (CPA) and chosen-ciphertext secure (CCA) public-key encryption (PKE) from (adaptive) PEPRFs. The construction is simple, black-box, and admits a direct proof of security. We provide evidence that (adaptive) PEPRFs exist by showing the constructions from both hash proof system and extractable hash proof system.

We introduce the notion of publicly samplable PRFs (PSPRFs), which is a relaxation of PEPRFs, but nonetheless implies PKE. We show that (adaptive) PSPRFs are implied by (adaptive) trapdoor relations, which are in turn implied by (adaptive) trapdoor functions. This helps us to unify and clarify many PKE schemes from different paradigms and general assumptions under the notion of PSPRFs. We also view adaptive PSPRFs as a candidate for the weakest general assumption implying CCA-secure PKE.

We explore similar extension on recently emerging predicate PRFs, putting forth the notion of publicly evaluable predicate PRFs, which, as an immediate application, imply predicate encryption.

We propose a variant of PEPRFs, which we call publicly evaluable and verifiable functions (PEVFs). Compared to PEPRFs, PEVFs have an additional promising property named public verifiability, at the cost that the best possible security inherently degrades to being hard to compute on average. We justify the applicability of PEVFs by presenting a simple construction of "hash-and-sign" signatures, both in the random oracle model and the standard model.

21:17 [Pub][ePrint]

A sound design-time evaluation of the security of a digital device is a goal which has attracted a great amount of research effort lately. Common security metrics for the attack consider either the theoretical leakage of the device, or assume as a security metric the number of measurements needed in order to always be able to recover the secret key. In this work we provide a combined security metric taking into account the computational effort needed to mount the attack, in combination with the quantity of measurements to be performed, and provide a practical lower bound for the security margin which can be employed by a secure hardware designer. This paper represents a first exploration of a design-time security metric incorporating the computational effort required to mount a power-based side channel attack in the security level assessment of the device. Our metric takes into account the possible presence of masking and hiding schemes, and we assume the best measurement conditions for the attacker, thus leading to a conservative estimate of the security of the device. We provide a practical validation of our security metric through an analysis of transistor-level accurate power simulations of a 128-bit AES core implemented on a 65 nm library.

21:17 [Pub][ePrint]

This paper proves a lower bound on the trade-off between server storage size and the locality of memory accesses in searchable symmetric encryption (SSE). Namely, when encrypting an index of $N$ identifier/keyword pairs, the encrypted index must have size $\\omega(N)$ \\emph{or} the scheme must perform searching with $\\omega(1)$ non-contiguous reads to memory \\emph{or} the scheme must read many more bits than is necessary to compute the results. Recent implementations have shown that non-locality of server memory accesses creates a throughput bottleneck on very large databases. Our lower bound shows that this is due to the security notion and not a defect of the constructions. An upper bound is also given in the form of a new SSE construction with an $O(N\\log N)$ size encrypted index that performs $O(\\log N)$ reads during a search.