International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2014-05-30
15:17 [Pub][ePrint]

Motivated by theoretical and practical interest, the challenging task of designing cryptographic protocols with only black-box access to primitives has generated various breakthroughs in the last decade. Despite such positive results, and even though black-box constructions are now known for secure two-party and multi-party computation, even in constant rounds, several constructions in cryptography still critically require non-black-box use of primitives in order to securely realize some fundamental tasks. As such, the study of the gap between black-box and non-black-box constructions still includes major open questions.

In this work we make progress towards filling the above gap. We consider the case of black-box constructions for computations requiring that even the size of a player's input remain hidden. We show how to commit to a string of arbitrary size and to prove statements over the bits of the string. Both the commitment and the proof are succinct, hide the input size and use standard primitives in a black-box way. We achieve this result by giving a black-box construction of an extendable Merkle tree that relies on a novel use of the "MPC in the head" paradigm of Ishai et al. [STOC 2007].
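The paper's extendable variant is not detailed in this summary, but as background, a plain Merkle commitment already gives succinct openings over the bits of a committed string. The sketch below (illustrative names only, not the paper's construction) commits to a list of blocks and opens a single block with a logarithmic-size authentication path:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_commit(leaves):
    """Return the root (the commitment) and all tree levels
    for a power-of-two list of leaf blocks."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels[-1][0], levels

def open_leaf(levels, index):
    """Authentication path: the sibling hash at every level."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_leaf(root, leaf, index, path):
    """Recompute the root from the leaf; the index bits decide
    whether the running node is the left or right child."""
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root
```

The opening for one block is only one hash per tree level, which is what makes such commitments succinct: proving a statement about one block does not require revealing the rest of the string.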

We show the power of our new techniques by giving the first black-box constant-round public-coin zero knowledge argument for NP.

To achieve this result we use the non-black-box simulation technique introduced by Barak [FOCS 2001], the PCP of Proximity introduced by Ben-Sasson et al. [STOC 2004], together with a black-box public-coin witness indistinguishable universal argument that we construct along the way.

Additionally we show the first black-box construction of a generalization of zero-knowledge sets introduced by Micali et al. [FOCS 2003]. The generalization that we propose is a strengthening that requires both the size of the set and the size of the elements of the set to remain private.

15:17 [Pub][ePrint]

Big data and its applications have attracted more and more research interest in recent years. As the new generation of distributed computing platforms, cloud computing is believed to be the most promising platform. With data no longer under users' direct control, data security in cloud computing is becoming one of the main obstacles to the proliferation of cloud services. In order to improve service reliability and availability, storing multiple replicas along with original datasets is a common strategy for cloud service providers. Public data auditing schemes allow users to verify their outsourced data storage without having to retrieve the whole dataset. However, existing data auditing techniques suffer from efficiency and security problems. First, for dynamic datasets with multiple replicas, the communication overhead for update verification is very large, because verification for each update requires O(log n) communication complexity and an update of all replicas. Second, to the best of our knowledge, no existing integrity verification scheme can provide public auditing and authentication of block indices at the same time. Without authentication of block indices, the server can build a valid proof based on data blocks other than the block the client requested to verify. In order to address these problems, in this paper we present a novel public auditing scheme named MuR-DPA. The new scheme incorporates a novel authenticated data structure based on the Merkle hash tree, which we name MR-MHT. To support fully dynamic data updates, authentication of block indices and efficient verification of updates for multiple replicas at the same time, the level values of nodes in MR-MHT are generated in top-down order, and all replica blocks for each data block are organized into the same replica sub-tree.
Compared to existing integrity verification and public auditing schemes, theoretical analysis and experimental results show that the MuR-DPA scheme not only incurs much less communication overhead for both updates and verification of datasets with multiple replicas, but also provides enhanced security against dishonest cloud service providers.
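The MR-MHT details are beyond this summary, but the index-authentication issue the abstract raises can be shown with a toy sketch (all names hypothetical, not MuR-DPA's scheme): if the verifier accepts the prover's own left/right ordering for the authentication path instead of deriving it from the challenged index, then a proof for *any* stored block verifies, regardless of which block was requested.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Server stores 4 blocks in an ordinary Merkle tree.
blocks = [b"block0", b"block1", b"block2", b"block3"]
hs = [H(b) for b in blocks]
l1 = [H(hs[0] + hs[1]), H(hs[2] + hs[3])]
root = H(l1[0] + l1[1])

def verify_unauthenticated(root, leaf, path):
    """Flawed verifier: it trusts the prover's left/right flags,
    so the proof is not bound to any particular block index."""
    node = H(leaf)
    for sibling, node_is_left in path:
        node = H(node + sibling) if node_is_left else H(sibling + node)
    return node == root

# The client challenges index 2, but a dishonest server answers with
# block 0 and block 0's path -- and the flawed check still passes.
proof_for_block0 = [(hs[1], True), (l1[1], True)]
assert verify_unauthenticated(root, blocks[0], proof_for_block0)
```

Deriving the left/right decisions from the challenged index itself (or authenticating index information inside the node values, as the abstract says MR-MHT does) closes this gap.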

15:17 [Pub][ePrint]

We revisit the "randomized iterate" technique that was originally used by Goldreich, Krawczyk, and Luby (SICOMP 1993) and refined by Haitner, Harnik and Reingold (CRYPTO 2006) in constructing pseudorandom generators (PRGs) from regular one-way functions (OWFs). We abstract out a technical lemma with connections to several recent works on cryptography with imperfect randomness, which provides an arguably simpler and more modular proof for the Haitner-Harnik-Reingold PRGs from regular OWFs.
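As background (not code from the paper), the shape of the randomized iterate is to interleave applications of the one-way function with freshly sampled pairwise-independent hash functions: $y_1 = f(x)$ and $y_{i+1} = f(h_i(y_i))$. The sketch below uses arithmetic modulo a prime as a stand-in domain and an affine family $h(x) = ax + b \bmod P$, which is pairwise independent; `f` is only a typed placeholder, not a real one-way function:

```python
import random

P = 2**13 - 1  # a Mersenne prime; Z_P stands in for {0,1}^n

def f(x):
    # Placeholder "one-way" function (squaring mod P). NOT one-way;
    # it only fixes the domain/range for the iteration below.
    return (x * x) % P

def pairwise_hash(a, b):
    # h(x) = a*x + b mod P is a pairwise-independent family.
    return lambda x: (a * x + b) % P

def randomized_iterate(x, hs):
    """k-th randomized iterate: f(h_{k-1}(...h_1(f(x))...))."""
    y = f(x)
    for h in hs:
        y = f(h(y))
    return y

rng = random.Random(7)
hs = [pairwise_hash(rng.randrange(1, P), rng.randrange(P)) for _ in range(5)]
y = randomized_iterate(12345, hs)
```

The hash descriptions `hs` become part of the PRG seed, which is why constructions built this way pay an $O(n)$ seed cost per iteration.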

We extend the approach to a more general construction of PRGs with seed length $O(n{\log}n)$ from a broader class of OWFs. More specifically, consider an arbitrary one-way function $f$ whose range is divided into sets $\Y_1$, $\Y_2$, $\ldots$, $\Y_n$ where each $\Y_i\eqdef\{y:2^{i-1}\le|f^{-1}(y)|<2^{i}\}$.

15:17 [Pub][ePrint]

A universal one-way hash function (UOWHF) is a compressing function for which finding a second preimage is infeasible. The seminal work of Rompel (STOC 1990) showing that one-way functions (OWFs) imply UOWHFs is one of the most important founding results of modern cryptography. The current best known UOWHF construction from any one-way function (on $n$-bit input) by Haitner et al. (Eurocrypt 2010) requires output and key length $\tilO(n^7)$, which is far from practical. On the other hand, specially structured OWFs typically give rise to much more efficient (and almost practical) UOWHFs. Naor and Yung (STOC 1989) gave an optimal construction of UOWHFs with key and output lengths both linear in $n$ by making a single call to any one-way permutation. De Santis and Yung (Eurocrypt 1990), Barhum and Maurer (Latincrypt 2012), and Ames, Gennaro, and Venkitasubramaniam (Asiacrypt 2012) further extended the work to more general settings, namely, 1-to-1 and regular one-way functions. However, the best known constructions still require key length $O(n\cdot\log{n})$ even for 1-to-1 one-way functions, and need to make $O(\omega(1){\cdot}\log{n})$ calls to any known regular one-way function, or even $\tilO(n)$ adaptive calls if one wants linear output length at the same time.

In this paper, we first introduce a technical lemma about universal hashing with a nice symmetry to the leftover hash lemma, which might be of independent interest. That is, if one applies a universal hash function $h:\bit{n}\rightarrow\bit{a+d}$ to any random variable $X$ of min-entropy $a$, then $h$ will be 1-to-1 on $X$ except for a $2^{-d}$ fraction.
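The lemma can be sanity-checked numerically. The sketch below (illustrative, not from the paper) draws a random GF(2)-matrix hash, which is a universal family, applies it to a fixed set of size $2^a$ (so the uniform distribution on it has min-entropy exactly $a$), and measures the fraction of points that collide; the lemma predicts roughly a $2^{-d}$ fraction.

```python
import random

n, a, d = 16, 8, 6   # input bits, min-entropy, slack
out = a + d

rng = random.Random(1)

# Universal family: h_M(x) = M.x over GF(2), with M a random (out x n)
# bit matrix, stored as one n-bit row mask per output bit.
rows = [rng.getrandbits(n) for _ in range(out)]

def h(x):
    return sum((bin(row & x).count("1") % 2) << i for i, row in enumerate(rows))

# X: uniform over a fixed set S of size 2^a (min-entropy exactly a bits).
S = rng.sample(range(2**n), 2**a)

images = {}
for x in S:
    images.setdefault(h(x), []).append(x)

colliding = sum(len(v) for v in images.values() if len(v) > 1)
fraction = colliding / len(S)
# Expected colliding fraction is about 2^-d = 2^-6, i.e. under 2%,
# so h is 1-to-1 on nearly all of S.
```

Increasing the slack $d$ shrinks the colliding fraction geometrically, which is the "injective" mirror image of the leftover hash lemma's "uniform" guarantee.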
We also generalize the construction of Naor and Yung (that was optimal only for one-way permutations) to 1-to-1 and almost-regular one-way functions, and significantly extend their analysis. The above yields the following results.

\begin{itemize}
\item For any 1-to-1 one-way function, we give an optimal construction of UOWHFs with key and output length $\Theta(n)$ by making a single call to the underlying OWF.
\item For any known-(almost-)regular one-way function with known hardness, we give another optimal construction of UOWHFs with key and output length $\Theta(n)$ and a single call to the one-way function.
\item For any known-(almost-)regular one-way function, we give a construction of UOWHFs with key and output length $O(\omega(1){\cdot}n)$ by making $\omega(1)$ non-adaptive calls to the one-way function.
\end{itemize}

The first two constructions enjoy optimal parameters simultaneously, and the third one is nearly optimal up to any (efficiently computable) super-constant factor $\omega(1)$, e.g., $\log\log\log{n}$ or even less. Furthermore, the constructions enjoy optimal shrinkage, matching the upper bound of Gennaro et al. (SICOMP 2005).

15:17 [Pub][ePrint]

Traditional cryptographic hash functions allow one to easily check whether the original plaintexts are equal, given a pair of hash values. Probabilistic hash functions extend this concept: given a probabilistic hash of a value and the value itself, one can efficiently check whether the hash corresponds to the given value. However, given distinct probabilistic hashes of the same value, it is not possible to check whether they correspond to the same value. In this work we introduce a new cryptographic primitive called \emph{relational hash}, using which, given a pair of (relational) hash values, one can determine whether the original plaintexts were related or not.
We formalize various natural security notions for the relational hash primitive: one-wayness, unforgeability and oracle simulatability. We develop a relational hash scheme for discovering linear relations among bit-vectors (elements of $\FF_2^n$) and $\FF_p$-vectors. Using these linear relational hash schemes, we develop relational hashes for detecting proximity in terms of Hamming distance. These proximity relational hash schemes can be adapted to a privacy-preserving biometric authentication scheme. We also introduce the notion of \emph{relational encryption}, which is a regular semantically secure public-key encryption for any adversary that only has access to the public key. However, a semi-trusted entity can be given a relational key with which it can discover relations among ciphertexts, but still cannot decrypt and recover the plaintexts.

15:17 [Pub][ePrint]

Proofs of storage (POR or PDP) are a cryptographic tool which enables a data owner or a third-party auditor to audit the integrity of data stored remotely on a cloud storage server, without keeping a local copy of the data or downloading the data during auditing. We observe that all existing publicly verifiable POS schemes suffer from a serious drawback: it is extremely slow to compute authentication tags for all data blocks, due to many expensive group exponentiation operations. Surprisingly, this is even much slower than typical network uploading speeds, and it becomes the bottleneck of the setup phase of the POS scheme. We propose a new variant formulation called "Delegatable Proofs of Storage". In this new relaxed formulation, we are able to construct POS schemes which on one side are as efficient as private-key POS schemes, and on the other side can support a third-party auditor and can switch auditors at any time, close to the functionality of publicly verifiable POS schemes.
Compared to traditional publicly verifiable POS schemes, we speed up the tag generation process by at least several hundred times, without sacrificing efficiency in any other aspect. Like many existing schemes, we can also speed up our tag generation process by $N$ times using $N$ CPU cores in parallel. We prove that our scheme is sound under the Bilinear Strong Diffie-Hellman Assumption, and that it is privacy preserving against the auditor under the Discrete Log Assumption. Both proofs are given in the standard model.

15:17 [Pub][ePrint]

Several recent short NIZK arguments are constructed in a modular way from a small number of basic arguments like the product argument or the shift argument. The main technical novelty of the current work is a significantly more efficient version of the product argument. Based on this, we propose an adaptive NIZK range argument with almost optimal complexity: constant communication (in group elements), constant verifier's computational complexity (in cryptographic operations), and $\Theta(n \log n)$ [resp., $\Theta(n)$] prover's computational complexity (in non-cryptographic [resp., cryptographic] operations). The latter can be compared to $n \log^{\omega(1)} n$ in the most efficient \emph{published} short adaptive non-interactive range argument, or $\Theta(n \log^2 n)$ [resp., $\Theta(n \log n)$] that is achievable when following the QAP-based framework from Eurocrypt 2013. Here, $n$ is the logarithm of the range length. The new product argument can be used to construct efficient adaptive NIZK arguments for many other languages, including several that are $\mathsf{NP}$-complete, like $\textsc{SubsetSum}$. Importantly, for all such languages, the new adaptive arguments achieve better prover's computation than the QAP-based framework.

15:17 [Pub][ePrint]

We show how the cofactorization step, a compute-intensive part of the relation collection phase of the number field sieve (NFS), can be farmed out to a graphics processing unit.
Our implementation on a GTX 580 GPU, which is integrated with a state-of-the-art NFS implementation, can serve as a cryptanalytic co-processor for several Intel i7-3770K quad-core CPUs simultaneously. This allows those processors to focus on the memory-intensive sieving and results in more useful NFS relations found in less time.

06:23 [Job][New]

The Department of Computing at the University of Surrey is currently seeking to make an appointment at Lecturer (equivalent to Assistant Professor) or Senior Lecturer (equivalent to Associate Professor) level in Cyber Security, to support the Department's continued growth by complementing our existing research strengths and contributing to the research leadership within the Department, to play a leading role in the recently established interdisciplinary Surrey Centre for Cyber Security (SCCS), and to contribute to the new MSc Information Security programme. Applications are particularly welcome in the areas of Software Security, Web and Network Security, System Security, Applied Cryptography and Protocols, Privacy and Data Protection, Multimedia Security, Digital Forensics, and Human-Centred Security. The Department is research-led, with around 70 RAs and PhD students, and is attracting growing research support from funding bodies such as the UK Research Councils, the UK Technology Strategy Board (TSB), the EU-IST, and also private foundations, e.g. The Leverhulme Trust. Major IT, telecommunication, defence and security organisations are sponsoring research in the Department. Applicants at the Lecturer level should have a relevant PhD and a developing publication track record, with demonstrable high potential in high-quality research and teaching. Applicants at the Senior Lecturer level will have an international research profile, a significant track record of high-quality publications in leading journals and conference proceedings, and experience in a leadership or development role in high-quality teaching.
A record of attracting research funding would be an advantage. The closing date for applications is 30 June 2014. The post is to start in September 2014 or as soon as possible thereafter.

2014-05-28

18:17 [Pub][ePrint]

The GOST hash function family has served as the new Russian national hash standard (GOST R 34.11-2012) since January 1, 2013. It has two members, i.e., GOST-256 and GOST-512, which correspond to two different output lengths. Most previous analyses of GOST emphasize the compression function rather than the hash function. In this paper, we focus on the security properties of GOST in the hash function setting. First we give two improved preimage attacks on 6-round GOST-512 compared with the previous preimage attack, i.e., a time-reduced attack with the same memory requirements and a memoryless attack with almost identical time. Then we improve the best collision attack on reduced GOST-256 (resp. GOST-512) from 5 rounds to 6.5 (resp. 7.5) rounds. Finally, we construct a limited-birthday distinguisher on 9.5-round GOST using the limited-birthday distinguisher on hash functions proposed at ASIACRYPT 2013. An essential technique used in our distinguisher is a carefully chosen differential trail, which can further exploit degrees of freedom in the inbound phase when launching rebound attacks on the GOST compression function. This technique helps us to reduce the time complexity of the distinguisher significantly. We apply this strategy to Whirlpool, an ISO-standardized hash function, as well. As a result, we construct a limited-birthday distinguisher on 9 out of 10 rounds of Whirlpool, and reduce the time complexity of the previous 7-round distinguisher. To the best of our knowledge, all of our results are the best cryptanalytic results on GOST and Whirlpool in terms of the number of rounds analyzed in the hash function setting.

18:17 [Pub][ePrint]

This paper presents three curious findings about deterministic public-key encryption (D-PKE) that further our understanding of its security, in particular because of the contrast with standard, randomized public-key encryption (R-PKE):

(1) It would appear to be a triviality, for any primitive, that security in the standard model implies security in the random-oracle model, and it is certainly true, and easily proven, for R-PKE. For D-PKE it is not clear and depends on details of the definition. In particular we can show it in the non-uniform case but not in the uniform case.

(2) The power of selective-opening attacks (SOA) comes from an adversary's ability, upon corrupting a sender, to learn not just the message but also the coins used for encryption. For R-PKE, security is achievable. For D-PKE, where there are no coins, one's first impression may be that SOAs are vacuous and security should be easily achievable. We show instead that SOA security is impossible, meaning no D-PKE scheme can achieve it.

(3) For R-PKE, single-user security implies multi-user security, but we show that there are D-PKE schemes secure for a single user and insecure with two users.
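As a toy illustration of the determinism contrast with R-PKE (not taken from the paper): textbook RSA without padding is a deterministic encryption, so equal plaintexts always yield equal ciphertexts, which is exactly why D-PKE needs its own security definitions. The parameters below are tiny and insecure, for demonstration only.

```python
# Toy textbook RSA (no padding). Deterministic: enc(m) is a pure
# function of m and the public key, so ciphertext equality leaks
# plaintext equality -- impossible for an IND-CPA-secure R-PKE.
p, q = 1009, 1013
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m = 424242 % n
c1, c2 = enc(m), enc(m)
assert c1 == c2       # determinism: repeated encryptions are identical
assert dec(c1) == m   # decryption still round-trips
```

Real deployments avoid this by randomized padding (turning the scheme into an R-PKE), which is precisely the resource D-PKE definitions must do without.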