International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for staying up to date are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-01-07
10:17 [Pub][ePrint]

Password hashing is a technique commonly implemented by a server to protect clients' passwords: the server applies a one-way transformation to the password, turning it into another string called the hashed password. In this paper, we introduce Rig, a secure password hashing framework based on secure cryptographic hash functions. It provides the flexibility to choose different functions for different phases of the construction. The design is simple to implement in software and is flexible, as the memory parameter is independent of the time parameter (no actual time-memory trade-off), and it is strictly sequential (difficult to parallelize) with comparatively large memory consumption, which provides strong resistance against attackers using multiple processing units. It supports client-independent updates, i.e., the server can increase the security parameters by updating existing password hashes without knowing the password. Rig can also support a server-relief protocol, where the client bears most of the effort to compute the password hash while the effort at the server side is minimal. We analyze Rig and show that our proposal provides exponential time complexity against the low-memory attack.
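As a rough illustration of client-independent updates (a sketch with a plain iterated hash, not Rig's actual memory-hard construction; the function names are hypothetical), note that because hash iteration composes, the server can raise the work factor of a stored hash without ever seeing the password:

```python
import hashlib

def hash_password(password: bytes, salt: bytes, t: int) -> bytes:
    """Iterate a cryptographic hash t times over password || salt."""
    h = password + salt
    for _ in range(t):
        h = hashlib.sha256(h).digest()
    return h

def server_side_update(stored_hash: bytes, extra_iters: int) -> bytes:
    """Client-independent update: increase the work factor of an
    already-stored hash without knowing the underlying password."""
    h = stored_hash
    for _ in range(extra_iters):
        h = hashlib.sha256(h).digest()
    return h
```

Updating a hash from 1000 to 1500 iterations server-side yields the same value a client would compute from scratch at 1500 iterations.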

10:17 [Pub][ePrint]

We study simulation-based, selective opening security against chosen-ciphertext attacks (SIM-SO-CCA security) for public key encryption (PKE). In a selective opening, chosen-ciphertext attack (SO-CCA), an adversary has access to a decryption oracle, sees a vector of ciphertexts, adaptively chooses to open some of them, and obtains the corresponding plaintexts and the random coins used in the creation of those ciphertexts. The SIM-SO-CCA notion captures the security of unopened ciphertexts with respect to probabilistic polynomial-time (ppt) SO-CCA adversaries in a semantic way: whatever a ppt SO-CCA adversary can compute can also be simulated by a ppt simulator with access only to the opened messages. Building on techniques used to achieve weak deniable encryption and non-committing encryption, Fehr *et al.* (Eurocrypt 2010) presented an approach to constructing SIM-SO-CCA secure PKE from extended hash proof systems (EHPSs), collision-resistant hash functions and an information-theoretic primitive called Cross Authentication Codes (XACs). We generalize their approach by introducing a special type of Key Encapsulation Mechanism (KEM) and using it to build SIM-SO-CCA secure PKE. We investigate what properties are needed from the KEM to achieve SIM-SO-CCA security. We also give three instantiations of our construction. The first uses hash proof systems, the second relies on the $n$-Linear assumption, and the third uses indistinguishability obfuscation (iO) in combination with extracting, puncturable Pseudo-Random Functions in a similar way to Sahai and Waters (STOC 2014). Our results establish the existence of SIM-SO-CCA secure PKE assuming only the existence of one-way functions and iO. This result further highlights the simplicity and power of iO in constructing different cryptographic primitives.

2015-01-06
13:17 [Pub][ePrint]

The onion routing (OR) network Tor provides anonymity to its users by routing their encrypted traffic through three proxies (or nodes). The key cryptographic challenge here is to establish symmetric session keys using a secure key exchange between the anonymous users and the selected nodes. The Tor network currently employs a one-way authenticated key exchange (1W-AKE) protocol called 'ntor' for this purpose. Nevertheless, ntor, like other known 1W-AKE protocols, relies solely on classical Diffie-Hellman (DH) type assumptions for its (forward) security, and thus the privacy of today's anonymous communication cannot be ensured once quantum computers arrive.

In this paper, we demonstrate the utility of quantum-secure lattice-based cryptography for solving this problem in onion routing. In particular, we present a novel hybrid 1W-AKE protocol (HybridOR) that is secure under both the lattice-based ring learning with errors (ring-LWE) assumption and the gap DH assumption. Due to its hybrid design, HybridOR is not only resilient against quantum attacks but at the same time allows the OR nodes to use their current DH public keys, and it consequently requires no modification to the current Tor public key infrastructure. Moreover, thanks to recent progress in lattice-based cryptography in the form of efficient ring-based constructions, our protocol is also computationally more efficient than the currently employed 1W-AKE protocol ntor, and it introduces only small and manageable communication overhead to the Tor protocol.
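A minimal sketch of the hybrid idea (not the actual HybridOR protocol; the domain-separation label and function are hypothetical): both parties hash the two shared secrets together with the transcript, so the derived session key remains secure as long as *either* the DH or the ring-LWE secret does.

```python
import hashlib

def hybrid_kdf(dh_secret: bytes, lwe_secret: bytes, transcript: bytes) -> bytes:
    """Combine a classical DH shared secret and a lattice-based shared
    secret into one session key; breaking the key requires breaking both."""
    return hashlib.sha256(b"hybrid-kdf-v1" + dh_secret + lwe_secret + transcript).digest()
```

Both endpoints, having agreed on the same two shared secrets and seen the same transcript, derive an identical 32-byte session key.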

2015-01-05
19:17 [Pub][ePrint]

We present techniques to construct Oblivious RAM schemes with constant bandwidth, client storage, and server storage blowup in the (single-server) client-server setting. Crucially, our constructions *do not rely on Fully Homomorphic Encryption (FHE) or Somewhat Homomorphic Encryption (SWHE)*, but instead rely only on a public-key additively homomorphic encryption scheme such as the Paillier or Damgård-Jurik cryptosystem.

The key mechanism that we use to get constant bandwidth overhead is *layered encryption*: to perform an ORAM eviction operation, the server performs an oblivious permutation operation on the eviction candidate blocks *without sending any data blocks back to the client*. After each permutation, each block that was involved in the permutation gets an additional layer of encryption. Importantly, the bandwidth needed for this operation is independent of the data block size.

If layered encryption is combined naïvely with previous ORAM schemes, the number of layers grows unbounded (with the number of accesses made to the ORAM). This blows up server storage and bandwidth (due to ciphertext blowup) as well as client computation (as the client must "peel off" all layers to get the underlying plaintext). To address this challenge, we propose *Onion ORAM*, a new ORAM scheme that is designed and optimized to bound the number of encryption layers on each block to $\tilde{O}(\log N)$, where $N$ is the number of blocks in the ORAM, *i.e., independent of the number of ORAM accesses*.

Putting it together, with sufficiently large block size $B = \Omega(k \log^2 N \log^2 \log N)$ bits for a security parameter $k$, Onion ORAM achieves $O(B)$ bandwidth, $O(B)$ client storage and $O(BN)$ server storage: only a constant-factor blowup in all three metrics. Using the Damgård-Jurik cryptosystem as our underlying primitive, Onion ORAM achieves the aforementioned asymptotics for block sizes $B = \Omega(\log^5 N \log^2 \log N)$ bits and security against known attacks with complexity $O(N^{\omega(1)})$, superpolynomial in the security parameter.
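To illustrate the kind of additively homomorphic encryption the construction relies on, here is a toy Paillier instance with tiny parameters (illustrative only, not secure, and not Onion ORAM itself): the server can combine ciphertexts, e.g. to apply a permutation expressed as homomorphic selections, without ever decrypting.

```python
from math import gcd

# Toy Paillier cryptosystem with tiny primes -- illustrative only, NOT secure.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                      # standard generator choice

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def enc(m: int, r: int) -> int:
    """Encrypt m with randomness r (r must be coprime to n)."""
    assert gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: the server multiplies ciphertexts to add plaintexts,
# without seeing either plaintext.
c_sum = (enc(5, 7) * enc(8, 11)) % n2
```

Multiplying the two ciphertexts yields an encryption of 5 + 8 = 13; each such homomorphic operation also re-randomizes the result, which is the spirit of the layered-encryption trick.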

19:17 [Pub][ePrint]

Protecting user data entails providing authenticated users access to their data. The most prevalent, and probably also the most practical, approach to the latter is by username and password. With password breaches through server compromise now reaching billions of affected passwords, distributing the password files and user data over multiple servers is not just a good idea; it is a dearly needed solution to a topical problem. Threshold password-authenticated secret sharing (TPASS) protocols enable users to share secret data among a set of servers so that they can later recover that data using a single password. No coalition of servers up to a certain threshold can learn anything about the data or perform an offline dictionary attack on the password. Several TPASS protocols have appeared in the literature, and one is even available commercially. Although designed to tolerate corrupted servers, unfortunately none of these protocols provides details, let alone security proofs, about the steps that need to be taken when a compromise actually occurs and how to proceed. Indeed, they consider static corruptions only, which, for instance, does not model real-world attacks by hackers.

We provide the first TPASS protocol that is provably secure against adaptive server corruptions. Moreover, our protocol contains an efficient recovery procedure allowing one to re-initialize servers to recover from corruption. We prove our protocol secure in the universal composability model, where servers can be corrupted adaptively at any time; the users' passwords and secrets remain safe as long as both servers are not corrupted at the same time. Our protocol does not require random oracles but does assume that servers have certified public keys.
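The threshold-sharing component can be illustrated with textbook Shamir secret sharing over a prime field (a sketch only, not the TPASS protocol itself, which additionally ties recovery to a password and defends against adaptive corruption):

```python
import random

P = 2**61 - 1  # a Mersenne prime defining the share field

def share(secret: int, t: int, n: int):
    """Split secret into n shares; any t of them reconstruct it,
    while t-1 shares reveal nothing (information-theoretically)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With threshold t = 2 and n = 3 servers, any two shares recover the secret; a single share is a uniformly random point.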

19:17 [Pub][ePrint]

We present Balloon, a forward-secure append-only persistent authenticated data structure. Balloon is designed for an initially trusted author that generates events to be stored in a data structure (the Balloon) kept by an untrusted server, and clients that query this server for events intended for them based on keys and snapshots. The data structure is persistent such that clients can query keys for the current or past versions of the data structure based upon snapshots, which are generated by the author as new events are inserted. The data structure is authenticated in the sense that the server can verifiably prove all operations with respect to snapshots created by the author. No event inserted into the data structure prior to the compromise of the author can be modified or deleted without detection.

Balloon supports efficient (non-)membership proofs and verifiable inserts by the author, enabling the author to verify the correctness of inserts without having to store a copy of the Balloon. We also sketch how to use Balloon to enable client-specific forward-secure author consistency. In case of author inconsistency, a client can make a publicly verifiable statement that shows that the author was inconsistent with respect to his events.
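The append-only, tamper-evident property can be illustrated with a minimal hash chain (a simplified stand-in, not Balloon's actual data structure, which also supports efficient (non-)membership proofs and verifiable inserts):

```python
import hashlib

class AppendOnlyLog:
    """Each snapshot commits to the entire event history via a hash chain."""

    def __init__(self):
        self.events = []
        self.head = b"\x00" * 32  # running hash over all events so far

    def append(self, event: bytes) -> bytes:
        """Insert an event and return the new snapshot (chain head)."""
        self.head = hashlib.sha256(self.head + event).digest()
        self.events.append(event)
        return self.head

def verify(events, snapshot: bytes) -> bool:
    """Recompute the chain; modifying or deleting any past event
    changes the head and is therefore detected."""
    h = b"\x00" * 32
    for e in events:
        h = hashlib.sha256(h + e).digest()
    return h == snapshot
```

Any client holding a snapshot can detect a server that rewrites or drops an event inserted before that snapshot was issued.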

10:17 [Pub][ePrint]

At ASIACRYPT 2014, Bilgin et al. describe higher-order threshold implementations: a masking countermeasure claiming resistance against higher-order differential power analysis attacks. In this note, we point out that higher-order threshold implementations do not necessarily provide higher-order security. We give as counterexamples two concrete higher-order threshold implementations that exhibit a second-order flaw.

10:17 [Pub][ePrint]

MDS codes and matrices are closely related to combinatorial objects like orthogonal arrays and multipermutations. Conventional MDS codes and matrices were defined over finite fields, but several generalizations of this concept have since been proposed. In this note, we give a criterion for verifying whether a map is MDS or not.
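For the special case of matrices over a finite field, the classical criterion is that a matrix is MDS iff every square submatrix is nonsingular. A brute-force check over GF(p) (this is the textbook matrix criterion, not necessarily the note's criterion for general maps):

```python
from itertools import combinations

def det_mod(M, p):
    """Determinant of a square matrix over GF(p) via Gaussian elimination."""
    M = [row[:] for row in M]
    n, d = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d  # row swap flips the sign
        d = d * M[c][c] % p
        inv = pow(M[c][c], -1, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for k in range(c, n):
                M[r][k] = (M[r][k] - f * M[c][k]) % p
    return d % p

def is_mds(M, p):
    """True iff every square submatrix of M is nonsingular over GF(p)."""
    rows, cols = len(M), len(M[0])
    for s in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), s):
            for ci in combinations(range(cols), s):
                sub = [[M[r][c] for c in ci] for r in ri]
                if det_mod(sub, p) == 0:
                    return False
    return True
```

The check is exponential in the matrix dimensions, which is acceptable for the small matrices used as diffusion layers in block ciphers.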

10:17 [Pub][ePrint]

Related-key attacks (RKAs) allow an adversary to observe the outcomes of a cryptographic primitive under not only its original secret key, e.g., $s$, but also a sequence of modified keys $\phi(s)$, where $\phi$ is chosen by the adversary from a class $\Phi$ of so-called related-key derivation (RKD) functions. This paper extends the notion of non-malleable key derivation functions (nm-KDFs), introduced by Faust et al. (EUROCRYPT '14), to *continuous* nm-KDFs. Continuous nm-KDFs can protect against an a priori *unbounded* number of RKA queries, instead of just a single tampering attack as in the definition of nm-KDFs. Informally, our continuous non-malleability captures the scenario where the adversary can tamper with the original secret key repeatedly and adaptively. We present a novel construction of a continuous nm-KDF for any polynomials of bounded degree over a finite field. Essentially, our result extends to richer RKD function classes possessing the properties of *high output entropy and input-output collision resistance*. The technical tool employed in the construction is the one-time lossy filter (Qin et al., ASIACRYPT '13), which can be efficiently obtained under standard assumptions, e.g., DDH and DCR. We propose a framework for constructing $\Phi$-RKA-secure IBE, PKE and signature schemes, using a continuous nm-KDF for the same $\Phi$-class of RKD functions. Applying our construction of continuous nm-KDFs to this framework, we obtain the first RKA-secure IBE, PKE and signature schemes for a class of polynomial RKD functions of bounded degree under *standard* assumptions, whereas previous constructions for the same class of RKD functions all rely on non-standard assumptions, e.g., the $d$-extended DBDH assumption.

10:17 [Pub][ePrint]

In this paper we study two fundamental functionalities, oblivious polynomial evaluation in the exponent and set-intersection, and introduce a new technique for designing efficient secure protocols for these problems (and others). Our starting point is the [BenabbasGV11] technique (CRYPTO 2011) for verifiable delegation of polynomial evaluations using algebraic PRFs. We use this tool, which is useful for achieving verifiability in the outsourced setting, to achieve privacy in the standard two-party setting. Our results imply new, simple and efficient oblivious polynomial evaluation (OPE) protocols. We further show that our OPE protocols readily yield protocols for secure set-intersection, implying much simpler protocols in the plain model. As a side result, we demonstrate the usefulness of algebraic PRFs for various search functionalities, such as keyword search and oblivious transfer with adaptive queries. Our protocols are secure under full simulation-based definitions in the presence of malicious adversaries.

2015-01-02
22:17 [Pub][ePrint]

We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete.

Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, no such limitations are known for indistinguishability obfuscation, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.

Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.