International Association for Cryptologic Research

CryptoDB

Moni Naor

Affiliation: Weizmann Institute of Science

Publications

Year Venue Title
2019 JOFC
Hardness-Preserving Reductions via Cuckoo Hashing
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom function families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a “birthday attack”: after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work, we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
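To make the two-call pattern concrete, here is a minimal Python sketch of cuckoo-hashing-based domain extension, assuming (as an illustration only, not the paper's exact instantiation) that the two cuckoo hash functions are modeled by salted SHA-256 and that the two PRF outputs are combined by XOR:

```python
import hashlib
import hmac

def prf(key: bytes, x: bytes) -> bytes:
    # Stand-in for the original small-domain PRF (HMAC-SHA256 is a placeholder).
    return hmac.new(key, x, hashlib.sha256).digest()

def cuckoo_hash(salt: bytes, x: bytes, out_bytes: int) -> bytes:
    # One of the two cuckoo hash functions, mapping the large domain
    # into the small domain of the original PRF.
    return hashlib.sha256(salt + x).digest()[:out_bytes]

def extended_prf(k1: bytes, k2: bytes, x: bytes) -> bytes:
    # Two hash evaluations, two PRF calls, one XOR.
    y1 = prf(k1, cuckoo_hash(b"h1", x, 16))
    y2 = prf(k2, cuckoo_hash(b"h2", x, 16))
    return bytes(a ^ b for a, b in zip(y1, y2))

print(extended_prf(b"key-one", b"key-two", b"a much longer input string").hex())
```

Because each input is bound to two independent hash positions, an attack needs simultaneous collisions under both hash functions, which is what lets such a construction survive well beyond the $\sqrt{|\mathcal{U}|}$ birthday barrier.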
2018 EUROCRYPT
2018 TCC
2018 TCC
The Security of Lazy Users in Out-of-Band Authentication
Moni Naor, Lior Rotem, Gil Segev
Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on “out-of-band” authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users who recognize each other’s voice can authenticate a short value, Telegram and WhatsApp ask their users to compare 288-bit and 200-bit values, respectively. The existing protocols, however, do not take into account the plausible behavior of users who may be “lazy” and only compare parts of these values (rather than their entirety).

Motivated by such security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically-optimal protocol of Naor, Segev and Smith (CRYPTO ’06) are completely vulnerable to man-in-the-middle attacks when the users consider only half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions that are considered by the lazy users and the adversary’s forgery probability.

Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally-optimal protocol of Vaudenay (CRYPTO ’05) is secure even when executed by lazy users, and its tradeoff matches our lower bound in the computational setting.
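As a toy illustration of why laziness is dangerous (a simulation of the attack surface, not one of the paper's protocols): if the user checks only $k$ of the $n$ positions, a man-in-the-middle who can grind candidate values needs about $2^k$ attempts rather than $2^n$.

```python
import secrets

def lazy_compare(expected: str, received: str, positions: range) -> bool:
    # A "lazy" user checks only a subset of positions of the out-of-band value.
    return all(expected[i] == received[i] for i in positions)

# Tiny parameters keep the demo fast: a 32-bit value of which only the first
# 16 bits are checked is forgeable in about 2^16 grinding attempts.
n, k = 32, 16
target = format(secrets.randbits(n), f"0{n}b")
attempts = 0
while True:
    attempts += 1
    forged = format(secrets.randbits(n), f"0{n}b")
    if lazy_compare(target, forged, range(k)):
        break
print(f"matched the {k} checked positions after {attempts} attempts (~2^{k})")
```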
2017 JOFC
2016 CRYPTO
2016 CRYPTO
2016 TCC
2016 JOFC
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 TCC
2015 TCC
2015 CRYPTO
2015 ASIACRYPT
2014 CRYPTO
2014 EPRINT
2014 EPRINT
2014 EPRINT
2014 ASIACRYPT
2013 TCC
2010 EUROCRYPT
2009 TCC
2009 TCC
2009 ASIACRYPT
2009 CRYPTO
2008 TCC
2008 EPRINT
History-Independent Cuckoo Hashing
Moni Naor, Gil Segev, Udi Wieder
Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization. Various experiments demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes. In this work we construct a practical {\em history-independent} dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and was also found useful in several algorithmic settings. Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates are performed in expected amortized constant time, and membership queries are performed in worst case constant time. Moreover, with high probability, the lookup procedure queries only two memory entries which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial randomness, each set of elements has a unique memory representation.
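A toy Python sketch of the canonical-representation idea, using the naive rule of rebuilding the table from the sorted contents after every update; the paper's construction achieves the same history independence without paying this rebuild cost:

```python
class CanonicalCuckoo:
    # Toy cuckoo hash table whose memory layout depends only on the stored set
    # (and the hash seeds), not on the order of insertions and deletions:
    # after every update it is rebuilt from the sorted contents. This naive
    # rebuild rule is far slower than the paper's construction; it only
    # illustrates what a canonical memory representation means.

    def __init__(self, size=64, seeds=(17, 31)):
        self.size, self.seeds = size, seeds
        self.tables = [[None] * size, [None] * size]
        self.items = set()

    def _h(self, which, x):
        return (x * self.seeds[which] + which) % self.size

    def _place(self, x, which=0, depth=0):
        if depth > self.size:
            raise RuntimeError("cycle detected; a real implementation rehashes")
        i = self._h(which, x)
        evicted, self.tables[which][i] = self.tables[which][i], x
        if evicted is not None:
            self._place(evicted, 1 - which, depth + 1)

    def _rebuild(self):
        self.tables = [[None] * self.size, [None] * self.size]
        for x in sorted(self.items):   # canonical insertion order
            self._place(x)

    def insert(self, x):
        self.items.add(x); self._rebuild()

    def delete(self, x):
        self.items.discard(x); self._rebuild()

    def lookup(self, x):
        # Two independent probes, queryable in parallel.
        return self.tables[0][self._h(0, x)] == x or self.tables[1][self._h(1, x)] == x

t1, t2 = CanonicalCuckoo(), CanonicalCuckoo()
for x in [8, 1, 5]: t1.insert(x)
for x in [5, 8, 1]: t2.insert(x)
assert t1.tables == t2.tables   # same set => identical memory representation
```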
2007 EPRINT
Deterministic History-Independent Strategies for Storing Information on Write-Once Memories
Tal Moran, Moni Naor, Gil Segev
Motivated by the challenging task of designing ``secure'' vote storage mechanisms, we deal with information storage mechanisms that operate in extremely hostile environments. In such environments, the majority of existing techniques for information storage and for security are susceptible to powerful adversarial attacks. In this setting, we propose a mechanism for storing a set of at most $K$ elements from a large universe of size $N$ on write-once memories in a manner that does not reveal the insertion order of the elements. We consider a standard model for write-once memories, in which the memory is initialized to the all $0$'s state, and the only operation allowed is flipping bits from $0$ to $1$. Whereas previously known constructions were either inefficient (required $\Theta(K^2)$ memory), randomized, or employed cryptographic techniques which are unlikely to be available in hostile environments, we eliminate each of these undesirable properties. The total amount of memory used by the mechanism is linear in the number of stored elements and poly-logarithmic in the size of the universe of elements. In addition, we consider one of the classical distributed computing problems: conflict resolution in multiple-access channels. By establishing a tight connection with the basic building block of our mechanism, we construct the first deterministic and non-adaptive conflict resolution algorithm whose running time is optimal up to poly-logarithmic factors.
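A minimal model of the write-once memory assumed above, together with the trivial characteristic-vector scheme that is history independent but needs $\Theta(N)$ bits; this sketch is for intuition only and illustrates the gap the paper's construction closes.

```python
class WriteOnceMemory:
    # Initialized to all zeros; the only allowed operation is flipping 0 -> 1.
    def __init__(self, n_bits: int):
        self.bits = [0] * n_bits

    def flip(self, i: int) -> None:
        self.bits[i] = 1   # flipping 1 -> 0 is physically impossible

# Trivial history-independent scheme: store the characteristic vector of the
# set, one bit per universe element. The layout depends only on the set, never
# on the insertion order -- but it costs Theta(N) memory for a universe of
# size N, whereas the paper stores K elements using memory linear in K and
# only poly-logarithmic in N.
N = 32
mem = WriteOnceMemory(N)
for element in [5, 17, 3]:
    mem.flip(element)
print(mem.bits)
```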
2006 CRYPTO
2006 CRYPTO
2006 EUROCRYPT
2006 EUROCRYPT
2006 JOFC
2006 EPRINT
The Complexity of Online Memory Checking
Moni Naor, Guy Rothblum
Suppose you want to store a large file on a remote and unreliable server. You would like to verify that your file has not been corrupted, so you store a small private (randomized) ``fingerprint'' of the file on your own computer. This is the setting for the well-studied authentication problem, and the size of the required private fingerprint is well understood. We study the problem of sub-linear authentication: suppose you would like to encode and store your file in a way that allows you to verify that it has not been corrupted, but without reading all of it. If you only want to read $t$ bits of the file, how large does the size $s$ of the fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between $s$ and $t$ when the adversary is not computationally bounded, namely $s \times t = \Omega(n)$, where $n$ is the file size. This is an easier case of the online memory checking problem, introduced by Blum, Evans, Gemmel, Kannan and Naor in 1991, and hence the same (tight) lower bound applies also to this problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers and sub-linear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the $s \times t = \Omega(n)$ lower bound in a computational setting implies the existence of one-way functions.
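For intuition about the computational side, here is a sketch of the standard Merkle-tree-style checker (assuming one-way functions, with SHA-256 as a stand-in): the local fingerprint is a single hash value, and each verified read touches only $O(\log n)$ bits of the encoding, far below the information-theoretic $s \times t = \Omega(n)$ bound.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(blocks):
    # Merkle tree over the file blocks (len(blocks) must be a power of two);
    # the user keeps only the root as the private fingerprint.
    levels = [[H(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def verified_read(levels, blocks, i):
    # Read block i and re-hash along its root path: O(log n) work and reads.
    h, node = H(blocks[i]), i
    for level in levels[:-1]:
        sibling = level[node ^ 1]
        h = H(h, sibling) if node % 2 == 0 else H(sibling, h)
        node //= 2
    return h == levels[-1][0]

blocks = [bytes([b]) * 4 for b in range(8)]
levels = build_tree(blocks)
assert verified_read(levels, blocks, 5)
blocks[5] = b"corrupted!"
assert not verified_read(levels, blocks, 5)
```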
2006 EPRINT
Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared Key Models
Moni Naor, Gil Segev, Adam Smith
We address the message authentication problem in two seemingly different communication models. In the first model, the sender and receiver are connected by an insecure channel and by a low-bandwidth auxiliary channel that enables the sender to ``manually'' authenticate one short message to the receiver (for example, by typing a short string or comparing two short strings). We consider this model in a setting where no computational assumptions are made, and prove that for any $0 < \epsilon < 1$ there exists a $\log^* n$-round protocol for authenticating $n$-bit messages, in which only $2 \log(1 / \epsilon) + O(1)$ bits are manually authenticated, and any adversary (even computationally unbounded) has probability at most $\epsilon$ of cheating the receiver into accepting a fraudulent message. Moreover, we develop a proof technique showing that our protocol is essentially optimal by providing a lower bound of $2 \log(1 / \epsilon) - O(1)$ on the required length of the manually authenticated string. The second model we consider is the traditional message authentication model. In this model the sender and the receiver share a short secret key; however, they are connected only by an insecure channel. We apply the proof technique above to obtain a lower bound of $2 \log(1 / \epsilon) - 2$ on the required Shannon entropy of the shared key. This settles an open question posed by Gemmell and Naor (CRYPTO '93). Finally, we prove that one-way functions are {\em necessary} (and sufficient) for the existence of protocols breaking the above lower bounds in the computational setting.
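A quick worked instance of the tradeoff (the $O(1)$ additive terms from the theorem are dropped, so the figure is indicative only):

```python
import math

def manual_bits(epsilon: float) -> int:
    # Length of the manually authenticated string in the protocol above:
    # 2*log2(1/eps) + O(1) bits, essentially matching the
    # 2*log2(1/eps) - O(1) lower bound (O(1) terms omitted here).
    return math.ceil(2 * math.log2(1 / epsilon))

# To cap the forgery probability at 2^-20, for n-bit messages of any length,
# about 40 manually authenticated bits suffice, over log*(n) rounds.
print(manual_bits(2 ** -20))   # -> 40
```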
2005 CRYPTO
2005 EUROCRYPT
2005 TCC
2005 JOFC
2004 EUROCRYPT
2003 CRYPTO
2003 CRYPTO
2002 CRYPTO
2002 JOFC
2002 EPRINT
On Chosen Ciphertext Security of Multiple Encryptions
We consider the security of multiple and possibly related plaintexts in the context of a chosen ciphertext attack. That is, in addition to (and concurrently with) obtaining encryptions of multiple plaintexts under the same key, the attacker may issue encryption and decryption queries as well as partial information queries. Loosely speaking, an encryption scheme is considered secure under such attacks if all that the adversary can learn from such attacks about the selected plaintexts can be obtained from the corresponding partial information queries. The above definition extends the definition of semantic security under chosen ciphertext attacks (CCA), which is also formulated in this work. The extension is in considering the security of multiple plaintexts rather than the security of a single plaintext. We prove that both these formulations are equivalent to the standard formulation of CCA security, which refers to indistinguishability of encryptions. The good news is that any encryption scheme that is secure in the standard CCA sense is in fact secure in the extended model. The treatment holds both for public-key and private-key encryption schemes.
2001 CRYPTO
2001 EPRINT
Pseudo-Random Functions and Factoring
Factoring integers is the most established problem on which cryptographic primitives are based. This work presents an efficient construction of {\em pseudorandom functions} whose security is based on the intractability of factoring. In particular, we are able to construct efficient length-preserving pseudorandom functions where each evaluation requires only a {\em constant} number of modular multiplications per output bit. This is substantially more efficient than any previous construction of pseudorandom functions based on factoring, and matches (up to a constant factor) the efficiency of the best known factoring-based {\em pseudorandom bit generators}.
2001 EPRINT
Anti-persistence: History Independent Data Structures
Moni Naor, Vanessa Teague
Many data structures give away much more information than they were intended to. Whenever privacy is important, we need to be concerned that it might be possible to infer information from the memory representation of a data structure that is not available through its ``legitimate'' interface. Word processors that quietly maintain old versions of a document are merely the most egregious example of a general problem. We deal with data structures whose current memory representation does not reveal their history. We focus on dictionaries, where this means revealing nothing about the order of insertions or deletions. Our first algorithm is a hash table based on open addressing, allowing $O(1)$ insertion and search. We also present a history independent dynamic perfect hash table that uses space linear in the number of elements inserted and has expected amortized insertion and deletion time $O(1)$. To solve the dynamic perfect hashing problem we devise a general scheme for history independent memory allocation. For fixed-size records this is quite efficient, with insertion and deletion both linear in the size of the record. Our variable-size record scheme is efficient enough for dynamic perfect hashing but not for general use. The main open problem we leave is whether it is possible to implement a variable-size record scheme with low overhead.
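One classical way to realize such an order-oblivious open-addressing table is ordered (``Robin Hood'') probing with a fixed priority rule, which forces a unique layout for every set. The sketch below is insert-only and illustrative; the paper's constructions are more refined and also handle deletions.

```python
class CanonicalLinearProbing:
    # Linear-probing table in which every slot keeps the highest-priority key
    # that probes it (priority: larger displacement first, then larger key).
    # With this fixed rule the final array depends only on the stored set,
    # not on the insertion order. History-independent deletion needs backward
    # shifting, omitted here.

    def __init__(self, size=16, seed=7):
        self.size, self.seed = size, seed
        self.slots = [None] * size

    def _home(self, x):
        return (x * self.seed) % self.size

    def _priority(self, x, pos):
        return ((pos - self._home(x)) % self.size, x)

    def insert(self, x):
        pos = self._home(x)
        while True:
            if self.slots[pos] is None:
                self.slots[pos] = x
                return
            if self._priority(x, pos) > self._priority(self.slots[pos], pos):
                x, self.slots[pos] = self.slots[pos], x   # displace the loser
            pos = (pos + 1) % self.size

t1, t2 = CanonicalLinearProbing(), CanonicalLinearProbing()
for x in [10, 26, 5]: t1.insert(x)      # 10 and 26 collide (same home slot)
for x in [5, 26, 10]: t2.insert(x)
assert t1.slots == t2.slots             # same set => same memory layout
```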
2001 EPRINT
Revocation and Tracing Schemes for Stateless Receivers
We deal with the problem of a center sending a message to a group of users such that some subset of the users is considered revoked and should not be able to obtain the content of the message. We concentrate on the stateless receiver case, where the users do not (necessarily) update their state from session to session. We present a framework called the Subset-Cover framework, which abstracts a variety of revocation schemes, including some previously known ones. We provide sufficient conditions that guarantee the security of a revocation algorithm in this class. We describe two explicit Subset-Cover revocation algorithms; these algorithms are very flexible and work for any number of revoked users. The schemes require storage at the receiver of $\log N$ and $\frac{1}{2} \log^2 N$ keys respectively ($N$ is the total number of users), and in order to revoke $r$ users the required message lengths are $r \log N$ and $2r$ keys, respectively. We also provide a general traitor tracing mechanism that can be integrated with any Subset-Cover revocation scheme that satisfies a ``bifurcation property''. This mechanism does not need an a priori bound on the number of traitors and does not expand the message length by much compared to the revocation of the same set of traitors. The main improvements of these methods over previously suggested methods, when adapted to the stateless scenario, are: (1) reducing the message length to $O(r)$ regardless of the coalition size, while maintaining a single decryption at the user's end; and (2) providing a seamless integration between the revocation and tracing mechanisms, so that tracing does not require any change to the revocation algorithm.
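For intuition, here is a small sketch in the style of the first scheme (the one with $\log N$ keys per user, which resembles a complete-subtree cover): non-revoked users are covered by maximal subtrees containing no revoked leaf, and the broadcast carries one encryption per cover subtree. Key derivation and encryption are omitted; this is a sketch of the cover computation only.

```python
def complete_subtree_cover(depth, revoked):
    # Cover the non-revoked leaves of a binary tree of 2**depth users by
    # maximal subtrees containing no revoked leaf. Nodes are numbered
    # heap-style: root = 1, children of i are 2i and 2i+1, leaf u is
    # node 2**depth + u.
    n_leaves = 1 << depth
    steiner = set()
    for u in revoked:              # mark the paths from revoked leaves to root
        node = n_leaves + u
        while node >= 1 and node not in steiner:
            steiner.add(node)
            node //= 2
    cover = []
    def walk(node):
        if node not in steiner:    # clean subtree: one subtree key covers it
            cover.append(node)
        elif node < n_leaves:      # internal node on a revoked path: recurse
            walk(2 * node)
            walk(2 * node + 1)
    walk(1) if revoked else cover.append(1)
    return cover

# 8 users, users 2 and 5 revoked: the cover is [4, 11, 12, 7].
print(complete_subtree_cover(3, [2, 5]))
```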
2001 EPRINT
Communication Complexity and Secure Function Evaluation
Moni Naor, Kobbi Nissim
A secure function evaluation protocol allows two parties to jointly compute a function $f(x,y)$ of their inputs in a manner not leaking more information than necessary. A major result in this field is: ``any function $f$ that can be computed using polynomial resources can be computed securely using polynomial resources'' (where `resources' refers to communication and computation). This result follows by a general transformation from any circuit for $f$ to a secure protocol that evaluates $f$. Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of $f$. For the design of efficient secure protocols we suggest two new methodologies that differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) representation of $f$. We start with an efficient (insecure) protocol for $f$ and transform it into a secure protocol. In other words, ``any function $f$ that can be computed using communication complexity $c$ can be computed securely using communication complexity that is polynomial in $c$ and a security parameter''. The second methodology uses the circuit computing $f$, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of $f$ on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies, resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the ``millionaires problem'', where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.
2000 ASIACRYPT
2000 CRYPTO
2000 EPRINT
Constructing Pseudo-Random Permutations with a Prescribed Structure
Moni Naor, Omer Reingold
We show how to construct pseudo-random permutations that satisfy a certain cycle restriction, for example that the permutation be cyclic (consisting of one cycle containing all the elements) or an involution (a self-inverse permutation) with no fixed points. The construction can be based on any (unrestricted) pseudo-random permutation. The resulting permutations are defined succinctly and their evaluation at a given point is efficient. Furthermore, they enjoy a {\em fast forward} property, i.e. it is possible to iterate them at a very small cost.
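A minimal sketch of the natural conjugation idea (stated here as an assumption about the construction, close in spirit to the paper): take an unrestricted PRP $\sigma$ and a fixed canonical full cycle $c$, and define $\pi = \sigma^{-1} \circ c \circ \sigma$. Conjugation preserves cycle structure, so $\pi$ is a full cycle, and $\pi^k = \sigma^{-1} \circ c^k \circ \sigma$ gives the fast-forward property; an involution with no fixed points is obtained the same way by conjugating a canonical fixed-point-free involution.

```python
def make_cyclic_prp(sigma, sigma_inv, n):
    # pi = sigma^-1 . c . sigma with c the fixed cycle x -> x+1 (mod n).
    # pi is a full cycle, and k steps cost one sigma, one addition, one
    # sigma^-1 -- the "fast forward" property.
    def pi(x, k=1):
        return sigma_inv((sigma(x) + k) % n)
    return pi

# Toy (non-cryptographic!) stand-in for the unrestricted PRP sigma:
n = 7
perm = [3, 0, 6, 2, 5, 1, 4]
inv = [perm.index(i) for i in range(n)]
pi = make_cyclic_prp(lambda x: perm[x], lambda y: inv[y], n)
assert pi(pi(pi(0))) == pi(0, k=3)   # fast-forward agrees with iteration
```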
1999 CRYPTO
1999 EUROCRYPT
1999 EPRINT
Concurrent Zero-Knowledge
One of the toughest challenges in designing cryptographic protocols is to design them so that they will remain secure even when composed. For example, concurrent executions of a zero-knowledge protocol by a single prover (with one or more verifiers) may leak information and may not be zero-knowledge in toto. In this work we: (1) suggest time as a mechanism for designing concurrent cryptographic protocols, and in particular for maintaining zero-knowledge under concurrent execution; (2) introduce the notion of Deniable Authentication and connect it to the problem of concurrent zero-knowledge. We do not assume global synchronization; however, we assume an $(\alpha,\beta)$ timing constraint: for any two processors $P_1$ and $P_2$, if $P_1$ measures $\alpha$ elapsed time on its local clock and $P_2$ measures $\beta$ elapsed time on its local clock, and $P_2$ starts after $P_1$ does, then $P_2$ will finish after $P_1$ does. We show that for an adversary controlling all the processors' clocks (as well as their communication channels), but which is constrained by an $(\alpha,\beta)$ constraint, there exist four-round almost concurrent zero-knowledge interactive proofs and perfect concurrent zero-knowledge arguments for every language in NP. We also address the more specific problem of Deniable Authentication, for which we propose several particularly efficient solutions. Deniable Authentication is of independent interest, even in the sequential case; our concurrent solutions yield sequential solutions, without recourse to timing, i.e., in the standard model.
1999 JOFC
1998 CRYPTO
1998 CRYPTO
1998 EUROCRYPT
1998 EUROCRYPT
1998 JOFC
1998 EPRINT
Private Information Retrieval by Keywords
Private information retrieval (PIR) schemes enable a user to access one or more servers that hold copies of a database and {\em privately} retrieve parts of the $n$ bits of data stored in the database. This means that the queries give each individual database no partial information (in the information theoretic or computational sense) on the identity of the item retrieved by the user. All known PIR schemes assume that the user knows the {\em physical address} of the sought item. This is usually not the case when accessing a public database that is not managed by the user. Such databases are typically presented with keywords, which are then internally translated (at the database end) to physical addresses, using an appropriate search structure (for example, a hash table or a binary tree). In this note we describe a simple, modular way to privately access data by keywords. It combines {\em any} conventional search structure with {\em any} underlying PIR scheme (including single server schemes). The transformation requires no modification in the way that the search structure is maintained. Therefore the same database will support both private and regular (non private) searches.
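A sketch of the modular transformation for the simplest search structure, binary search over a sorted table; `pir_read` is a hypothetical black-box stand-in for the query interface of any underlying PIR scheme.

```python
def keyword_pir_lookup(pir_read, n_entries, keyword):
    # Private lookup by keyword over a sorted table of (key, value) pairs,
    # using any PIR scheme as a black box: the user runs an ordinary binary
    # search, so only O(log n) PIR queries are made, and since each probe is
    # hidden by PIR, the servers learn nothing about the search path.
    lo, hi = 0, n_entries - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        key, value = pir_read(mid)          # each probe is one PIR query
        if key == keyword:
            return value
        if key < keyword:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

# Demo with a trivially insecure 'PIR' that just indexes a local table:
table = sorted([("alice", 1), ("bob", 2), ("carol", 3), ("dave", 4)])
print(keyword_pir_lookup(lambda i: table[i], len(table), "carol"))  # -> 3
```

Note that the search structure itself is unmodified; only the physical reads are replaced by PIR queries, exactly as the transformation requires.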
1998 JOFC
1997 CRYPTO
1997 CRYPTO
1997 EPRINT
Visual Authentication and Identification
Moni Naor, Benny Pinkas
The problems of authentication and identification have received wide interest in cryptographic research. However, there has been no satisfactory solution for the problem of authentication by a human recipient who does not use any trusted computational device. The problem of authentication arises for example in the context of smartcard--human interaction, in particular in the context of electronic wallets. The problem of identification is ubiquitous in communication over insecure networks. This paper introduces visual authentication and visual identification methods, which are authentication and identification methods for human users based on visual cryptography. These methods are very natural and easy to use, and can be implemented using very common ``low tech'' technology. The methods we suggest are efficient in the sense that a single transparency can be used for several authentications or for several identifications. The visual authentication methods we suggest are not limited to authenticating textual messages, and can be used to authenticate any image. An important contribution of this paper is the introduction of a framework for proving the security of protocols in which humans take an active part. We rigorously prove the security of our schemes using this framework.
1996 EPRINT
Deniable Encryption
Consider a situation in which the transmission of encrypted messages is intercepted by an adversary who can later ask the sender to reveal the random choices (and also the secret key, if one exists) used in generating the ciphertext, thereby exposing the cleartext. An encryption scheme is {\em deniable} if the sender can generate `fake random choices' that will make the ciphertext `look like' an encryption of a different cleartext, thus keeping the real cleartext private. Analogous requirements can be formulated with respect to attacking the receiver and with respect to attacking both parties. In this paper we introduce deniable encryption and propose constructions of schemes with polynomial deniability. In addition to being interesting by itself, and having several applications, deniable encryption provides a simplified and elegant construction of {\em adaptively secure} multiparty computation.
1996 EPRINT
Visual Cryptography II: Improving the Contrast Via the Cover Base
Moni Naor, Adi Shamir
In Eurocrypt '94 we proposed a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations, by placing two transparencies on top of each other and using the decoder's (human) visual system. One of the drawbacks of that proposal was a loss in contrast: a black pixel is translated in the reconstruction into a black region, but a white pixel is translated into a grey region (half black and half white). In this paper we propose an alternative reconstruction model with a different set of operations (which we call the ``Cover'' semi-group). In this model we are able to obtain a better contrast than is possible in the previous one. We prove tight bounds on the contrast as a function of the complexity of the scheme. We also show that for constructing $k$-out-of-$n$ secret sharing schemes, when $n$ and $k$ are greater than 2, the new method is not applicable.
1996 EPRINT
On the Construction of Pseudo-Random Permutations: Luby-Rackoff Revisited
Moni Naor, Omer Reingold
Luby and Rackoff showed a method for constructing a pseudo-random permutation from a pseudo-random function. The method is based on composing four (or three for weakened security) so-called Feistel permutations, each of which requires the evaluation of a pseudo-random function. We reduce somewhat the complexity of the construction and simplify its proof of security by showing that two Feistel permutations are sufficient together with initial and final pair-wise independent permutations. The revised construction and proof provide a framework in which similar constructions may be brought up and their security can be easily proved. We demonstrate this by presenting some additional adjustments of the construction that achieve the following: 1. Reduce the success probability of the adversary. 2. Provide a construction of pseudo-random permutations with large input size using pseudo-random functions with small input size. 3. Provide a construction of a pseudo-random permutation using a single pseudo-random function.
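A toy sketch of the revised construction, with non-cryptographic stand-ins for the pseudo-random functions and a simple affine map as the pairwise independent permutation; whether the final layer is applied as $h_2$ or $h_2^{-1}$ is a detail omitted here.

```python
def feistel(f):
    # One Feistel permutation on 8-bit halves: (L, R) -> (R, L xor f(R)).
    return lambda L, R: (R, L ^ f(R))

def pairwise_indep(a, b):
    # Toy stand-in for a pairwise independent permutation on the full 16-bit
    # block: x -> a*x + b mod 2^16 with a odd.
    def h(L, R):
        x = (a * ((L << 8) | R) + b) & 0xFFFF
        return x >> 8, x & 0xFF
    return h

def construct_prp(h1, f1, f2, h2):
    # P = h2 . Feistel(f2) . Feistel(f1) . h1 -- two PRF calls per evaluation.
    r1, r2 = feistel(f1), feistel(f2)
    def P(L, R):
        return h2(*r2(*r1(*h1(L, R))))
    return P

# Toy (non-cryptographic) round functions standing in for the PRFs:
P = construct_prp(pairwise_indep(5, 9),
                  lambda r: (r * 7 + 1) & 0xFF,
                  lambda r: (r * 11 + 3) & 0xFF,
                  pairwise_indep(13, 2))
print(P(0x12, 0x34))
```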
1996 JOFC
1994 CRYPTO
1994 CRYPTO
1994 EUROCRYPT
1993 CRYPTO
1993 CRYPTO
1992 CRYPTO
1992 CRYPTO
1992 CRYPTO
1991 JOFC
1989 CRYPTO
1988 CRYPTO

Program Committees

TCC 2015
TCC 2008
Eurocrypt 2007
Crypto 2006
Crypto 2005
TCC 2004
Eurocrypt 2003
PKC 2003
Asiacrypt 2000
Eurocrypt 2000
Crypto 1997
Crypto 1995
Crypto 1991