## CryptoDB

### Ueli Maurer

#### Publications


##### Revisiting (R)CCA Security and Replay Protection (PKC 2021)

This paper takes a fresh approach to systematically characterizing, comparing, and understanding CCA-type security definitions for public-key encryption (PKE), a topic with a long history. The justification for a concrete security definition X is relative to a benchmark application (e.g. confidential communication): Does the use of a PKE scheme satisfying X imply the security of the application? Because unnecessarily strong definitions may lead to unnecessarily inefficient schemes or unnecessarily strong computational assumptions, security definitions should be as weak as possible, i.e. as close as possible to (but above) the benchmark. Understanding the hierarchy of security definitions, partially ordered by the implication (i.e. at least as strong) relation, is hence important, as is placing the relevant applications as benchmark levels within the hierarchy.
CCA-2 security is apparently the strongest notion, but because it is arguably too strong, Canetti, Krawczyk, and Nielsen (Crypto 2003) proposed the relaxed notion of Replayable CCA (RCCA) security as perhaps the weakest meaningful definition, and they investigated the space between CCA and RCCA security by proposing two versions of Detectable RCCA (d-RCCA) security, which are meant to ensure that replays of ciphertexts are either publicly or secretly detectable (and hence preventable).
The contributions of this paper are three-fold. First, following the work of Coretti, Maurer, and Tackmann (Asiacrypt 2013), we formalize the three benchmark applications of PKE that serve as the natural motivation for security notions, namely the construction of certain types of (possibly replay-protected) confidential channels (from an insecure and an authenticated communication channel). Second, we prove that RCCA does not achieve the confidentiality benchmark and, contrary to previous belief, that the proposed d-RCCA notions are not even relaxations of CCA-2 security. Third, we propose the natural security notions corresponding to the three benchmarks: an appropriately strengthened version of RCCA to ensure confidentiality, as well as two notions for capturing public and secret replay detectability.

##### Giving an Adversary Guarantees (Or: How to Model Designated Verifier Signatures in a Composable Framework) (ASIACRYPT 2021)

When defining a security notion, one typically specifies what dishonest parties cannot achieve.
For example, communication is confidential if a third party cannot learn anything about the messages being transmitted, and it is authentic if a third party cannot impersonate the real (honest) sender.
For certain applications, however, security crucially relies on giving dishonest parties certain capabilities.
As an example, in Designated Verifier Signature (DVS) schemes, one captures that only the designated verifier can be convinced of the authenticity of a message by guaranteeing that any dishonest party can forge signatures which look indistinguishable (to a third party) from original ones created by the sender.
However, composable frameworks cannot typically model such guarantees as they are only designed to bound what a dishonest party can do.
In this paper we show how to model such guarantees---that dishonest parties must have some capability---in the Constructive Cryptography (CC) framework (Maurer and Renner, ICS 2011).
More concretely, we give the first composable security definitions for Multi-Designated Verifier Signature (MDVS) schemes---a generalization of DVS schemes.
The ideal world is defined as the intersection of two worlds.
The first captures authenticity in the usual way.
The second provides the guarantee that a dishonest party can forge signatures.
By taking the intersection we have an ideal world with the desired properties.
We also compare our composable definitions to existing security notions for MDVS schemes from the literature.
We find that sufficiently strong security notions for MDVS schemes were introduced only recently, 23 years after such schemes were first proposed (Damgård et al., TCC 2020).
As we prove, however, these notions are still strictly stronger than necessary.

##### Adaptive Security of Multi-Party Protocols, Revisited (TCC 2021)

The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the less is guaranteed, and typically the guarantees are completely lost when the number of corrupted parties exceeds a certain corruption bound.
Early and also many recent protocols are only statically secure in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with such a strong capability is often called adaptive security, and a significant body of literature is devoted to achieving adaptive security, which is known to be a difficult problem. In particular, a main technical obstacle in this context is the so-called "commitment problem", where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to the use of cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss.
This paper provides a new, clean-slate treatment of adaptive security in MPC, exploiting the specification concept of constructive cryptography (CC). A new natural security notion, called CC-adaptive security, is proposed, which is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples separating between adaptive and static security are also insecure in our notion. Moreover, our notion avoids the commitment problem and thereby the need to use non-committing or equivocal tools.
We exemplify this by showing that the protocols by Cramer, Damgård and Nielsen (EUROCRYPT'01) for the honest majority setting, and (the variant without non-committing encryption) by Canetti, Lindell, Ostrovsky and Sahai (STOC'02) for the dishonest majority setting, achieve CC-adaptive security. The latter example is of special interest since all UC-adaptive protocols in the dishonest majority setting require some form of non-committing encryption or equivocal tools.

##### Generalized Proofs of Knowledge with Fully Dynamic Setup (TCC 2021)

Proofs of knowledge (PoK) are one of the most fundamental notions in cryptography. The appeal of this notion is that it provides a general template that an application can suitably instantiate by choosing a specific relation.
Nonetheless, several important applications have been brought to light, including proofs-of-ownership of files or two-factor authentication, which do not fit the PoK template but naturally appear to be special cases of a more general notion of proofs of knowledge or possession. One would thus expect that their security properties, in particular privacy and soundness, are simply derived as concrete instantiation of a common generalized PoK concept with well understood security semantics. Unfortunately, such a notion does not exist, resulting in a variety of tailor-made security definitions whose plausibility must be checked on a case-by-case basis.
In this work, we close this gap by providing the theoretical foundations of a generalized notion of PoK that encompasses dynamic and setup-dependent relations as well as interactive statement derivations. This novel combination enables an application to directly specify relations that depend on an assumed setup, such as a random oracle, a database or ledger, and to have statements be agreed upon interactively and dynamically between parties based on the state of the setup.
Our new notion is called *agree-and-prove* and provides clear semantics of correctness, soundness, and zero-knowledge in the above generalized scenario.
As an application, we first consider proofs-of-ownership of files for client-side file deduplication. We cast the problem and some of its prominent schemes in our agree-and-prove framework and formally analyze their security.
Leveraging our generic zero-knowledge formalization, we then devise a novel scheme that is provably the privacy-preserving analogue of the well-known Merkle-Tree based protocol. As a second application, we consider two-factor entity authentication to showcase how the agree-and-prove notion encompasses proofs of ability, such as proving the correct usage of an abstract hardware token.

##### Direct Product Hardness Amplification (TCC 2021)

We revisit one of the most fundamental hardness amplification constructions, originally proposed by Yao (FOCS 1982). We present a hardness amplification theorem for the direct product of certain games that is simpler, more general, and stronger than previously known hardness amplification theorems of the same kind. Our focus is two-fold. First, we aim to provide close-to-optimal concrete bounds, as opposed to asymptotic ones. Second, in the spirit of abstraction and reusability, our goal is to capture the essence of direct product hardness amplification as generally as possible. Furthermore, we demonstrate how our amplification theorem can be applied to obtain hardness amplification results for non-trivial interactive cryptographic games such as MAC forgery or signature forgery games.
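To illustrate the quantitative shape of such statements, the idealized direct-product bound can be sketched as follows; this is a toy analytic sketch of the general idea, not the paper's theorem (whose concrete bounds include additional slack terms and handle adversaries that correlate their attempts):

```python
def direct_product_bound(p, k):
    """Idealized direct-product bound: if no adversary wins a single game
    with probability better than p, one hopes that winning all k copies of
    the game succeeds with probability at most about p**k.
    (Concrete amplification theorems add small slack terms.)"""
    return p ** k
```

For instance, if a MAC forgery game can be won with probability at most `0.5`, the idealized bound for ten independent copies is `0.5 ** 10`, i.e. about one in a thousand.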

##### Topology-Hiding Computation for Networks with Unknown Delays (PKC 2020)

Topology-Hiding Computation (THC) allows a set of parties to securely compute a function over an incomplete network without revealing information on the network topology. Since its introduction in TCC’15 by Moran et al., the research on THC has focused on reducing the communication complexity, allowing larger graph classes, and tolerating stronger corruption types. All of these results consider a fully synchronous model with a known upper bound on the maximal delay of all communication channels. Unfortunately, in any realistic setting this bound has to be extremely large, which makes all fully synchronous protocols inefficient. In the literature on multi-party computation, this is solved by considering the fully asynchronous model. However, THC is unachievable in this model (and even hard to define), leaving even the definition of a meaningful model as an open problem. The contributions of this paper are threefold. First, we introduce a meaningful model of unknown and random communication delays for which THC is both definable and achievable. The probability distributions of the delays can be arbitrary for each channel, but one needs to make the (necessary) assumption that the delays are independent. The existing fully-synchronous THC protocols do not work in this setting and would, in particular, leak information about the topology. Second, in the model with trusted stateless hardware boxes introduced at Eurocrypt’18 by Ball et al., we present a THC protocol that works for any graph class. Third, we explore what is achievable in the standard model without trusted hardware and present a THC protocol for specific graph types (cycles and trees) secure under the DDH assumption. The speed of all protocols scales with the actual (unknown) delay times, in contrast to all previously known THC protocols whose speed is determined by the assumed upper bound on the network delay.

##### Overcoming Impossibility Results in Composable Security using Interval-Wise Guarantees (CRYPTO 2020)

Composable security definitions, at times called simulation-based definitions, provide strong security guarantees that hold in any context. However, they are also met with some skepticism due to many impossibility results; goals such as commitments and zero-knowledge that are achievable in a stand-alone sense were shown to be unachievable composably (without a setup) since provably no efficient simulator exists. In particular, in the context of adaptive security, the so-called "simulator commitment problem" arises: once a party gets corrupted, an efficient simulator is unable to be consistent with its pre-corruption outputs. A natural question is whether such impossibility results are unavoidable or only artifacts of frameworks being too restrictive.
In this work, we propose a novel type of composable security statement that evades the commitment problem. Our new type is able to express the composable guarantees of schemes that previously did not have a clear composable understanding. To this end, we leverage the concept of system specifications in the Constructive Cryptography framework, capturing the conjunction of several interval-wise guarantees, each specifying the guarantees between two events. We develop the required theory and present the corresponding new composition theorem.
We present three applications of our theory. First, we show in the context of symmetric encryption with adaptive corruption how our notion naturally captures the expected confidentiality guarantee---the messages remain confidential until either party gets corrupted---and that it can be achieved by any standard semantically secure scheme (negating the need for non-committing encryption). Second, we present a composable formalization of (so far only known to be standalone secure) commitment protocols, which is instantiable without a trusted setup like a CRS. We show it to be sufficient for being used in coin tossing over the telephone, one of the early intuitive applications of commitments. Third, we reexamine a result by Hofheinz, Matt, and Maurer [Asiacrypt'15] implying that IND-ID-CPA security is not the right notion for identity-based encryption, unmasking this claim as an unnecessary framework artifact.

##### Coupling of Random Systems (TCC 2020)

This paper makes three contributions. First, we present a simple theory of random systems. The main idea is to think of a probabilistic system as an equivalence class of distributions over deterministic systems. Second, we demonstrate how in this new theory, the optimal information-theoretic distinguishing advantage between two systems can be characterized merely in terms of the statistical distance of probability distributions, providing a more elementary understanding of the distance of systems. In particular, two systems that are epsilon-close in terms of the best distinguishing advantage can be understood as being equal with probability 1-epsilon, a property that holds statically, without even considering a distinguisher, let alone its interaction with the systems. Finally, we exploit this new characterization of the distinguishing advantage to prove that any threshold combiner is an amplifier for indistinguishability in the information-theoretic setting, generalizing and simplifying results from Maurer, Pietrzak, and Renner (CRYPTO 2007).
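The statistical distance (total variation distance) central to this characterization can be computed for finite distributions as follows; a minimal illustration in which the dictionary representation and the example distributions are ours, not the paper's:

```python
def statistical_distance(p, q):
    """Total variation distance between two distributions given as dicts
    mapping outcomes to probabilities (missing outcomes have probability 0)."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two distributions at distance 0.25: in the coupling view, they can be
# regarded as equal with probability 1 - 0.25 = 0.75.
p = {"a": 0.5, "b": 0.5}
q = {"a": 0.75, "b": 0.25}
```

Here `statistical_distance(p, q)` evaluates to `0.25`, matching the "equal with probability 1-epsilon" reading of epsilon-closeness.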

##### Synchronous Constructive Cryptography (TCC 2020)

This paper proposes a simple synchronous composable security framework as an instantiation of the Constructive Cryptography framework, aiming to capture minimally, without unnecessary artefacts, exactly what is needed to state synchronous security guarantees. The objects of study are specifications (i.e., sets) of systems, and traditional security properties like consistency and validity can naturally be understood as specifications, thus unifying composable and property-based definitions. The framework's simplicity is in contrast to current composable frameworks for synchronous computation which are built on top of an asynchronous framework (e.g. the UC framework), thus not only inheriting artefacts and complex features used to handle asynchronous communication, but adding additional overhead to capture synchronous communication.
As a second, independent contribution we demonstrate how secure (synchronous) multi-party computation protocols can be understood as constructing a computer that allows a set of parties to perform an arbitrary, on-going computation. An interesting aspect is that the instructions of the computation need not be fixed before the protocol starts but can also be determined during an on-going computation, possibly depending on previous outputs.

##### MPC with Synchronous Security and Asynchronous Responsiveness (ASIACRYPT 2020)

Two paradigms for secure MPC are synchronous and asynchronous protocols. While synchronous protocols tolerate more corruptions and allow every party to give its input, they are very slow because the speed depends on the conservatively assumed worst-case delay $\Delta$ of the network. In contrast, asynchronous protocols allow parties to obtain output as fast as the actual network allows, a property called *responsiveness*, but unavoidably have lower resilience and parties with slow network connections cannot give input.
It is natural to wonder whether it is possible to leverage synchronous MPC protocols to achieve responsiveness, hence obtaining the advantages of both paradigms: full security with responsiveness up to t corruptions, and 'extended' security (full security or security with unanimous abort) with no responsiveness up to a larger threshold T of corruptions. We settle the question by providing matching feasibility and impossibility results:
- For the case of unanimous abort as extended security, there is an MPC protocol if and only if T + 2t < n.
- For the case of full security as extended security, there is an MPC protocol if and only if T < n/2 and T + 2t < n. In particular, setting t = n/4 allows one to achieve a fully secure MPC for honest majority, which in addition benefits from having substantial responsiveness.
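The two feasibility conditions can be written down as a simple parameter check (a toy rendering of the stated bounds; the function names are ours):

```python
def unanimous_abort_feasible(n, t, T):
    """Responsiveness up to t corruptions plus security with unanimous
    abort up to T corruptions is achievable iff T + 2t < n."""
    return T + 2 * t < n

def full_security_feasible(n, t, T):
    """Responsiveness up to t corruptions plus full security up to T
    corruptions is achievable iff T < n/2 and T + 2t < n."""
    return 2 * T < n and T + 2 * t < n

# Example in the spirit of the abstract: with n = 8 parties, t = n/4 = 2
# leaves room for full security with an honest majority (T = 3).
```

For `n = 8, t = 2, T = 3` both conditions hold (`2*3 < 8` and `3 + 4 < 8`), whereas pushing `T` to `4` violates the honest-majority requirement.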

##### Non-malleable Encryption: Simpler, Shorter, Stronger (JOFC 2020)

One approach toward basing public-key encryption (PKE) schemes on weak and credible assumptions is to build “stronger” or more general schemes generically from “weaker” or more restricted ones. One particular line of work in this context was initiated by Myers and Shelat (FOCS ’09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt ’12), who provide constructions of multi-bit CCA-secure PKE from single-bit CCA-secure PKE. It is well known that encrypting each bit of a plaintext string independently is not CCA-secure—the resulting scheme is *malleable*. We therefore investigate whether this malleability can be dealt with using the conceptually simple approach of applying a suitable non-malleable code (Dziembowski et al., ICS ’10) to the plaintext and subsequently encrypting the resulting codeword bit by bit. We find that an attacker’s ability to ask multiple decryption queries requires that the underlying code be continuously non-malleable (Faust et al., TCC ’14). Since, as we show, this flavor of non-malleability can only be achieved if the code is allowed to “self-destruct,” the resulting scheme inherits this property and therefore only achieves a weaker variant of CCA security. We formalize this new notion of so-called indistinguishability under self-destruct attacks (IND-SDA) as CCA security with the restriction that the decryption oracle stops working once the attacker submits an invalid ciphertext. We first show that the above approach based on non-malleable codes yields a solution to the problem of domain extension for IND-SDA-secure PKE, provided that the underlying code is continuously non-malleable against (a reduced form of) bit-wise tampering. Then, we prove that the code of Dziembowski et al. is actually already continuously non-malleable against bit-wise tampering.
We further investigate the notion of security under self-destruct attacks and combine IND-SDA security with non-malleability under chosen-plaintext attacks (NM-CPA) to obtain the strictly stronger notion of non-malleability under self-destruct attacks (NM-SDA). We show that NM-SDA security can be obtained from basic IND-CPA security by means of a black-box construction based on the seminal work by Choi et al. (TCC ’08). Finally, we provide a domain extension technique for building a multi-bit NM-SDA scheme from a single-bit NM-SDA scheme. To achieve this goal, we define and construct a novel type of continuous non-malleable code, called *secret-state NMC*, since, as we show, standard continuous NMCs are insufficient for the natural “encode-then-encrypt-bit-by-bit” approach to work.
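The malleability of naive bit-by-bit encryption can be seen in a toy one-time-pad sketch (illustrative only; a real bit-encryption scheme would be randomized and public-key, but the per-bit independence that enables tampering is the same):

```python
import secrets

def keygen(n):
    """Toy key: n independent random bits."""
    return [secrets.randbelow(2) for _ in range(n)]

def enc_bitwise(key, msg):
    # Each plaintext bit is "encrypted" independently (toy XOR encryption).
    return [k ^ m for k, m in zip(key, msg)]

def dec_bitwise(key, ct):
    return [k ^ c for k, c in zip(key, ct)]

# Tampering: flipping ciphertext bit i flips plaintext bit i, so the
# tampered plaintext is meaningfully related to the original -- the
# scheme is malleable, which is exactly what non-malleable codes are
# meant to rule out at the encoding layer.
```

For example, flipping the first ciphertext bit of an encryption of `[1, 0, 1, 1]` decrypts to `[0, 0, 1, 1]`.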

##### Efficient Ratcheting: Almost-Optimal Guarantees for Secure Messaging (EUROCRYPT 2019)

In the era of mass surveillance and information breaches, privacy of Internet communication, and messaging in particular, is a growing concern. As secure messaging protocols are executed on the not-so-secure end-user devices, and because their sessions are long-lived, they aim to guarantee strong security even if secret states and local randomness can be exposed. The most basic security properties, including forward secrecy, can be achieved using standard techniques such as authenticated encryption. Modern protocols, such as Signal, go one step further and additionally provide the so-called backward secrecy, or healing from state exposures. These additional guarantees come at the price of a moderate efficiency loss (they require public-key primitives). On the opposite side of the security spectrum are the works by Jaeger and Stepanovs and by Poettering and Rösler, which characterize the optimal security a secure-messaging scheme can achieve. However, their proof-of-concept constructions suffer from an extreme efficiency loss compared to Signal. Moreover, this caveat seems inherent. This paper explores the area in between: our starting point is the set of basic, efficient constructions, and we then ask how far we can go towards the optimal security without losing too much efficiency. We present a construction with guarantees much stronger than those achieved by Signal, and slightly weaker than optimal, yet its efficiency is closer to that of Signal (only standard public-key cryptography is used). On a technical level, achieving optimal guarantees inherently requires key-updating public-key primitives, where the update information is allowed to be public. We consider secret update information instead. Since a state exposure temporarily breaks confidentiality, we carefully design such secretly-updatable primitives whose security degrades gracefully if the supposedly secret update information leaks.
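The "standard techniques" for forward secrecy mentioned above can be sketched as a symmetric key ratchet; a minimal sketch with SHA-256 as the derivation function (the labels and names are ours, not those of Signal's actual KDF chains):

```python
import hashlib

def ratchet_step(chain_key):
    """Advance the ratchet: derive the next chain key and a one-time
    message key from the current chain key. Because SHA-256 is one-way,
    exposing the current chain key does not reveal previously derived
    (and deleted) message keys -- forward secrecy."""
    next_ck = hashlib.sha256(b"chain" + chain_key).digest()
    msg_key = hashlib.sha256(b"msg" + chain_key).digest()
    return next_ck, msg_key

# Walk the chain: each message gets a fresh key; old keys are deleted.
ck = b"\x00" * 32
ck, k1 = ratchet_step(ck)
ck, k2 = ratchet_step(ck)
```

Note what this sketch does not give: healing after exposure (backward secrecy), which is why protocols like Signal additionally mix in fresh public-key material.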

##### Composable and Finite Computational Security of Quantum Message Transmission (TCC 2019)

Recent research in quantum cryptography has led to the development of schemes that encrypt and authenticate quantum messages with computational security. The security definitions used so far in the literature are asymptotic, game-based, and not known to be composable. We show how to define finite, composable, computational security for secure quantum message transmission. The new definitions do not involve any games or oracles, they are directly operational: a scheme is secure if it transforms an insecure channel and a shared key into an ideal secure channel from Alice to Bob, i.e., one which only allows Eve to block messages and learn their size, but not change them or read them. By modifying the ideal channel to provide Eve with more or less capabilities, one gets an array of different security notions. By design these transformations are composable, resulting in composable security. Crucially, the new definitions are finite. Security does not rely on the asymptotic hardness of a computational problem. Instead, one proves a finite reduction: if an adversary can distinguish the constructed (real) channel from the ideal one (for some fixed security parameters), then she can solve a finite instance of some computational problem. Such a finite statement is needed to make security claims about concrete implementations. We then prove that (slightly modified versions of) protocols proposed in the literature satisfy these composable definitions. And finally, we study the relations between some game-based definitions and our composable ones. In particular, we look at notions of quantum authenticated encryption and $\mathsf{QCCA2}$, and show that they suffer from the same issues as their classical counterparts: they exclude certain protocols which are arguably secure.

##### A Unified and Composable Take on Ratcheting (TCC 2019)

Ratcheting, an umbrella term for certain techniques for achieving secure messaging with strong guarantees, has spurred much interest in the cryptographic community, with several novel protocols proposed of late. Most of them are composed from several sub-protocols, often sharing similar ideas across different protocols. Thus, one could hope to reuse the sub-protocols to build new protocols achieving different security, efficiency, and usability trade-offs. This is especially desirable in view of the community’s current aim for group messaging, which has a significantly larger design space. However, the underlying ideas are usually not made explicit, but rather implicitly encoded in a (fairly complex) security game, primarily targeted at the overall security proof. This not only hinders modular protocol design, but also makes the suitability of a protocol for a particular application difficult to assess. In this work we demonstrate that ratcheting components can be modeled in a composable framework, allowing for their reuse in a modular fashion. To this end, we first propose an extension of the Constructive Cryptography framework by so-called global event histories, to allow for a clean modularization even if the component modules are not fully independent but actually subtly intertwined, as in most ratcheting protocols. Second, we model a unified, flexibly instantiable type of strong security statement for secure messaging within that framework. Third, we show that one can phrase strong guarantees for a number of sub-protocols from the existing literature in this model with only minor modifications, slightly stronger assumptions, and reasonably intuitive formalizations. When expressing existing protocols’ guarantees in a simulation-based framework, one has to address the so-called commitment problem.
We do so by reflecting the removal of access to certain oracles under specific conditions, appearing in game-based security definitions, in the real world of our composable statements. We also propose a novel non-committing protocol for settings where the number of messages a party can send before receiving a reply is bounded.

##### On Composable Security for Digital Signatures (PKC 2018)

A digital signature scheme (DSS), which consists of a key-generation, a signing, and a verification algorithm, is an invaluable tool in cryptography. The first and still most widely used security definition for a DSS, existential unforgeability under chosen-message attack, was introduced by Goldwasser, Micali, and Rivest in 1988. As DSSs serve as a building block in numerous complex cryptographic protocols, a security definition that specifies the guarantees of a DSS under composition is needed. Canetti (FOCS 2001, CSFW 2004) as well as Backes, Pfitzmann, and Waidner (CCS 2003) have described ideal functionalities for signatures in their respective composable-security frameworks. While several variants of these functionalities exist, they all share that the verification key and signature values appear explicitly. In this paper, we describe digital signature schemes from a different, more abstract perspective. Instead of modeling all aspects of a DSS in a monolithic ideal functionality, our approach characterizes a DSS as a construction of a repository for authentically reading values written by a certain party from certain assumed repositories, e.g., for transmitting verification key and signature values. This approach resolves several technical complications of previous simulation-based approaches, captures the security of signature schemes in an abstract way, and allows for modular proofs. We show that our definition is equivalent to existential unforgeability. We then model two example applications: (1) the certification of values via a signature from a specific entity, which with public keys as values is the core functionality of public-key infrastructures, and (2) the authentication of a session between a client and a server with the help of a digitally signed assertion from an identity provider. Single-sign-on mechanisms such as SAML rely on the soundness of the latter approach.

##### Information-Theoretic Secret-Key Agreement: The Asymptotically Tight Relation Between the Secret-Key Rate and the Channel Quality Ratio (TCC 2018)

Information-theoretic secret-key agreement between two parties Alice and Bob is a well-studied problem that is provably impossible in a plain model with public (authenticated) communication, but is known to be possible in a model where the parties also have access to some correlated randomness. One particular type of such correlated randomness is the so-called satellite setting, where uniform random bits (e.g., sent by a satellite) are received by the parties and the adversary Eve over inherently noisy channels. The antenna size determines the error probability, and the antenna is the adversary’s limiting resource, much as computing power is the limiting resource in traditional complexity-based security. The natural assumption about the adversary is that her antenna is at most $Q$ times larger than both Alice’s and Bob’s antenna, where, to be realistic, $Q$ can be very large. The goal of this paper is to characterize the secret-key rate per transmitted bit in terms of $Q$. Traditional results in this so-called satellite setting are phrased in terms of the error probabilities $\epsilon_A$, $\epsilon_B$, and $\epsilon_E$ of the binary symmetric channels through which the parties receive the bits and, quite surprisingly, the secret-key rate has been shown to be strictly positive unless Eve’s channel is perfect ($\epsilon_E=0$) or either Alice’s or Bob’s channel output is independent of the transmitted bit (i.e., $\epsilon_A=0.5$ or $\epsilon_B=0.5$). However, the best proven lower bound, if interpreted in terms of the channel quality ratio $Q$, is only exponentially small in $Q$. The main result of this paper is that the secret-key rate decreases asymptotically only like $1/Q^2$ if the per-bit signal energy, affecting the quality of all channels, is treated as a system parameter that can be optimized. Moreover, this bound is tight if Alice and Bob have the same antenna sizes. Motivated by considering a fixed sending signal power, in which case the per-bit energy is inversely proportional to the bit-rate, we also propose a definition of the secret-key rate per second (rather than per transmitted bit) and prove that it decreases asymptotically only like $1/Q$.

##### Topology-Hiding Computation Beyond Semi-Honest Adversaries (TCC 2018)

Topology-hiding communication protocols allow a set of parties, connected by an incomplete network with unknown communication graph, where each party only knows its neighbors, to construct a complete communication network such that the network topology remains hidden even from a powerful adversary who can corrupt parties. This communication network can then be used to perform arbitrary tasks, for example secure multi-party computation, in a topology-hiding manner. Previously proposed protocols could only tolerate passive corruption. This paper proposes protocols that can also tolerate fail-corruption (i.e., the adversary can crash any party at any point in time) and so-called semi-malicious corruption (i.e., the adversary can control a corrupted party’s randomness), without leaking more than an arbitrarily small fraction of a bit of information about the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt’18], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary, and semi-malicious corruption is not tolerated. Since leaking a small amount of information is unavoidable, as is the need to abort the protocol in case of failures, our protocols seem to achieve the best possible goal in a model with fail-corruption. Further contributions of the paper are applications of the protocol to obtain secure MPC protocols, which requires a way to bound the aggregated leakage when multiple small-leakage protocols are executed in parallel or sequentially. Moreover, while previous protocols are based on the DDH assumption, a new so-called PKCR public-key encryption scheme based on the LWE assumption is proposed, allowing one to base topology-hiding computation on LWE.
Furthermore, a protocol using fully-homomorphic encryption achieving very low round complexity is proposed.

2013

CRYPTO

2013

ASIACRYPT

2011

ASIACRYPT

2009

EPRINT

Cascade Encryption Revisited
Abstract

The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a recent paper, Bellare and Rogaway showed that in the ideal cipher model, triple encryption is significantly more secure than single and double encryption, stating the security of longer cascades as an open question.
In this paper, we propose a new lemma on the indistinguishability of systems extending Maurer's theory of random systems. In addition to being of independent interest, it allows us to compactly rephrase Bellare and Rogaway's proof strategy in this framework, thus making the argument more abstract and hence easier to follow. This also allows us to address the security of longer cascades as well as some errors in their paper. Our result implies that for blockciphers with smaller key space than message space (e.g. DES), longer cascades improve the security of the encryption up to a certain limit. This partially answers the open question mentioned above.
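As a concrete (and deliberately toy) illustration of the cascade construction discussed above, the following sketch composes several independent cipher instances; the `toy_cipher` functions are placeholders of our own and carry none of the security of a real blockcipher:

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cipher_enc(key, block):
    # Placeholder "blockcipher" (XOR with the key).  Insecure; it only
    # stands in for an ideal cipher to show how the cascade composes.
    return xor_bytes(key, block)

def toy_cipher_dec(key, block):
    return xor_bytes(key, block)  # XOR is its own inverse

def cascade_encrypt(keys, block):
    # l-fold cascade: encrypt under each independent key in turn;
    # l = 3 is triple encryption, the shortest reasonable cascade.
    for k in keys:
        block = toy_cipher_enc(k, block)
    return block

def cascade_decrypt(keys, block):
    for k in reversed(keys):
        block = toy_cipher_dec(k, block)
    return block

keys = [os.urandom(8) for _ in range(3)]  # three independent keys
pt = b"8bytemsg"
ct = cascade_encrypt(keys, pt)
assert cascade_decrypt(keys, ct) == pt
```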

2009

EPRINT

Combining Computational and Information-Theoretic Security in Multi-Party Computation
Abstract

Most protocols for multi-party computation (MPC) are secure either
against information-theoretic (IT) or against computationally bounded
adversaries. In this work, we bring together the best of both worlds:
For any robustness parameter $\rho<\frac{n}{2}$ we obtain one MPC
protocol that is simultaneously IT secure with robustness for up to
$t\leq\rho$ actively corrupted parties, IT secure with fairness (no
robustness) for up to $t<\frac{n}{2}$, and computationally secure with
agreement on abort (no fairness) for up to $t<n-\rho$. Our
construction is secure in the universal composability (UC) framework,
and achieves the bounds of Ishai et al. [CRYPTO'06], Katz [STOC'07],
and Cleve [STOC'86] on trade-offs between robustness and privacy, and
on fairness.
For example, for the special case $\rho=0$ our protocol simultaneously
achieves non-robust MPC for up to $t<n$ corrupted parties in the
computational setting (like Goldreich et al. [STOC'87]) while
providing security with fairness in the IT setting for up to
$t<\frac{n}{2}$ corrupted parties (like Rabin and Ben-Or [STOC'89]
though without robustness).
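The threshold trade-offs stated above can be encoded in a few lines; the following helper (not from the paper; the function and guarantee labels are ours) reports which of the three guarantees holds for a given number t of actively corrupted parties:

```python
def guarantees(n, rob, t):
    """Which security guarantees hold for t actively corrupted parties
    out of n, per the thresholds stated in the abstract (illustrative)."""
    assert 0 <= rob < n / 2
    g = []
    if t <= rob:
        g.append("IT security with robustness")
    if t < n / 2:
        g.append("IT security with fairness")
    if t < n - rob:
        g.append("computational security with agreement on abort")
    return g

# The special case rob = 0: non-robust computational MPC up to t < n,
# IT security with fairness up to t < n/2.
print(guarantees(7, 0, 6))  # computational guarantee only
print(guarantees(7, 0, 3))  # fairness and computational guarantees
```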

2009

CRYPTO

2008

EPRINT

Factoring is Equivalent to Generic RSA
Abstract

We show that a generic ring algorithm for breaking RSA in
$\mathbb{Z}_N$ can be converted into an algorithm for factoring the
corresponding RSA-modulus $N$. Our results imply that any attempt at
breaking RSA without factoring $N$ will be non-generic and hence
will have to manipulate the particular bit-representation of the
input in $\mathbb{Z}_N$. This provides new evidence that breaking
RSA may be equivalent to factoring the modulus.

2008

EPRINT

Breaking RSA Generically is Equivalent to Factoring
Abstract

We show that a generic ring algorithm for breaking RSA in
$\mathbb{Z}_N$ can be converted into an algorithm for factoring the
corresponding RSA-modulus $N$. Our results imply that any attempt at
breaking RSA without factoring $N$ will be non-generic and hence
will have to manipulate the particular bit-representation of the
input in $\mathbb{Z}_N$. This provides new evidence that breaking RSA may be equivalent to factoring the modulus.

2008

ASIACRYPT

2007

ASIACRYPT

2007

EPRINT

MPC vs. SFE: Perfect Security in a Unified Corruption Model
Abstract

Secure function evaluation (SFE) allows a set of players to compute
an arbitrary agreed function of their private inputs, even if an
adversary may corrupt some of the players. Secure multi-party
computation (MPC) is a generalization allowing an arbitrary
on-going (also called reactive or stateful) computation to be performed,
during which players can receive outputs and provide new inputs at
intermediate stages.
At Crypto 2006, Ishai et al. considered mixed threshold
adversaries that either passively corrupt some fixed number of players,
or, alternatively, actively corrupt some (smaller) fixed number of
players, and showed that for certain thresholds, cryptographic SFE is
possible, whereas cryptographic MPC is not.
However, this separation does not occur when one considers
*perfect* security. Actually, past work suggests that no such
separation exists, as all known general protocols for perfectly secure SFE
can also be used for MPC. Also, such a separation does not show up with
*general adversaries*, characterized by a collection of corruptible
subsets of the players, when considering passive and active corruption.
In this paper, we study the most general corruption model where the
adversary is characterized by a collection of adversary classes, each
specifying the subset of players that can be actively, passively, or
fail-corrupted, respectively, and show that in this model, perfectly
secure MPC separates from perfectly secure SFE. Furthermore, we derive
the exact conditions on the adversary structure for the existence of
perfectly secure SFE and MPC, respectively, and provide efficient protocols for both
cases.

2007

EPRINT

Black-Box Extension Fields and the Inexistence of Field-Homomorphic One-Way Permutations
Abstract

The black-box field (BBF) extraction problem is, for a given field
$\mathbb{F}$, to determine a secret field element hidden in a black-box which
allows one to add and multiply values in $\mathbb{F}$ inside the box and which reports
only equalities of elements in the box. This problem is of
cryptographic interest for two reasons. First, for $\mathbb{F}=\mathbb{F}_p$ it
corresponds to the generic reduction of the discrete logarithm problem
to the computational Diffie-Hellman problem in a group of prime order
$p$. Second, an efficient solution to the BBF problem proves the
inexistence of certain field-homomorphic encryption schemes whose
realization is an interesting open problem in algebra-based
cryptography. BBFs are also of independent interest in computational
algebra.
In the previous literature, BBFs had only been considered for the
prime field case. In this paper we consider a generalization of the
extraction problem to BBFs that are extension fields. More precisely,
we discuss the representation problem defined as follows: For given
generators $g_1,\ldots,g_d$ algebraically generating a BBF and an
additional element $x$, all hidden in a black-box, express $x$
algebraically in terms of $g_1,\ldots,g_d$. We give an efficient
algorithm for this representation problem and related problems for
fields with small characteristic (e.g. $\mathbb{F}=\mathbb{F}_{2^n}$ for some $n$). We
also consider extension fields of large characteristic and show how to
reduce the representation problem to the extraction problem for the
underlying prime field.
These results imply the inexistence of field-homomorphic (as opposed
to only group-homomorphic, like RSA) one-way permutations for fields
of small characteristic.
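To make the black-box model concrete, here is a minimal sketch of a BBF interface over a prime field, together with a brute-force extraction that is only feasible for tiny fields; the class and method names are ours, purely illustrative:

```python
class BlackBoxField:
    """Toy black-box field over F_p: elements are hidden behind integer
    handles; only addition, multiplication, and equality of handles are
    exposed to the outside (illustrative model only)."""
    def __init__(self, p, secret):
        self.p = p
        self._vals = [secret % p]  # handle 0 refers to the hidden secret

    def insert(self, known):
        # Load a known constant into the box, returning its handle.
        self._vals.append(known % self.p)
        return len(self._vals) - 1

    def add(self, i, j):
        self._vals.append((self._vals[i] + self._vals[j]) % self.p)
        return len(self._vals) - 1

    def mul(self, i, j):
        self._vals.append((self._vals[i] * self._vals[j]) % self.p)
        return len(self._vals) - 1

    def eq(self, i, j):
        return self._vals[i] == self._vals[j]

def extract(box, p):
    # Naive extraction by exhaustive search over F_p; feasible only for
    # tiny p, and NOT the efficient algorithm of the paper.
    for guess in range(p):
        if box.eq(0, box.insert(guess)):
            return guess

box = BlackBoxField(101, secret=42)
assert extract(box, 101) == 42
```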

2007

EPRINT

Domain Extension of Public Random Functions: Beyond the Birthday Barrier
Abstract

A public random function is a random function that is accessible by
all parties, including the adversary. For example, a (public) random
oracle is a public random function $\{0,1\}^{*} \to \{0,1\}^n$. The
natural problem of constructing a public random oracle from a public
random function $\{0,1\}^{m} \to \{0,1\}^n$ (for some $m > n$) was
first considered at Crypto 2005 by Coron et al.\ who proved the
security of variants of the Merkle-Damg{\aa}rd construction against
adversaries issuing up to $O(2^{n/2})$ queries to the construction and
to the underlying compression function. This bound is less than the
square root of $n2^m$, the number of random bits contained in the
underlying random function.
In this paper, we investigate domain extenders for public random
functions approaching optimal security. In particular, for all
$\epsilon \in (0,1)$ and all functions $m$ and $\ell$ (polynomial in
$n$), we provide a construction $\mathbf{C}_{\epsilon,m,\ell}(\cdot)$
which extends a public random function $\mathbf{R}: \{0,1\}^{n} \to
\{0,1\}^n$ to a function $\mathbf{C}_{\epsilon,m,\ell}(\mathbf{R}):
\{0,1\}^{m(n)} \to \{0,1\}^{\ell(n)}$ with time-complexity polynomial
in $n$ and $1/\epsilon$ and which is secure against adversaries which
make up to $\Theta(2^{n(1-\epsilon)})$ queries. A central tool for
achieving high security is the use of special classes of unbalanced bipartite
expander graphs with small degree. The achievability of practical (as
opposed to complexity-theoretic) efficiency is proved by a
non-constructive existence proof.
Combined with the iterated constructions of Coron et al., our result
leads to the first iterated construction of a hash
function $\{0,1\}^{*} \to \{0,1\}^n$ from a component
function $\{0,1\}^{n} \to \{0,1\}^n$ that withstands all recently
proposed generic attacks against iterated hash functions, like Joux's
multi-collision attack, Kelsey and Schneier's second-preimage attack,
and Kelsey and Kohno's herding attacks.
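The iteration pattern underlying such domain extenders can be sketched as follows. This toy version only shows Merkle-Damgård-style absorption of message blocks into an n-bit state; it makes no claim to the beyond-birthday security achieved in the paper, and the hash-based stand-in for the public random function is an assumption of the sketch:

```python
import hashlib

N = 16  # state/block size in bytes (toy parameter standing in for n bits)

def public_random_function(x):
    # Stand-in for a public random function {0,1}^n -> {0,1}^n,
    # modeled with a truncated hash (an assumption of this sketch).
    return hashlib.sha256(b"F" + x).digest()[:N]

def pad(msg):
    # Standard 0x80 / zero padding with the message length appended,
    # so the padded encoding is injective.
    length = len(msg)
    msg = msg + b"\x80"
    msg = msg + b"\x00" * (-(len(msg) + 8) % N)
    return msg + length.to_bytes(8, "big")

def extend_domain(msg):
    """Toy iterated extender {0,1}^* -> {0,1}^n: absorb each N-byte
    block by XOR into the state, then apply the component function."""
    state = bytes(N)
    data = pad(msg)
    for i in range(0, len(data), N):
        block = data[i:i + N]
        state = public_random_function(bytes(a ^ b for a, b in zip(state, block)))
    return state
```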

2006

EPRINT

A Fast and Key-Efficient Reduction of Chosen-Ciphertext to Known-Plaintext Security
Abstract

Motivated by the quest for reducing assumptions in security proofs in
cryptography, this paper is concerned with designing efficient
symmetric encryption and authentication schemes based on any weak pseudorandom function (PRF), which can be implemented much more
efficiently than standard PRFs. Damgård and Nielsen (CRYPTO '02) have
shown how to construct an efficient symmetric encryption scheme based
on any weak PRF that is provably secure against chosen-plaintext
attacks. The main ingredient is a range-extension construction for
weak PRFs. By using well-known techniques, they also showed how their
scheme can be made secure against the stronger chosen-ciphertext attacks.
The results of our paper are three-fold. First, we give a range-extension construction for weak PRFs that is optimal within a
large and natural class of reductions (in particular, all reductions known today).
Second, we propose a construction of a regular PRF from any weak PRF.
Third, these two results imply a (for long messages) much more
efficient chosen-ciphertext secure encryption scheme than the one
proposed by Damgård and Nielsen. The results also give answers to
open questions posed by Naor and Reingold (CRYPTO '98) and by
Damgård and Nielsen.
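To see why weak-PRF security can suffice for encryption, here is the textbook one-block scheme in which the keyed function is only ever evaluated on fresh random inputs; this is background intuition, not the range-extended construction of the paper, and the hash-based stand-in for the weak PRF is ours:

```python
import hashlib
import os

def weak_prf(key, x):
    # Stand-in keyed function; a real instantiation would be a weak PRF,
    # i.e. one that looks random only on random inputs.
    return hashlib.sha256(key + x).digest()[:16]

def encrypt(key, msg16):
    """Textbook weak-PRF encryption of one 16-byte block: pick a fresh
    random input r and use F(k, r) as a one-time pad.  Since r is
    random, weak-PRF security of F is enough for this scheme."""
    r = os.urandom(16)
    pad = weak_prf(key, r)
    return r, bytes(a ^ b for a, b in zip(pad, msg16))

def decrypt(key, ct):
    r, c = ct
    pad = weak_prf(key, r)
    return bytes(a ^ b for a, b in zip(pad, c))

k = os.urandom(16)
m = b"sixteen byte msg"
assert decrypt(k, encrypt(k, m)) == m
```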

2006

EPRINT

Luby-Rackoff Ciphers from Weak Round Functions?
Abstract

The Feistel-network is a popular structure underlying many block-ciphers where the cipher is constructed from many simpler rounds, each defined by some function which is derived from the secret key.
Luby and Rackoff showed that the three-round Feistel-network -- each round instantiated with a pseudorandom function secure against adaptive chosen plaintext attacks (CPA) -- is a CPA secure pseudorandom permutation, thus giving some confidence in the soundness of using a Feistel-network to design block-ciphers.
But the round functions used in actual block-ciphers are -- for efficiency reasons -- far from being pseudorandom. We investigate the security of the Feistel-network against CPA distinguishers when the only security guarantee we have for the round functions is that they are secure against non-adaptive chosen plaintext attacks (NCPA). We show that in the information-theoretic setting, four rounds with NCPA secure round functions are sufficient (and necessary) to get a CPA secure permutation. Unfortunately, this result does not translate into the more interesting pseudorandom setting. In fact, under the so-called Inverse Decisional Diffie-Hellman assumption the Feistel-network with four rounds, each instantiated with an NCPA secure pseudorandom function, is in general not a CPA secure pseudorandom permutation.
We also consider other relaxations of the Luby-Rackoff construction and prove their (in)security against different classes of attacks.
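The Feistel structure itself is easy to state in code. The sketch below uses a hash-derived round function as a stand-in (our choice, not from the paper) and shows why the network is invertible for any round function whatsoever:

```python
import hashlib

def round_fn(key, half):
    # Round function derived from the key; stand-in for a (weak) PRF.
    return hashlib.sha256(key + half).digest()[:len(half)]

def feistel_encrypt(keys, block):
    """r-round Feistel network on an even-length block (r = len(keys)).
    With r = 3 or 4 this is the Luby-Rackoff shape discussed above."""
    n = len(block) // 2
    left, right = block[:n], block[n:]
    for k in keys:
        # (L, R) -> (R, L xor F(k, R))
        left, right = right, bytes(a ^ b for a, b in zip(left, round_fn(k, right)))
    return left + right

def feistel_decrypt(keys, block):
    # Run the rounds backwards; each step undoes one encryption round,
    # which works regardless of whether round_fn is invertible.
    n = len(block) // 2
    left, right = block[:n], block[n:]
    for k in reversed(keys):
        left, right = bytes(a ^ b for a, b in zip(right, round_fn(k, left))), left
    return left + right

keys = [b"k1", b"k2", b"k3", b"k4"]
pt = b"16-byte message!"
assert feistel_decrypt(keys, feistel_encrypt(keys, pt)) == pt
```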

2006

EPRINT

Indistinguishability Amplification
Abstract

A random system is the abstraction of the input-output behavior of
any kind of discrete system, in particular cryptographic systems. Many
aspects of cryptographic security analyses and proofs can be seen as
the proof that a certain random system (e.g. a block cipher) is
indistinguishable from an ideal system (e.g. a random permutation),
for different types of distinguishers.
This paper presents a new generic approach to proving upper
bounds on the distinguishing advantage of a combined system,
assuming upper bounds of various types on the component systems. For
a general type of combination operation of systems (including the
combination of functions or the cascade of permutations), we prove
two amplification theorems.
The first is a direct-product theorem, similar in spirit to the
XOR-Lemma: The distinguishing advantage (or security) of the
combination of two (possibly stateful) systems is at most twice the product
of the individual distinguishing advantages, which is optimal.
The second theorem states that the combination of systems is
secure against some strong class of distinguishers, assuming
only that the components are secure against some weaker
class of attacks. As a corollary we obtain tight bounds on the
adaptive security of the cascade and parallel composition of
non-adaptively (or only random-query) secure component systems.
A key technical tool of the paper is
to show a tight two-way correspondence, previously only
known to hold in one direction, between the distinguishing
advantage of two systems and the probability of provoking an
appropriately defined event on one of the systems.

2005

PKC

2003

EPRINT

Indifferentiability, Impossibility Results on Reductions, and Applications to the Random Oracle Methodology
Abstract

The goals of this paper are three-fold. First we introduce and
motivate a generalization of the fundamental concept of the
indistinguishability of two systems, called indifferentiability.
This immediately leads to a generalization of the related notion of
reducibility of one system to another.
Second, we prove that indifferentiability is the necessary and
sufficient condition on two systems S and T such that the security
of any cryptosystem using T as a component is not affected when T is
substituted by S. In contrast to indistinguishability,
indifferentiability is applicable in settings where a possible
adversary is assumed to have access to additional information about
the internal state of the involved systems, for instance the public
parameter selecting a member from a family of hash functions.
Third, we state an easily verifiable criterion for a system U not to
be reducible (according to our generalized definition) to another
system V and, as an application, prove that a random oracle is not
reducible to a weaker primitive, called asynchronous beacon, and
also that an asynchronous beacon is not reducible to a finite-length
random string. Each of these irreducibility results alone implies
the main theorem of Canetti, Goldreich and Halevi stating that there
exist cryptosystems that are secure in the random oracle model but
for which replacing the random oracle by any implementation leads to
an insecure cryptosystem.

2002

EUROCRYPT

2001

EPRINT

Robustness for Free in Unconditional Multi-Party Computation
Abstract

We present a very efficient multi-party computation protocol
unconditionally secure against an active adversary. The security is
maximal, i.e., active corruption of up to $t<n/3$ of the $n$ players is
tolerated. The communication complexity for securely evaluating a circuit
with $m$ multiplication gates over a finite field is $\mathcal{O}(mn^2)$ field
elements, including the communication required for simulating broadcast.
This corresponds to the complexity of the best known protocols for the
passive model, where the corrupted players are guaranteed not to deviate
from the protocol. Even in this model, it seems to be unavoidable that
for every multiplication gate every player must send a value to every
other player, and hence the complexity of our protocol may well be
optimal. The constant overhead factor for robustness is small and the
protocol is practical.

2000

EPRINT

General Secure Multi-Party Computation from any Linear Secret Sharing Scheme
Abstract

We show that verifiable secret sharing (VSS) and secure multi-party
computation (MPC) among a set of $n$ players can efficiently be based
on {\em any} linear secret sharing scheme (LSSS) for the players,
provided that the access structure of the LSSS allows MPC or VSS at
all. Because an LSSS neither guarantees reconstructability when some
shares are false, nor verifiability of a shared value, nor allows for
the multiplication of shared values, an LSSS is an apparently much weaker
primitive than VSS or MPC.
Our approach to secure MPC is generic and applies to both the
information-theoretic and the cryptographic setting. The
construction is based on 1) a formalization of the special
multiplicative property of an LSSS that is needed to perform a
multiplication on shared values, 2) an efficient generic construction
to obtain from any LSSS a multiplicative LSSS for the same access
structure, and 3) an efficient generic construction to build
verifiability into every LSSS (always assuming that the adversary
structure allows for MPC or VSS at all).
The protocols are efficient. In contrast to all previous
information-theoretically secure protocols, the field size is not
restricted (e.g., to be greater than $n$). Moreover, we exhibit
adversary structures for which our protocols are polynomial in $n$
while all previous approaches to MPC for non-threshold adversaries
provably have super-polynomial complexity.
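Shamir's scheme is the standard example of an LSSS with the multiplicative property formalized in point 1. The following toy sketch (our simplification, not the paper's generic construction) shares, reconstructs, and multiplies shared values over a prime field:

```python
import random

P = 2_147_483_647  # a prime; toy field modulus

def share(secret, t, n):
    """Shamir sharing, one example of a linear secret sharing scheme:
    pick a random degree-(t-1) polynomial with constant term `secret`;
    share i is its evaluation at x = i."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field F_P.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# The multiplicative property: multiplying shares pointwise yields a
# degree-2(t-1) sharing of the product, reconstructible from enough shares.
a_shares = share(1000, t=3, n=5)
b_shares = share(2000, t=3, n=5)
prod_shares = [(x, sa * sb % P) for (x, sa), (_, sb) in zip(a_shares, b_shares)]
assert reconstruct(a_shares[:3]) == 1000
assert reconstruct(prod_shares) == 1000 * 2000 % P
```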

1998

CRYPTO

1997

EUROCRYPT

1994

CRYPTO

1992

EUROCRYPT

#### Program Committees

- TCC 2008
- PKC 2003
- Asiacrypt 2000
- Asiacrypt 1999
- Crypto 1998
- Eurocrypt 1996 (Program chair)
- Eurocrypt 1995
- Eurocrypt 1994
- Crypto 1992
- Crypto 1991

#### Coauthors

- Divesh Aggarwal (4)
- Joël Alwen (3)
- Christian Badertscher (8)
- Fabio Banfi (1)
- Endre Bangerter (1)
- Zuzana Beerliová-Trubíniová (2)
- Daniel Bleichenbacher (2)
- Christian Cachin (3)
- Jan Camenisch (1)
- Jeffrey Considine (1)
- Sandro Coretti (4)
- Ronald Cramer (2)
- Ivan Damgård (2)
- Gregory Demay (2)
- Grégory Demay (1)
- Yevgeniy Dodis (1)
- Stefan Dziembowski (2)
- Serge Fehr (1)
- Matthias Fitzi (8)
- Matthew K. Franklin (1)
- Juan A. Garay (3)
- Peter Gaži (5)
- Nicolas Gisin (1)
- Sarah Hauser (1)
- Martin Hirt (16)
- Dennis Hofheinz (2)
- Clemens Holenstein (2)
- Thomas Holenstein (1)
- Daniel Jost (5)
- Jonathan Katz (2)
- Reto Kohlas (1)
- Markulf Kohlweiss (1)
- Kenji Koyama (1)
- David Lanzenberger (2)
- Rio LaVigne (2)
- Leonid A. Levin (1)
- Chen-Da Liu-Zhang (5)
- Julian Loss (1)
- Christoph Lucas (2)
- James L. Massey (4)
- Christian Matt (5)
- David Metcalf (1)
- Tal Moran (3)
- Marta Mularczyk (4)
- Tatsuaki Okamoto (1)
- Cristina Onete (1)
- Rafail Ostrovsky (2)
- Yvonne Anne Oswald (2)
- Arpita Patra (2)
- Krzysztof Pietrzak (6)
- Christopher Portmann (3)
- Bartosz Przydatek (1)
- Dominik Raub (3)
- Pavel Raykov (3)
- Renato Renner (5)
- João Ribeiro (1)
- Guilherme Rito (2)
- Phillip Rogaway (2)
- Oliver von Rotz (1)
- Andreas Rüedlinger (1)
- Johan Sjödin (5)
- Björn Tackmann (12)
- Stefano Tessaro (5)
- Daniel Tschudi (6)
- Scott A. Vanstone (1)
- Daniele Venturi (4)
- Muzhong Wang (1)
- Stefan Wolf (5)
- Yacov Yacobi (2)
- Jiamin Zhu (1)
- Vassilis Zikas (9)