CryptoDB
Mingyuan Wang
Publications
CRYPTO 2024: Threshold Encryption with Silent Setup
Abstract
We build a concretely efficient threshold encryption scheme where the joint public key of a set of parties is computed as a \emph{deterministic} function of their locally computed public keys, enabling a \emph{silent} setup phase. By eliminating interaction from the setup phase, our scheme immediately enjoys several highly desirable features such as asynchronous setup, multiverse support, and dynamic threshold.
Prior to our work, the only known constructions of threshold encryption with silent setup relied on heavy cryptographic machinery such as indistinguishability obfuscation or witness encryption for all of $\mathsf{NP}$. Our core technical innovation lies in building a special-purpose witness encryption scheme for the statement ``at least $t$ parties have signed a given message''. Our construction relies on pairings and is proved secure in the Generic Group Model.
Notably, our construction, restricted to the special case of threshold $t=1$, gives an alternative pairing-based construction of (flexible) distributed broadcast encryption, which has been the central focus of several recent works.
We implement and evaluate our scheme to demonstrate its concrete efficiency. Both encryption and partial decryption are constant time, taking $<7\,$ms and $<1\,$ms, respectively. For a committee of $1024$ parties, the aggregation of partial decryptions takes $<200\,$ms, when all parties provide partial decryptions. The size of each ciphertext is $\approx 8\times$ larger than an ElGamal ciphertext.
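The following Python sketch is a toy illustration of the silent-setup idea for the special case of an n-out-of-n committee, using plain ElGamal over a tiny group; it is not the paper's pairing-based construction, which supports arbitrary thresholds, but it shows how a joint public key can be a deterministic, non-interactive function of locally generated keys.

# Toy n-out-of-n "silent setup" threshold ElGamal over a tiny prime-order group.
# Illustration only: the paper's scheme is pairing-based, supports any threshold t,
# and is analyzed in the Generic Group Model; this sketch only captures the
# non-interactive (silent) setup flavor for the special case t = n.
import secrets

p = 2039              # toy safe prime (p = 2q + 1); far too small for real security
q = 1019              # prime order of the subgroup of squares
g = 4                 # generator of the order-q subgroup

def keygen():
    """Run locally by each party; no interaction with anyone else."""
    x = secrets.randbelow(q)
    return x, pow(g, x, p)

def aggregate_pk(pks):
    """The joint public key is a deterministic function of the local public keys."""
    apk = 1
    for pk in pks:
        apk = apk * pk % p
    return apk

def encrypt(apk, m):
    r = secrets.randbelow(q)
    return pow(g, r, p), m * pow(apk, r, p) % p        # ciphertext (c1, c2)

def partial_decrypt(x, c1):
    return pow(c1, x, p)

def combine(c2, partials):
    s = 1
    for d in partials:
        s = s * d % p
    return c2 * pow(s, p - 2, p) % p                   # divide out apk^r

# Usage: three parties, silent (non-interactive) setup.
keys = [keygen() for _ in range(3)]
apk = aggregate_pk([pk for _, pk in keys])
msg = pow(g, 42, p)                                    # message encoded as a group element
c1, c2 = encrypt(apk, msg)
assert combine(c2, [partial_decrypt(x, c1) for x, _ in keys]) == msg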
CRYPTO 2024: Scalable Multiparty Computation from Non-linear Secret Sharing
Abstract
A long line of work has investigated the design of scalable secure multiparty computation (MPC) protocols with computational and communication complexity independent of the number of parties (beyond any dependence on the circuit size). We present the first unconditionally secure MPC protocols for arithmetic circuits over *large fields* with total computation $O(|C|\log|F|)$, where $|C|$ and $|F|$ denote the circuit and field size, respectively.
Prior work could achieve similar complexity only in *communication*, or required highly structured circuits or expensive circuit transformations. To obtain our results, we depart from the prior approach of share packing in linear secret-sharing schemes; instead, we use an ``unpacking'' approach via *non-linear* secret sharing.
CRYPTO 2024: How to Prove Statements Obliviously?
Abstract
Cryptographic applications often require proving statements about hidden secrets satisfying certain circuit relations. Moreover, these proofs must often be generated obliviously, i.e., without knowledge of the secret. This work presents a new technique, FRI on hidden values, for efficiently proving such statements. This technique enables a polynomial commitment scheme for values hidden inside linearly homomorphic primitives, such as linearly homomorphic encryption, linearly homomorphic commitments, group exponentiation, and fully homomorphic encryption. Building on this technique, we obtain the following results.
1. An efficient SNARK for proving the honest evaluation of FHE ciphertexts. This allows for an efficiently verifiable private delegation of computation, where the client only needs to perform logarithmically many FHE computations to verify the correctness of the computation.
2. An efficient approach for privately delegating the computation of zkSNARKs to a single untrusted server, without requiring the server to make any non-black-box use of cryptography. All prior works require multiple servers and the assumption that some subset of the servers is honest.
3. A weighted threshold signature scheme that does not require any setup. In particular, parties may sample their own keys independently, and no distributed key generation (DKG) protocol is needed. Furthermore, the efficiency of our scheme is completely independent of the weights.
Prior to this work, there were no known black-box feasibility results for any of these applications.
We also investigate the use of this approach in the context of public proof aggregation. These are only a few representative applications that we explore in this paper. We expect our techniques to be widely applicable in many other scenarios.
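As background for the technique's name, the sketch below illustrates, in Python over a toy prime field and on values in the clear (rather than hidden inside a homomorphic primitive, as in the paper), the basic FRI folding identity: writing $f(X)=f_e(X^2)+X\,f_o(X^2)$ and checking point-by-point that the folded evaluations $f_e(y)+\beta f_o(y)$ are determined by $f(x)$ and $f(-x)$.

# Toy illustration of one FRI folding step over a small prime field (values in the
# clear; the paper's technique applies such folds to values hidden inside linearly
# homomorphic primitives). All parameters here are for illustration only.
import random

P = 97  # toy prime field F_97 (not cryptographically meaningful)

def poly_eval(coeffs, x):
    """Evaluate sum_i coeffs[i] * x^i mod P by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Random polynomial f of degree < 8.
f = [random.randrange(P) for _ in range(8)]
f_even = f[0::2]            # coefficients of f_e, where f(X) = f_e(X^2) + X*f_o(X^2)
f_odd = f[1::2]             # coefficients of f_o

beta = random.randrange(P)  # verifier's folding challenge
g = [(e + beta * o) % P for e, o in zip(f_even, f_odd)]   # folded polynomial, degree < 4

inv2 = pow(2, P - 2, P)
for x in range(1, P):       # check the fold point-by-point over the whole field
    fx, fmx = poly_eval(f, x), poly_eval(f, P - x)
    fe = (fx + fmx) * inv2 % P                         # f_e(x^2) = (f(x) + f(-x)) / 2
    fo = (fx - fmx) * inv2 % P * pow(x, P - 2, P) % P  # f_o(x^2) = (f(x) - f(-x)) / (2x)
    assert poly_eval(g, x * x % P) == (fe + beta * fo) % P
print("FRI fold verified on all of F_97")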
TCC 2024: On the Black-Box Complexity of Private-Key Inner-Product Functional Encryption
Abstract
We initiate the study of the black-box complexity of private-key functional encryption (FE). Of central importance in the private-key setting is the inner-product functionality, which is currently only known from assumptions that imply public-key encryption, such as Decisional Diffie-Hellman or Learning-with-Errors. As our main result, we rule out black-box constructions of private-key inner-product FE from random oracles. This implies a black-box separation of private-key inner-product FE from all symmetric-key primitives implied by random oracles (e.g., symmetric-key encryption and collision-resistant hash functions).
Proving lower bounds for private-key functional encryption schemes introduces challenges that were absent in prior works. In particular, the combinatorial techniques developed by prior works for proving black-box lower bounds are only useful in the public-key and predicate encryption settings and fail for the private-key FE case. Our work develops novel combinatorial techniques based on Fourier analysis to overcome these barriers. We expect these techniques to be widely useful in future research in this area.
CRYPTO 2023: Cryptography with Weights: MPC, Encryption and Signatures
Abstract
The security of many powerful cryptographic systems such as secure multiparty computation, threshold encryption, and threshold signatures rests on trust assumptions about the parties. The de-facto model treats all parties equally and requires that a certain fraction of the parties are honest. While this paradigm of one-person-one-vote has been very successful over the years, current and emerging practical use cases suggest that it is outdated.
In this work, we consider {\em weighted} cryptosystems where every party is assigned a certain weight and the trust assumption is that a certain fraction of the total weight is honest. This setting can be translated to the standard setting (where each party has a unit weight) via virtualization. However, this method is quite expensive, incurring a multiplicative overhead in the weight.
We present new weighted cryptosystems with significantly better efficiency: our proposed schemes incur only an {\em additive} overhead in weights.
\begin{itemize}
\item We first present a weighted ramp secret-sharing scheme (WRSS) where the size of a secret share is $O(w)$ (where $w$ corresponds to the weight). In comparison, Shamir's secret sharing with virtualization requires secret shares of size $w\cdot\lambda$, where $\lambda=\log|F|$ is the security parameter.
\item Next, we use our WRSS to construct weighted versions of (semi-honest) secure multiparty computation (MPC), threshold encryption, and threshold signatures. All these schemes inherit the efficiency of our WRSS and incur only an additive overhead in weights.
\end{itemize}
Our WRSS builds on the Chinese remainder theorem (CRT)-based secret-sharing scheme. Interestingly, this secret-sharing scheme is {\em non-linear} and only achieves statistical privacy. These distinct features introduce several technical hurdles in applications to MPC and threshold cryptosystems. We resolve these challenges by developing several new ideas.
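For intuition, here is a minimal Asmuth-Bloom-style Python sketch of CRT-based secret sharing with hard-coded toy moduli; it conveys the non-linear, ramp-style reconstruction but does not attempt the weighted modulus selection or the statistical-privacy analysis of the WRSS.

# Toy Asmuth-Bloom (CRT-based) secret sharing: non-linear, ramp-style reconstruction.
# Toy parameters only; they are NOT chosen to provide the privacy guarantees of the WRSS.
import secrets
from math import prod

p0 = 257                                  # the secret lives in Z_{p0}
moduli = [1009, 1013, 1019, 1021, 1031]   # pairwise-coprime share moduli, one per party
t = 3                                     # any t shares suffice to reconstruct

def share(secret):
    # Lift the secret to y = secret + a * p0, keeping y below the product of the t smallest moduli.
    bound = prod(sorted(moduli)[:t])
    a = secrets.randbelow((bound - secret) // p0)
    y = secret + a * p0
    return [(m, y % m) for m in moduli]   # party i holds (m_i, y mod m_i)

def reconstruct(subset):
    # Chinese remaindering over any t shares recovers y; the secret is then y mod p0.
    M = prod(m for m, _ in subset)
    y = 0
    for m, r in subset:
        Mi = M // m
        y = (y + r * Mi * pow(Mi, m - 2, m)) % M   # m is prime, so Fermat gives Mi^{-1} mod m
    return y % p0

s = 123
shares = share(s)
assert reconstruct(shares[:t]) == s       # parties 1..3
assert reconstruct(shares[2:]) == s       # parties 3..5

Roughly speaking, assigning party $i$ a modulus whose bit-length is proportional to its weight is how a weighted ramp structure arises from this template, giving shares of size proportional to the weight.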
CRYPTO 2023: Reusable Secure Computation in the Plain Model
Abstract
Consider the standard setting of two-party computation where the sender has a secret function $f$, the receiver has a secret input $x$, and the output $f(x)$ is delivered to the receiver at the end of the protocol. Let us consider the unidirectional message model, where only one party speaks in each round. In this setting, Katz and Ostrovsky (Crypto 2004) showed that at least four rounds of interaction between the parties are needed in the plain model (i.e., no trusted setup) if the simulator uses the adversary in a black-box way (a.k.a. black-box simulation). Suppose the sender and the receiver would like to run multiple sequential iterations of the secure computation protocol on possibly different inputs. For each of these iterations, do the parties need to start the protocol from scratch and exchange four messages?
In this work, we explore the possibility of \textit{amortizing} the round complexity or in other words, \textit{reusing} a certain number of rounds of the secure computation protocol in the plain model. We obtain the following results.
\begin{itemize}
\item Under standard cryptographic assumptions, we construct a four-round two-party computation protocol where (i) the first three rounds of the protocol could be reused an unbounded number of times if the receiver input remains the same and only the sender input changes, and (ii) the first two rounds of the protocol could be reused an unbounded number of times if the receiver input needs to change as well. In other words, the sender sends a single additional message if only its input changes, and in the other case, we need one message each from the receiver and the sender. The number of additional messages needed in each of the above two modes is optimal and, additionally, our protocol allows arbitrary interleaving of these two modes.
\item We also extend these results to the multiparty setting (in the simultaneous message exchange model) and give round-optimal protocols such that (i) the first two rounds could be reused an unbounded number of times if the inputs of the parties need to change and (ii) the first three rounds could be reused an unbounded number of times if the inputs remain the same but the functionality to be computed changes. As in the two-party setting, we allow arbitrary interleaving of the above two modes of operation.
\end{itemize}
TCC 2022: IBE with Incompressible Master Secret and Small Identity Secrets
Abstract
Side-stepping the protection provided by cryptography, exfiltration attacks are becoming a considerable real-world threat. With the goal of mitigating the exfiltration of cryptographic keys, big-key cryptosystems have been developed over the past few years. These systems come with very large secret keys, which are thus hard to exfiltrate. Typically, in such systems, the setup time must be large as it generates the large secret key. However, subsequently, the encryption and decryption operations, which must be performed repeatedly, are required to be efficient. Specifically, the encryption uses only a small public key and the decryption only accesses small ciphertext-dependent parts of the full secret key. Nonetheless, these schemes require the decryptor to have access to the entire secret key. Thus, using such big-key cryptosystems necessitates that users carry around large secret keys on their devices, which can be a hassle and in some cases might also render exfiltration easy.
With the goal of removing this problem, in this work, we initiate the study of big-key identity-based encryption (bk-IBE). In such a system, the master secret key is allowed to be large, but we require that the identity-based secret keys are short. This allows users to treat the short identity-based keys as ephemeral secret keys that can be carried around more easily and that allow decryption of ciphertexts matching a particular identity, e.g., messages that were encrypted on a particular date. In particular:
-We build a new definitional framework for bk-IBE capturing a range of applications. In the case when the exfiltration is small, our definition promises stronger security --- namely, an adversary can break semantic security for only a few identities, proportional to the amount of leakage it gets. In contrast, in the catastrophic case where a large fraction of the master secret key has been exfiltrated, we can still resort to a guarantee that the ciphertexts generated for a randomly chosen identity (or an identity with enough entropy) remain protected. We demonstrate how this framework captures the best possible security guarantees.
-We show the first construction of such a bk-IBE offering strong security properties. Our construction is based on standard assumptions on groups with bilinear pairings and brings together techniques from seemingly different contexts such as leakage resilient cryptography, reusable two-round MPC, and laconic oblivious transfer. We expect our techniques to be of independent interest.
TCC 2022: Leakage-resilient Linear Secret-sharing against arbitrary Bounded-size Leakage Family
Abstract
Motivated by leakage-resilient secure computation of circuits with addition and multiplication gates, this work studies the leakage-resilience of linear secret-sharing schemes with a small reconstruction threshold against any {\em bounded-size} family of joint leakage attacks, i.e., the leakage function can leak {\em global} information from all secret shares.
We first prove that, with high probability, the Massey secret-sharing scheme corresponding to a random linear code over a finite field $F$ is leakage-resilient against any $\ell$-bit joint leakage family of size at most $|F|^{k-2.01}/8^\ell$, where $k$ is the reconstruction threshold. Our result (1) bypasses the bottleneck due to the existing Fourier-analytic approach, (2) enables secure multiplication of secrets, and (3) is near-optimal. We use combinatorial and second-moment techniques to prove the result.
Next, we show that the Shamir secret-sharing scheme over a prime-order field $F$ with randomly chosen evaluation places and with threshold $k$ is leakage-resilient to any $\ell$-bit joint leakage family of size at most $|F|^{2k-n-2.01}/(k!\cdot 8^\ell)$ with high probability. We prove this result by marrying our proof techniques for the first result with the existing Fourier-analytic approach. Moreover, it is unlikely that one can extend this result beyond $k/n\leq 0.5$ due to a technical hurdle of the Fourier-analytic approach.
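For concreteness, the following Python sketch (tiny, illustrative parameters that do not meet the bounds above) implements the Massey secret-sharing scheme from a random linear code: the secret is placed in coordinate 0 of a random codeword of a random $[n+1,k]$ code, the remaining $n$ coordinates are the shares, and a joint leakage function may compute a few bits from all shares at once.

# Toy Massey secret sharing from a random linear code over F_p (illustration only;
# parameters are tiny and not chosen to match the leakage-resilience bounds above).
import random

p = 2**31 - 1     # a prime field; any large prime works for this sketch
k, n = 3, 5       # reconstruction threshold k, number of parties n (code length n+1)

G = [[random.randrange(p) for _ in range(n + 1)] for _ in range(k)]  # random k x (n+1) generator

def codeword(m):
    """Row vector m times G over F_p."""
    return [sum(mi * G[i][j] for i, mi in enumerate(m)) % p for j in range(n + 1)]

def share(secret):
    # Sample a random message m conditioned on the codeword's 0-th coordinate being the secret.
    j = next(i for i in range(k) if G[i][0] % p != 0)   # exists except with negligible probability
    m = [random.randrange(p) for _ in range(k)]
    m[j] = (secret - sum(m[i] * G[i][0] for i in range(k) if i != j)) * pow(G[j][0], p - 2, p) % p
    return codeword(m)[1:]                               # party i receives coordinate i+1

def reconstruct(indices, shares):
    # Solve the k x k linear system given by k share coordinates, then re-evaluate coordinate 0.
    # (For a random code, k chosen columns are invertible except with probability about k/p.)
    A = [[G[r][i + 1] for r in range(k)] + [shares[t]] for t, i in enumerate(indices)]
    for col in range(k):                                 # Gauss-Jordan elimination mod p
        piv = next(r for r in range(col, k) if A[r][col] % p != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)
        A[col] = [x * inv % p for x in A[col]]
        for r in range(k):
            if r != col and A[r][col] % p:
                A[r] = [(A[r][c] - A[r][col] * A[col][c]) % p for c in range(k + 1)]
    m = [A[r][k] for r in range(k)]
    return codeword(m)[0]

secret = 123456789
sh = share(secret)
assert reconstruct([1, 3, 4], [sh[1], sh[3], sh[4]]) == secret

# A joint leakage function in the model above sees a few bits computed from *all* shares at once,
# e.g., the parity of the sum of the shares:
joint_leakage_bit = sum(sh) % 2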
CRYPTO 2021: Computational Hardness of Optimal Fair Computation: Beyond Minicrypt
Abstract
Secure multi-party computation allows mutually distrusting parties to compute securely over their private data. However, guaranteeing output delivery to honest parties when the adversarial parties may abort the protocol has been a challenging objective. As a representative task, this work considers two-party coin-tossing protocols with guaranteed output delivery, a.k.a., fair coin-tossing.
In the information-theoretic plain model, as in two-party zero-sum games, one of the parties can force an output with certainty. In the commitment-hybrid, any $r$-message coin-tossing protocol is ${1/\sqrt r}$-unfair, i.e., the adversary can change the honest party's output distribution by $1/\sqrt r$ in the statistical distance. Moran, Naor, and Segev (TCC--2009) constructed the first $1/r$-unfair protocol in the oblivious transfer-hybrid. No further security improvement is possible because Cleve (STOC--1986) proved that $1/r$-unfairness is unavoidable. Therefore, Moran, Naor, and Segev's coin-tossing protocol is optimal. However, is oblivious transfer necessary for optimal fair coin-tossing?
Maji and Wang (CRYPTO--2020) proved that any coin-tossing protocol using one-way functions in a black-box manner is at least $1/\sqrt r$-unfair. That is, optimal fair coin-tossing is impossible in Minicrypt. Our work focuses on tightly characterizing the hardness of computation assumption necessary and sufficient for optimal fair coin-tossing within Cryptomania, outside Minicrypt. Haitner, Makriyannis, Nissim, Omri, Shaltiel, and Silbak (FOCS--2018 and TCC--2018) proved that better than $1/\sqrt r$-unfairness, for any constant $r$, implies the existence of a key-agreement protocol.
We prove that any coin-tossing protocol using public-key encryption (or, multi-round key agreement protocols) in a black-box manner must be $1/\sqrt r$-unfair. Next, our work entirely characterizes the additional power of secure function evaluation functionalities for optimal fair coin-tossing. We augment the model with an idealized secure function evaluation of $f$, a.k.a., the $f$-hybrid. If $f$ is complete, that is, oblivious transfer is possible in the $f$-hybrid, then optimal fair coin-tossing is also possible in the $f$-hybrid. On the other hand, if $f$ is not complete, then a coin-tossing protocol using public-key encryption in a black-box manner in the $f$-hybrid is at least $1/\sqrt r$-unfair.
EUROCRYPT 2021: Leakage-resilience of the Shamir Secret-sharing Scheme against Physical-bit Leakages
Abstract
Efficient Reed-Solomon code reconstruction algorithms, for example, by Guruswami and Wootters (STOC--2016), translate into local leakage attacks on Shamir secret-sharing schemes over characteristic-2 fields. However, Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO--2018) showed that the Shamir secret-sharing scheme over prime fields is leakage-resilient to one-bit local leakage if the reconstruction threshold is roughly 0.87 times the total number of parties. In several application scenarios, like secure multi-party multiplication, the reconstruction threshold must be at most half the number of parties. Furthermore, the number of leakage bits that the Shamir secret-sharing scheme is resilient to is also unclear.
Towards this objective, we study the Shamir secret-sharing scheme's leakage-resilience over a prime field $F$ of order $p$. The parties' secret shares, which are elements of $F$, are naturally represented as $\lambda$-bit binary strings encoding the elements $\{0,1,\dotsc,p-1\}$. In our leakage model, the adversary can independently probe $m$ bit-locations from each secret share. The inspiration for considering this leakage model stems from the impact that the study of oblivious transfer combiners had on general correlation extraction algorithms, and the significant influence that protecting circuits from probing attacks has had on leakage-resilient secure computation.
Consider arbitrary reconstruction threshold $k\geq 2$, physical bit-leakage parameter $m\geq 1$, and the number of parties $n\geq 1$. We prove that Shamir's secret-sharing scheme with random evaluation places is leakage-resilient with high probability when the order of the field $F$ is sufficiently large; ignoring polylogarithmic factors, one needs to ensure that $\log|F| \geq n/k$. Our result, excluding polylogarithmic factors, states that Shamir's scheme is secure as long as the total amount of leakage $m\cdot n$ is less than the entropy $k\cdot\lambda$ introduced by the Shamir secret-sharing scheme. Note that our result holds even for small constant values of the reconstruction threshold $k$, which is essential to several application scenarios.
To complement this positive result, we present a physical-bit leakage attack for $m=1$ physical bit-leakage from $n=k$ secret shares and any prime field $F$ satisfying $|F| \equiv 1 \pmod k$. In particular, there are (roughly) $|F|^{n-k+1}$ such vulnerable choices for the $n$-tuple of evaluation places. We lower-bound the advantage of this attack for small values of the reconstruction threshold, like $k=2$ and $k=3$, and any $|F| \equiv 1 \pmod k$. In general, we present a formula calculating our attack's advantage for every $k$ as $|F|\rightarrow\infty$.
Technically, our positive result relies on Fourier analysis, analytic properties of proper rank-$r$ generalized arithmetic progressions, and B\'ezout's theorem to bound the number of solutions to an equation over finite fields. The analysis of our attack relies on determining the ``discrepancy'' of the Irwin-Hall distribution. A probability distribution's discrepancy is a new property of distributions that our work introduces, which is of potential independent interest.
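To make the leakage model concrete, here is a small Python sketch (toy parameters, for illustration only) of Shamir secret sharing over a prime field with random evaluation places, together with a physical-bit leakage function that probes the same $m$ bit positions of every share's binary representation.

# Toy Shamir secret sharing over a prime field with a physical-bit leakage function
# (illustration of the leakage model only; parameters are far too small to be meaningful).
import random

p = 2**13 - 1          # toy prime field; shares are lambda = 13-bit strings
lam = 13
k, n = 2, 5            # reconstruction threshold k, number of parties n
evals = random.sample(range(1, p), n)   # random distinct evaluation places, as in the abstract

def share(secret):
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p for x in evals]

def reconstruct(idx, shares):
    # Lagrange interpolation at 0 from the k shares indexed by idx.
    total = 0
    for a in idx:
        num, den = 1, 1
        for b in idx:
            if a != b:
                num = num * (-evals[b]) % p
                den = den * (evals[a] - evals[b]) % p
        total = (total + shares[a] * num * pow(den, p - 2, p)) % p
    return total

def physical_bit_leakage(shares, positions):
    # Each share, an element of {0,...,p-1}, is written as a lam-bit string; the adversary
    # independently probes the same m bit positions of every share.
    return [[(s >> pos) & 1 for pos in positions] for s in shares]

secret = 4242
sh = share(secret)
assert reconstruct([0, 3], sh) == secret
print(physical_bit_leakage(sh, positions=[0, 7]))   # m = 2 probed bits per share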
CRYPTO 2021: Constructing Locally Leakage-resilient Linear Secret-sharing Schemes
Abstract
Innovative side-channel attacks have repeatedly falsified the assumption that cryptographic implementations are opaque black-boxes. Therefore, it is essential to ensure cryptographic constructions' security even when information leaks via unforeseen avenues. One such fundamental cryptographic primitive is the secret-sharing schemes, which underlies nearly all threshold cryptography. Our understanding of the leakage-resilience of secret-sharing schemes is still in its preliminary stage.
This work studies locally leakage-resilient linear secret-sharing schemes. An adversary can leak $m$ bits of arbitrary local leakage from each of the $n$ secret shares. However, in a locally leakage-resilient secret-sharing scheme, the leakage's joint distribution reveals no additional information about the secret.
For every constant $m$, we prove that the Massey secret-sharing scheme corresponding to a random linear code of dimension $k$ (over sufficiently large prime fields) is locally leakage-resilient, where $k/n > 1/2$ is a constant. The previous best construction by Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO--2018) needed $k/n > 0.907$. A technical challenge arises because the number of all possible $m$-bit local leakage functions is exponentially larger than the number of random linear codes. Our technical innovation begins with identifying an appropriate pseudorandomness-inspired family of tests; passing them suffices to ensure leakage-resilience. We show that most linear codes pass all tests in this family. This Monte Carlo construction of a locally leakage-resilient linear secret-sharing scheme has applications to leakage-resilient secure computation.
Furthermore, we highlight a crucial bottleneck for all the analytical approaches in this line of work. Benhamouda et al. introduced an analytical proxy to study the leakage-resilience of secret-sharing schemes; if the proxy is small, then the scheme is leakage-resilient. However, we present a one-bit local leakage function demonstrating that the converse is false, motivating the need for new analytically well-behaved functions that capture leakage-resilience more accurately.
Technically, the analysis involves probabilistic and combinatorial techniques and (discrete) Fourier analysis. The family of new ``tests'' capturing local leakage functions, we believe, is of independent and broader interest.
CRYPTO 2020: Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing
Abstract
A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC--1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) $1/r$ (in the statistical distance) in an $r$-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond $1/r$.
In a seminal result, Moran, Naor, and Segev (TCC--2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness of computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS--1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the necessity of one-way functions' existence; a significantly weaker hardness of computation assumption compared to the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known.
Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC--1989) approach of proving black-box separations, our work considers any $r$-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by $1/\sqrt r$ by making polynomially-many additional queries to the random oracle. As a consequence, our result proves that the $r$-message coin-tossing protocol of Blum (COMPCON--1982) and Cleve (STOC--1986), which uses one-way functions in a black-box manner, is the best possible protocol because an adversary cannot change the honest party's output distribution by more than $1/\sqrt r$.
Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC--2011), Haitner, Omri, and Zarosim (TCC--2013), and Dachman-Soled, Mahmoody, and Malkin (TCC--2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof techniques for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
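For reference, the sketch below shows the basic Blum-style coin toss in Python, with a hash-based commitment standing in for the one-way-function-based commitment (the $r$-message protocols referenced above iterate this idea); it illustrates the protocol shape only, not the paper's separation argument.

# Toy Blum-style coin toss using a hash-based commitment in place of the random oracle
# (illustration only; the r-message protocols referenced above iterate this basic idea).
import hashlib, secrets

def commit(bit):
    r = secrets.token_bytes(16)                              # commitment randomness
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), r   # (commitment, opening)

def verify(com, bit, r):
    return hashlib.sha256(bytes([bit]) + r).hexdigest() == com

# Round 1: Alice commits to a random bit a and sends the commitment.
a = secrets.randbelow(2)
com, opening = commit(a)

# Round 2: Bob sends a random bit b.
b = secrets.randbelow(2)

# Round 3: Alice opens her commitment; if she aborts or the opening fails,
# the honest party falls back to a default output.
if verify(com, a, opening):
    outcome = a ^ b
else:
    outcome = 0
print("coin:", outcome)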
CRYPTO 2019: Explicit Rate-1 Non-malleable Codes for Local Tampering
Abstract
This paper constructs high-rate non-malleable codes in the information-theoretic plain model against tampering functions with bounded locality. We consider $\delta$-local tampering functions; namely, each output bit of the tampering function is a function of (at most) $\delta$ input bits. This work presents the first explicit and efficient rate-1 non-malleable code for $\delta$-local tampering functions, where $\delta = \xi \lg n$ and $\xi < 1$ is any positive constant. As a corollary, we construct the first explicit rate-1 non-malleable code against $\mathsf{NC}^0$ tampering functions.
Before our work, no explicit construction for a constant-rate non-malleable code was known even for the simplest 1-local tampering functions. Ball et al. (EUROCRYPT--2016) and Chattopadhyay and Li (STOC--2017) provided the first explicit non-malleable codes against $\delta$-local tampering functions. However, these constructions are rate-0 even when the tampering functions have 1-locality. In the CRS model, Faust et al. (EUROCRYPT--2014) constructed efficient rate-1 non-malleable codes for $\delta = O(\log n)$-local tampering functions.
Our main result is a general compiler that bootstraps a rate-0 non-malleable code against leaky input and output local tampering functions into a rate-1 non-malleable code against $\xi \lg n$-local tampering functions, for any positive constant $\xi < 1$. Our explicit construction instantiates this compiler using an appropriate encoding by Ball et al. (EUROCRYPT--2016).
Program Committees
- TCC 2023
Coauthors
- Nico Döttling (1)
- Sanjam Garg (5)
- Aarushi Goel (1)
- Vipul Goyal (1)
- Divya Gupta (1)
- Mohammad Hajiabadi (1)
- Abhishek Jain (2)
- Dimitris Kolonelos (1)
- Roman Langrehr (1)
- Hemanta K. Maji (6)
- Pratyay Mukherjee (2)
- Hai H. Nguyen (2)
- Adam O'Neill (1)
- Anat Paskin-Cherniavsky (3)
- Guru Vamsi Policharla (1)
- Sruthi Sekar (1)
- Rohit Sinha (1)
- Akshayaram Srinivasan (1)
- Tom Suad (3)
- Mingyuan Wang (13)
- Xiuyu Ye (1)
- Albert Yu (1)
- Yinuo Zhang (1)