International Association for Cryptologic Research


CryptoDB

Kai-Min Chung

Affiliation: Academia Sinica

Publications

2021 EUROCRYPT: On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work
We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). To start off with, we offer a concise exposition of the technique, which easily extends to the parallel-query QROM, where in each query-round the considered algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, for typical examples the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results but also obtaining new results. Our main target is the hardness of finding a $q$-chain with fewer than $q$ parallel queries, i.e., a sequence $x_0, x_1, \ldots, x_q$ with $x_i = H(x_{i-1})$ for all $1 \leq i \leq q$. The above problem of finding a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete cryptographic application of our techniques, we prove quantum security of the “Simple Proofs of Sequential Work” by Cohen and Pietrzak.
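
As an illustration of the hash-chain problem that underlies proofs of sequential work, here is a minimal classical sketch; SHA-256 standing in for the random oracle $H$ is an illustrative assumption, not the paper's model. Computing a $q$-chain classically forces $q$ sequential oracle calls, and the paper extends this lower bound to quantum algorithms making parallel queries.

```python
import hashlib

def H(x: bytes) -> bytes:
    """Stand-in for the random oracle; SHA-256 is an illustrative choice."""
    return hashlib.sha256(x).digest()

def hash_chain(x0: bytes, q: int) -> list:
    """Compute the chain x_0, x_1, ..., x_q with x_i = H(x_{i-1}).

    Each x_i depends on the previous output, so a classical algorithm needs
    q sequential oracle calls; the paper's result extends this to quantum
    algorithms restricted to fewer than q parallel query-rounds.
    """
    chain = [x0]
    for _ in range(q):
        chain.append(H(chain[-1]))
    return chain

# Example: a 5-chain from an arbitrary seed (truncated hex for readability).
print([h.hex()[:8] for h in hash_chain(b"seed", 5)])
```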

2020 TCC: Classical Verification of Quantum Computations with Efficient Verifier
In this paper, we extend the protocol of classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in the following three steps:

- We show that parallel repetition of Mahadev's protocol has negligible soundness error. This gives the first constant-round CVQC protocol with negligible soundness error. In this part, we only assume the quantum hardness of the learning with errors (LWE) problem, as in Mahadev's work.
- We construct a two-round CVQC protocol in the quantum random oracle model (QROM), where a cryptographic hash function is idealized to be a random function. This is obtained by applying the Fiat-Shamir transform to the parallel-repetition version of Mahadev's protocol (a sketch of this step appears after the list).
- We construct a two-round CVQC protocol with an efficient verifier in the CRS+QRO model, where both prover and verifier can access a (classical) common reference string generated by a trusted third party in addition to having quantum access to the QRO. Specifically, the verifier can verify a $\mathsf{QTIME}(T)$ computation in time $\mathsf{poly}(\lambda, \log T)$, where $\lambda$ is the security parameter. For soundness, we assume that a standard-model instantiation of our two-round protocol with a concrete hash function (say, SHA-3) is sound, as well as the existence of post-quantum indistinguishability obfuscation and post-quantum fully homomorphic encryption, in addition to the quantum hardness of the LWE problem.
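
As a rough illustration of the Fiat-Shamir step in the second item above, the sketch below derives one challenge bit per parallel repetition by hashing the prover's first message, which is what removes the verifier's challenge message and collapses the protocol to two rounds. The use of SHAKE-256 and the bit encoding are illustrative assumptions, not the paper's concrete instantiation.

```python
import hashlib

def fiat_shamir_challenges(first_message: bytes, repetitions: int) -> list:
    """Derive one challenge bit per parallel repetition by hashing the
    prover's first message, so no second verifier message is needed.
    SHAKE-256 stands in for the quantum random oracle here.
    """
    stream = hashlib.shake_256(first_message).digest((repetitions + 7) // 8)
    return [(stream[i // 8] >> (i % 8)) & 1 for i in range(repetitions)]

# The prover hashes its commitments for all repetitions to obtain the
# challenges, then sends commitments and responses in a single message.
challenges = fiat_shamir_challenges(b"commitments-for-all-repetitions", 64)
print(challenges[:8])
```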

2019 EUROCRYPT: A Quantum-Proof Non-malleable Extractor
In privacy amplification, two mutually trusted parties aim to amplify the secrecy of an initial shared secret X in order to establish a shared private key K by exchanging messages over an insecure communication channel. If the channel is authenticated, the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries. In the case that the channel is not authenticated, this simple solution is no longer secure. Nevertheless, Dodis and Wichs (STOC ’09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor. We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS ’12) and is able to extract from sources of min-entropy rate larger than 1/2. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, due to Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.
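
For the authenticated-channel case mentioned above, a single seeded extraction suffices. The sketch below uses two-universal hashing via multiplication in GF(2^128), a standard construction that the (quantum-proof) leftover hash lemma shows is a strong extractor; the field size, output length, and the helper names gf128_mul and extract are illustrative assumptions and not the paper's non-malleable construction.

```python
import secrets

def gf128_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^128) modulo x^128 + x^7 + x^2 + x + 1."""
    reduction = (1 << 128) | 0x87
    prod = 0
    for i in range(128):
        if (b >> i) & 1:
            prod ^= a << i
    for i in range(254, 127, -1):  # reduce the up-to-255-bit product
        if (prod >> i) & 1:
            prod ^= reduction << (i - 128)
    return prod

def extract(x: int, seed: int, out_bits: int) -> int:
    """Two-universal hash: the low out_bits of seed * x in GF(2^128).

    If x has enough min-entropy conditioned on the adversary's (quantum)
    side information, the output is close to uniform even given the seed.
    """
    return gf128_mul(x, seed) & ((1 << out_bits) - 1)

# Authenticated-channel privacy amplification: Alice sends a fresh seed in
# the clear; both parties output Ext(X, seed) as the key.
shared_secret = secrets.randbits(128)  # stand-in for the weak shared secret X
seed = secrets.randbits(128)           # public, sent over the authenticated channel
key = extract(shared_secret, seed, 64)
```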

2019 EUROCRYPT: On Quantum Advantage in Information Theoretic Single-Server PIR
In (single-server) Private Information Retrieval (PIR), a server holds a large database $\mathtt{DB}$ of size n, and a client holds an index $i \in [n]$ and wishes to retrieve $\mathtt{DB}[i]$ without revealing i to the server. It is well known that information theoretic privacy, even against an “honest but curious” server, requires $\Omega(n)$ communication complexity. This is true even if quantum communication is allowed and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database (“input purification attack”). Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notably, a protocol due to Le Gall (ToC 2012) with communication complexity $O(\sqrt{n})$, and a protocol by Kerenidis et al. (QIC 2016) with communication complexity $O(\log(n))$ and $O(n)$ shared entanglement. We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two protocols above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e., the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
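
The $\Omega(n)$ baseline referred to above is met by the trivial protocol in which the server sends the whole database: the response does not depend on the queried index, so it is perfectly private. The sketch below is purely illustrative and is not one of the protocols discussed in the paper.

```python
def trivial_pir_server(db: list) -> list:
    """The response is the entire database, independent of the client's
    index, so the server learns nothing about i; communication is O(n),
    matching the information-theoretic lower bound discussed above."""
    return list(db)

def trivial_pir_client(response: list, i: int) -> int:
    """The client simply reads the desired entry locally."""
    return response[i]

db = [7, 1, 4, 9]
assert trivial_pir_client(trivial_pir_server(db), 2) == db[2]
```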

2019 TCC: Adaptively Secure Garbling Schemes for Parallel Computations
Kai-Min Chung and Luowen Qian
We construct the first adaptively secure garbling scheme based on standard public-key assumptions for garbling a circuit $C: \{0, 1\}^n \mapsto \{0, 1\}^m$ that simultaneously achieves a near-optimal online complexity $n + m + \textsf{poly}(\lambda, \log |C|)$ (where $\lambda$ is the security parameter) and preserves the parallel efficiency for evaluating the garbled circuit; namely, if the depth of C is d, then the garbled circuit can be evaluated in parallel time $d \cdot \textsf{poly}(\log |C|, \lambda)$. In particular, our construction improves over the recent seminal work of [GS18], which constructs the first adaptively secure garbling scheme with a near-optimal online complexity under the same assumptions, but whose garbled circuit can only be evaluated gate by gate in a sequential manner. Our construction combines their novel idea of linearization with several new ideas to achieve parallel efficiency without compromising online complexity. We take one step further and construct the first adaptively secure garbling scheme for parallel RAM (PRAM) programs under standard assumptions that preserves parallel efficiency. The previous such constructions we are aware of rely on strong assumptions like indistinguishability obfuscation. Our construction is based on the work of [GOS18] for adaptively secure garbled RAM, but again introduces several new ideas to handle parallel RAM computation, which may be of independent interest. As an application, this yields the first constant-round secure computation protocol for persistent PRAM programs in the malicious setting from standard assumptions.
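
To make the "parallel time $d \cdot \textsf{poly}(\log |C|, \lambda)$" claim concrete, the toy sketch below evaluates a levelized circuit one depth-level at a time, with all gates in a level processed in parallel. Plain Boolean gates stand in for decrypting garbled gates, so this only illustrates the evaluation order, not the garbling scheme itself; the circuit encoding is an assumption made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_by_levels(levels, inputs):
    """Evaluate a circuit given as a list of levels, where each gate is a
    tuple (out_wire, op, in_wire_a, in_wire_b). Gates in a level only read
    wires from earlier levels, so each level can run in parallel and the
    total parallel time scales with the depth d."""
    wires = dict(inputs)
    ops = {"AND": lambda a, b: a & b, "XOR": lambda a, b: a ^ b}
    with ThreadPoolExecutor() as pool:
        for level in levels:
            results = pool.map(
                lambda g: (g[0], ops[g[1]](wires[g[2]], wires[g[3]])), level)
            wires.update(results)
    return wires

levels = [
    [("w2", "AND", "w0", "w1"), ("w3", "XOR", "w0", "w1")],  # depth 1
    [("w4", "XOR", "w2", "w3")],                             # depth 2
]
print(evaluate_by_levels(levels, {"w0": 1, "w1": 0}))
```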

2018 EUROCRYPT

2018 TCC: Game Theoretic Notions of Fairness in Multi-party Coin Toss
Coin toss has been extensively studied in the cryptography literature, and the well-accepted notion of fairness (henceforth called strong fairness) requires that a corrupt coalition cannot cause non-negligible bias. It is well understood that two-party coin toss is impossible if one of the parties can prematurely abort; further, this impossibility generalizes to multiple parties with a corrupt majority (even if the adversary is computationally bounded and fail-stop only). Interestingly, the original proposal of (two-party) coin toss protocols by Blum in fact considered a weaker notion of fairness: imagine that the (randomized) transcript of the coin toss protocol defines a winner among the two parties. Now Blum’s notion requires that a corrupt party cannot bias the outcome in its favor (but self-sacrificing bias is allowed). Blum showed that this weak notion is indeed attainable for two parties assuming the existence of one-way functions. In this paper, we ask a very natural question which, surprisingly, has been overlooked by the cryptography literature: can we achieve Blum’s weak fairness notion in multi-party coin toss? What is particularly interesting is whether this relaxation allows us to circumvent the corrupt-majority impossibility that pertains to strong fairness. Even more surprisingly, in answering this question, we realize that it is not even understood how to define weak fairness for multi-party coin toss. We propose several natural notions drawing inspiration from game theory, all of which equate to Blum’s notion for the special case of two parties. We show, however, that for multiple parties, these notions vary in strength and lead to different feasibility and infeasibility results.
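
Blum's two-party protocol mentioned above is easy to sketch: one party commits to a random bit, the other replies with a bit in the clear, the commitment is opened, and the coin is the XOR. A party that aborts rather than open can only hurt itself, which is exactly the weak fairness notion discussed here. The hash-based commitment below is an illustrative stand-in; Blum's result needs only commitments from one-way functions.

```python
import hashlib
import secrets

def commit(bit: int):
    """Hash-based commitment to a single bit; returns (commitment, opening).
    Illustrative stand-in for a commitment scheme from one-way functions."""
    opening = secrets.token_bytes(32) + bytes([bit])
    return hashlib.sha256(opening).digest(), opening

def open_commitment(commitment: bytes, opening: bytes) -> int:
    assert hashlib.sha256(opening).digest() == commitment, "invalid opening"
    return opening[-1]

# Blum's two-party coin toss:
a = secrets.randbits(1)
com, opening = commit(a)                   # 1. Alice commits to her bit, sends com
b = secrets.randbits(1)                    # 2. Bob replies with his bit in the clear
coin = open_commitment(com, opening) ^ b   # 3. Alice opens; the coin is a XOR b
# If Alice refuses to open, Bob treats her as the loser, so a corrupt party
# can only bias the coin against itself (Blum's weak fairness).
print(coin)
```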

2017 ASIACRYPT
2016 TCC
2016 TCC
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 TCC
2015 TCC
2015 CRYPTO
2015 CRYPTO
2014 CRYPTO
2014 TCC
2014 TCC
2014 EPRINT
2014 EPRINT
2014 ASIACRYPT
2013 TCC
2013 ASIACRYPT
2012 TCC
2011 CRYPTO
2010 TCC

2010 EPRINT: Improved Delegation of Computation using Fully Homomorphic Encryption
Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive 2009/547), we use fully homomorphic encryption to design improved schemes for delegating computation. In such schemes, a delegator outsources the computation of a function $F$ on many, dynamically chosen inputs $x_i$ to a worker in such a way that it is infeasible for the worker to make the delegator accept a result other than $F(x_i)$. The "online stage" of the Gennaro et al. scheme is very efficient: the parties exchange two messages, the delegator runs in time $\mathsf{poly}(\log T)$, and the worker runs in time $\mathsf{poly}(T)$, where $T$ is the time complexity of $F$. However, the "offline stage" (which depends on the function $F$ but not the inputs to be delegated) is inefficient: the delegator runs in time $\mathsf{poly}(T)$ and generates a public key of length $\mathsf{poly}(T)$ that needs to be accessed by the worker during the online stage. Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests $\mathsf{poly}(T)$ time in the offline stage, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline stage to $\mathsf{poly}(\log T)$ at the price of a 4-message (offline) interaction with a $\mathsf{poly}(T)$-time worker (which need not be the same as the workers used in the online stage). Finally, we describe a "pipelined" implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).
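
The sketch below is only a schematic, runnable walk-through of the offline/online structure described in this abstract: every cryptographic step is replaced by a trivial placeholder (the real scheme relies on fully homomorphic encryption), and only the message flow and which party pays which cost are meant to track the description above.

```python
def offline_stage(F):
    """Depends on the function F but not on the inputs. In the paper's first
    construction the delegator still spends poly(T) time here, but no longer
    needs to publish a large public key."""
    return {"F": F}  # placeholder for the real offline material

def online_stage(setup, x):
    """Two-message online stage."""
    query = ("encoded", x)           # message 1, delegator -> worker: poly(log T) work
    response = setup["F"](query[1])  # message 2, worker -> delegator: poly(T) work
    # The delegator would now verify in poly(log T) time and accept the
    # result only if it is guaranteed to equal F(x); verification is omitted
    # in this placeholder sketch.
    return response

print(online_stage(offline_stage(lambda x: x * x), 7))  # -> 49
```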

2010 ASIACRYPT
2010 CRYPTO

Program Committees

PKC 2020
TCC 2020
TCC 2019
CRYPTO 2019
EUROCRYPT 2019
PKC 2018
TCC 2017
ASIACRYPT 2017
ASIACRYPT 2015
TCC 2015
ASIACRYPT 2014
TCC 2014
Crypto 2013