International Association for Cryptologic Research

CryptoDB

Wei-Kai Lin

ORCID: 0000-0001-6012-7124

Publications

2023
EUROCRYPT
NanoGRAM: Garbled RAM with $\widetilde{O}(\log N)$ Overhead
Andrew Park, Wei-Kai Lin, Elaine Shi
We propose a new garbled RAM construction called NanoGRAM, which achieves an amortized cost of $\widetilde{O}(\lambda \cdot (W \log N + \log^3 N))$ bits per memory access, where $\lambda$ is the security parameter, $W$ is the block size, and $N$ is the total number of blocks, and $\widetilde{O}(\cdot)$ hides $poly\log\log$ factors. For sufficiently large blocks where $W = \Omega(\log^2 N)$, our scheme achieves $\widetilde{O}(\lambda \cdot W \log N)$ cost per memory access, where the dependence on $N$ is optimal (barring $poly\log\log$ factors) in terms of the evaluator's runtime. Our asymptotic performance matches even the {\it interactive} state of the art (modulo $poly\log\log$ factors), that is, running Circuit ORAM atop garbled circuits, and yet we remove the logarithmic number of interactions necessary in this baseline. Furthermore, we achieve an asymptotic improvement over the recent work of Heath et al.~(Eurocrypt '22). Our scheme adopts the same assumptions as the mainstream literature on practical garbled circuits, i.e., circular correlation-robust hashes or a random oracle. We evaluate the concrete performance of NanoGRAM and compare it with a couple of baselines that are asymptotically less efficient. We show that NanoGRAM starts to outperform the na\"ive linear-scan garbled RAM at a memory size of $N = 2^9$ and starts to outperform the recent construction of Heath et al. at $N = 2^{13}$. Finally, as a by-product, we also show the existence of a garbled RAM scheme assuming only one-way functions, with an amortized cost of $\widetilde{O}(\lambda^2 \cdot (W \log N + \log^3 N))$ per memory access. Again, the dependence on $N$ is nearly optimal for blocks of size $W = \Omega(\log^2 N)$ bits.
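As a rough illustration of how the quoted cost terms scale with $N$, the sketch below plugs the asymptotic expressions from the abstract into a back-of-envelope model. All constants and $poly\log\log$ factors are dropped, so it will not reproduce the paper's concrete crossover points at $N = 2^9$ and $N = 2^{13}$; it only shows why the linear-scan baseline is overtaken as $N$ grows.

```python
import math

# Back-of-envelope cost model (constants and polyloglog factors dropped),
# using only the asymptotic expressions quoted in the abstract above.
# lam = security parameter, W = block size in bits, N = number of blocks.

def nanogram_bits(lam, W, N):
    """~ lambda * (W log N + log^3 N) bits per memory access."""
    return lam * (W * math.log2(N) + math.log2(N) ** 3)

def linear_scan_bits(lam, W, N):
    """Naive garbled RAM: touch all N blocks, ~ lambda * W * N bits per access."""
    return lam * W * N

if __name__ == "__main__":
    lam, W = 128, 256  # illustrative parameter choices, not from the paper
    for logN in (8, 10, 12, 16, 20):
        N = 2 ** logN
        print(f"N=2^{logN:2d}  nanogram~{nanogram_bits(lam, W, N):.3e}  "
              f"linear-scan~{linear_scan_bits(lam, W, N):.3e}")
```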
2023
EUROCRYPT
Optimal Single-Server Private Information Retrieval
We construct a single-server pre-processing Private Information Retrieval (PIR) scheme with optimal bandwidth and server computation (up to poly-logarithmic factors), assuming the hardness of the Learning With Errors (LWE) problem. Our scheme achieves amortized $\widetilde{O}_{\lambda}(\sqrt{n})$ server and client computation and $\widetilde{O}_\lambda(1)$ bandwidth per query, completes in a single roundtrip, and requires $\widetilde{O}_\lambda(\sqrt{n})$ client storage. In particular, we achieve a significant reduction in bandwidth over the state-of-the-art scheme by Corrigan-Gibbs, Henzinger, and Kogan (Eurocrypt '22): their scheme requires as much as $\widetilde{O}_{\lambda}(\sqrt{n})$ bandwidth per query, with computational and storage overhead comparable to ours.
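To make the bandwidth gap concrete, here is a back-of-envelope instantiation for a hypothetical database of $n = 2^{30}$ records; the hidden $\mathrm{poly}(\lambda, \log n)$ factors inside $\widetilde{O}_\lambda(\cdot)$ are ignored, so this shows only the scaling, not concrete costs:
$$ \sqrt{n} = 2^{15} = 32{,}768, \qquad \frac{\widetilde{O}_{\lambda}(\sqrt{n})\ \text{bits per query}}{\widetilde{O}_{\lambda}(1)\ \text{bits per query}} \approx 2^{15} \quad \text{(up to } \mathrm{poly}(\lambda, \log n) \text{ factors).} $$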
2023
JOFC
Oblivious RAM with Worst-Case Logarithmic Overhead
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with worst-case $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions as logarithmic overhead is necessary. The previous best logarithmic overhead construction only guarantees it in an amortized sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of worst-case overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the “shuffled inputs” assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC '97), that seem to be fundamentally too weak to be applied on modern ORAM constructions.
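Since the abstract contrasts amortized with worst-case overhead, the toy sketch below illustrates the generic idea of de-amortization, i.e., spreading a periodic $\Theta(N)$ rebuild over subsequent operations. It is only meant to clarify that distinction and is not the paper's framework, which explicitly departs from such classic approaches.

```python
# Generic illustration of amortized vs. worst-case cost, NOT the paper's
# de-amortization framework. A hypothetical structure pays a Theta(N)
# "rebuild" every N operations; spreading that rebuild over the following
# operations turns Theta(N) spikes into O(1) extra work per operation.

class AmortizedStructure:
    def __init__(self, n):
        self.n, self.count = n, 0

    def access(self):
        self.count += 1
        if self.count % self.n == 0:
            return self.n          # occasional Theta(N) spike
        return 1                   # cheap access otherwise

class DeamortizedStructure:
    def __init__(self, n):
        self.n, self.count, self.pending = n, 0, 0

    def access(self):
        self.count += 1
        if self.count % self.n == 0:
            self.pending += self.n  # schedule the rebuild instead of doing it now
        done = min(2, self.pending)  # perform a constant chunk of deferred work
        self.pending -= done
        return 1 + done             # every access costs O(1)

if __name__ == "__main__":
    n = 1024
    a, d = AmortizedStructure(n), DeamortizedStructure(n)
    print("worst-case cost, amortized variant   :", max(a.access() for _ in range(4 * n)))
    print("worst-case cost, de-amortized variant:", max(d.access() for _ in range(4 * n)))
```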
2021
CRYPTO
A Logarithmic Lower Bound for Oblivious RAM (for all parameters)
Ilan Komargodski, Wei-Kai Lin
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds as well as in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters. We observe that there is a very natural setting of parameters in which \emph{no} non-trivial lower bound is known, not even ones in restricted models of computation (like the so-called balls-and-bins model). Let $N$ and $w$ be the number of cells and bit-size of cells, respectively, in the RAM that we wish to simulate obliviously. Denote by $b$ the cell bit-size of the ORAM. \emph{All} previous ORAM lower bounds have a multiplicative $w/b$ factor which makes them trivial in many settings of parameters of interest. In this work, we prove a new ORAM lower bound that captures this setting (and in all other settings it is at least as good as previous ones, quantitatively). We show that any ORAM must make (amortized) $$ \Omega\left(\log \left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) $$ memory probes for every logical operation. Here, $m$ denotes the bit-size of the local storage of the ORAM. Our lower bound implies that logarithmic overhead in accesses is necessary, even if $b \gg w$. Our lower bound is tight for \emph{all} settings of parameters, up to the $\log(b/w)$ factor. Our bound also extends to the non-colluding multi-server setting. As an application, we derive the first (unconditional) separation between the overhead needed for ORAMs in the \emph{online} vs.\ \emph{offline} models. Specifically, we show that when $w=\log N$ and $b,m \in poly\log N$, there exists an offline ORAM that makes (on average) $o(1)$ memory probes per logical operation while every online one must make $\Omega(\log N/\log\log N)$ memory probes per logical operation. No such previous separation was known for any setting of parameters, not even in the balls-and-bins model.
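As a sanity check on the stated bound, here is its instantiation for the parameter regime mentioned at the end of the abstract, $w = \log N$ and $b, m \in poly\log N$ (a worked rewriting of the formula above, not an additional claim):
$$ \Omega\left(\log\left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) = \Omega\left(\frac{\log N + \log\log N - O(\log\log N)}{O(\log\log N)}\right) = \Omega\left(\frac{\log N}{\log\log N}\right), $$
which matches the online lower bound used in the offline-vs-online separation.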
2021
CRYPTO
Oblivious RAM with Worst-Case Logarithmic Overhead
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions as logarithmic overhead is necessary. The previous best logarithmic overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the ``shuffled inputs'' assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC~'97), that seem to be fundamentally too weak to be applied on modern ORAM constructions.
2021
CRYPTO
Proof-Carrying Data without Succinct Arguments
Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce. Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD. Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments. Our results are supported by a modular and efficient implementation.
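The accumulation schemes mentioned above are built for statements about Pedersen commitments. The toy sketch below only illustrates the additive homomorphism of Pedersen commitments that makes such committed statements easy to combine; it is not the paper's accumulation scheme or its elliptic-curve instantiation, and the tiny hard-coded group parameters are purely illustrative.

```python
# Toy Pedersen commitments over a small Schnorr group, purely to illustrate
# the additive homomorphism; NOT the paper's construction, and the tiny
# parameters offer no security.
import secrets

Q = 1019          # prime order of the subgroup
P = 2 * Q + 1     # safe prime, P = 2039
G, H = 4, 9       # quadratic residues mod P, i.e. generators of the order-Q subgroup
                  # (in practice H must be sampled with an unknown dlog relation to G)

def commit(value: int, randomness: int) -> int:
    """Pedersen commitment C = G^value * H^randomness mod P."""
    return (pow(G, value % Q, P) * pow(H, randomness % Q, P)) % P

if __name__ == "__main__":
    v1, r1 = 7, secrets.randbelow(Q)
    v2, r2 = 35, secrets.randbelow(Q)
    c1, c2 = commit(v1, r1), commit(v2, r2)
    # Homomorphism: the product of commitments commits to the sum of values.
    assert (c1 * c2) % P == commit(v1 + v2, r1 + r2)
    print("homomorphic combination verified")
```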
2020
EUROCRYPT
OptORAMa: Optimal Oblivious RAM
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security. A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18), who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions. One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms, whose running time is $O(n \cdot\log n)$ or $O(n \cdot\log \log n)$, respectively.
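The sketch below pins down only the input/output behavior of tight compaction as described above, using the naive approach. The naive version leaks the marked positions through its access pattern; the point of the paper's building block is achieving the same behavior obliviously, in deterministic $O(n)$ time.

```python
# Functional specification of tight compaction: move all marked elements to
# the front of the array. This naive version is NOT oblivious -- its memory
# accesses reveal which positions are marked.

def tight_compaction(items, marked):
    """Return a permutation of `items` with every marked element in front.

    `marked[i]` says whether `items[i]` is marked.
    """
    front = [x for x, m in zip(items, marked) if m]
    back = [x for x, m in zip(items, marked) if not m]
    return front + back

if __name__ == "__main__":
    items = ["a", "b", "c", "d", "e"]
    marked = [False, True, False, True, False]
    print(tight_compaction(items, marked))  # ['b', 'd', 'a', 'c', 'e']
```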
2018
TCC
Game Theoretic Notions of Fairness in Multi-party Coin Toss
Coin toss has been extensively studied in the cryptography literature, and the well-accepted notion of fairness (henceforth called strong fairness) requires that a corrupt coalition cannot cause non-negligible bias. It is well-understood that two-party coin toss is impossible if one of the parties can prematurely abort; further, this impossibility generalizes to multiple parties with a corrupt majority (even if the adversary is computationally bounded and fail-stop only). Interestingly, the original proposal of (two-party) coin toss protocols by Blum in fact considered a weaker notion of fairness: imagine that the (randomized) transcript of the coin toss protocol defines a winner among the two parties. Now Blum’s notion requires that a corrupt party cannot bias the outcome in its favor (but self-sacrificing bias is allowed). Blum showed that this weak notion is indeed attainable for two parties assuming the existence of one-way functions. In this paper, we ask a very natural question which, surprisingly, has been overlooked by the cryptography literature: can we achieve Blum’s weak fairness notion in multi-party coin toss? What is particularly interesting is whether this relaxation allows us to circumvent the corrupt majority impossibility that pertains to strong fairness. Even more surprisingly, in answering this question, we realize that it is not even understood how to define weak fairness for multi-party coin toss. We propose several natural notions drawing inspiration from game theory, all of which equate to Blum’s notion for the special case of two parties. We show, however, that for multiple parties, these notions vary in strength and lead to different feasibility and infeasibility results.
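For readers unfamiliar with Blum's protocol referenced above, here is a toy transcript of the two-party commit-then-reveal coin toss. The hash-based commitment and the "aborting party loses" convention are illustrative assumptions, not a formally analyzed instantiation.

```python
# Toy transcript of Blum's two-party commit-then-reveal coin toss, the
# protocol whose weak-fairness notion the paper generalizes to many parties.
import hashlib
import secrets

def commit(bit: int, nonce: bytes) -> bytes:
    """Commit to a bit as H(bit || nonce); hiding/binding assumed heuristically."""
    return hashlib.sha256(bytes([bit]) + nonce).digest()

def blum_coin_toss() -> int:
    # Party A commits to a random bit and sends the commitment.
    a, nonce = secrets.randbits(1), secrets.token_bytes(32)
    com = commit(a, nonce)
    # Party B, having seen only the commitment, replies with its own bit.
    b = secrets.randbits(1)
    # Party A opens; B checks the opening against the commitment.
    assert commit(a, nonce) == com
    # The coin is the XOR of the two contributions. If A instead aborted after
    # seeing b, Blum's weak notion treats A as the loser: A cannot bias the
    # coin in its own favor, only sacrifice itself.
    return a ^ b

if __name__ == "__main__":
    print("coin:", blum_coin_toss())
```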
2017
ASIACRYPT
2016
TCC