CryptoDB
Mingqi Lu
Publications
Year    Venue    Title
2024    CRYPTO   Memory-Sample Lower Bounds for LWE
Abstract
In this work, we show memory-sample lower bounds for the fundamental problem of learning with errors (LWE). In this problem, a learner tries to learn a uniformly sampled x ∼ ℤ_q^n from a stream of inner products plus errors sampled from discrete Gaussians of scale σ. Any learning algorithm requires either at least Ω(k^2/log(q/σ)) bits of memory, or at least 2^{Ω(k)} many samples, where k = Ω(n·log(1/(1−ϕ(q)/q))). This matches the information-theoretic upper bound when q is prime.
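To make the sampling model concrete, here is a minimal Python sketch of the LWE sample stream described above. The parameter values n, q, σ and the rounded-continuous-Gaussian error are illustrative assumptions, not the paper's choices (the paper uses errors from a discrete Gaussian of scale σ).

```python
import numpy as np

# Hypothetical toy parameters (not from the paper): dimension n, modulus q,
# and Gaussian scale sigma; the paper's bounds are asymptotic in these.
n, q, sigma = 16, 97, 3.0
rng = np.random.default_rng(0)

# The secret the streaming learner tries to recover: x drawn uniformly from Z_q^n.
x = rng.integers(0, q, size=n)

def lwe_sample():
    """One stream element (a_i, b_i): a uniform a_i and its noisy inner product.

    The error should come from a discrete Gaussian of scale sigma; rounding a
    continuous Gaussian is used here purely as an illustrative stand-in.
    """
    a = rng.integers(0, q, size=n)
    e = int(round(rng.normal(0.0, sigma)))
    b = (int(a @ x) + e) % q
    return a, b

# A bounded-memory learner sees such pairs one at a time; the paper lower-bounds
# the memory/sample trade-off of any such learner.
stream = [lwe_sample() for _ in range(5)]
```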
In addition to LWE, our bounds apply to a wide range of learning problems. Following the work of Garg, Raz, Tal [STOC 2018], we describe a learning problem by a learning matrix M : A × X → {0, 1, ⋯, q−1} together with an error matrix ε. The learner tries to learn x ∼ X from a stream of samples (a_1, b_1), ⋯, (a_m, b_m), where for every i, a_i ∼ A, and b_i ← t with probability ε_{M(a_i,x),t}.
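The sketch below generates a sample stream in this more general model under an assumed toy instantiation: M(a, x) = a·x mod q and an error matrix ε given by discretized Gaussians. These concrete choices are not from the paper; they only illustrate the roles of M and ε.

```python
import numpy as np

# Toy instantiation of the [GRT18]-style framework; the sets A, X, the matrix
# M(a, x) = a*x mod q, and the error matrix eps are illustrative assumptions.
q = 5
A = np.arange(q)          # possible queries a
X = np.arange(q)          # possible targets x
rng = np.random.default_rng(1)

def error_row(v, sigma=1.0):
    """eps[v, :]: a distribution over observed labels t, concentrated around v."""
    w = np.exp(-0.5 * ((np.arange(q) - v) / sigma) ** 2)
    return w / w.sum()

eps = np.stack([error_row(v) for v in range(q)])  # eps[v, t] = Pr[b = t | M(a, x) = v]

x = rng.choice(X)                                 # hidden target x ~ X

def sample():
    """One stream element (a_i, b_i): a_i ~ A, then b_i <- t w.p. eps[M(a_i, x), t]."""
    a = rng.choice(A)
    v = (a * x) % q                               # M(a, x)
    b = rng.choice(q, p=eps[v])
    return a, b

stream = [sample() for _ in range(10)]
```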
We characterize the learning problems that admit such memory-sample lower bounds as "q-balanced", a generalization of the L2-extractor of [GRT18]. We also show a reduction from the q-balanced property to L2-extractors, which provides a general way to establish q-balancedness and thus apply our bounds.
Our proof builds on [GRT18] and the work of Garg, Kothari, Liu, Raz [APPROX/RANDOM 2021].
Coauthors
- Mingqi Lu (1)
- Junzhao Yang (1)