International Association for Cryptologic Research

CryptoDB

Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN

Authors:
Yu Yu , Shanghai Jiao Tong University, Shanghai Qizhi Institute, and Shanghai Key Laboratory of Privacy-Preserving Computation
Jiang Zhang , State Key Laboratory of Cryptology, China
Download:
DOI: 10.1007/978-3-030-84252-9_16
Presentation: Slides
Conference: CRYPTO 2021
Abstract: Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 90's. It further inspires the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2 - 1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness.

We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19], which is the main focus of this work. We first expand the underlying binary linear codes (of the NCP) to include not only the balanced code considered in [BLVW19] but also another code (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0 < c < 1$, constant-noise LPN enjoys sub-exponential hardness (with time complexity $2^{\Omega(n^{1-c})}$) assuming that the NCP at the low noise rate $n^{-c}$ is sub-exponentially hard in the worst case.
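For readers less familiar with the average-case problem in the abstract, the sketch below (not from the paper; the parameter choices n, q and tau are illustrative assumptions) generates LPN samples $(\mathbf{A}, \mathbf{A}s \oplus e)$ over GF(2) with NumPy:

```python
import numpy as np

def lpn_samples(n=128, q=256, tau=0.125, seed=0):
    """Return (A, b, s) with b = A*s + e (mod 2) and e ~ Bernoulli(tau)^q."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=n)           # secret s in {0,1}^n
    A = rng.integers(0, 2, size=(q, n))      # uniformly random public matrix A
    e = (rng.random(q) < tau).astype(int)    # i.i.d. Bernoulli(tau) noise bits
    b = (A @ s + e) % 2                      # noisy parities over GF(2)
    return A, b, s

A, b, s = lpn_samples()
# Search-LPN asks to recover s from (A, b); decisional LPN asks to distinguish
# (A, b) from (A, uniform). "Constant-noise" LPN fixes tau = Theta(1), while
# the high-noise regime of [BLVW19] takes tau = 1/2 - 1/poly(n).
```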
Video from CRYPTO 2021
BibTeX
@inproceedings{crypto-2021-31151,
  title={Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN},
  publisher={Springer-Verlag},
  doi={10.1007/978-3-030-84252-9_16},
  author={Yu Yu and Jiang Zhang},
  year=2021
}