International Association for Cryptologic Research

CryptoDB

Charles Meyer-Hilfiger

Publications

Year / Venue / Title

2024 / EUROCRYPT / Reduction from sparse LPN to LPN, Dual Attack 3.0
The security of code-based cryptography relies primarily on the hardness of decoding generic linear codes. Until very recently, all the best algorithms for solving the decoding problem were information set decoders ($\mathsf{ISD}$). However, a new algorithm called $\mathsf{RLPN}$ decoding, which relies on a completely different approach, was recently introduced, and it has been shown that $\mathsf{RLPN}$ significantly outperforms $\mathsf{ISD}$ decoders for a rather large range of rates. The $\mathsf{RLPN}$ decoder relies on two ingredients: first, reducing decoding to an underlying $\mathsf{LPN}$ problem, and second, efficiently computing many parity-checks of small weight when restricted to some positions. We revisit $\mathsf{RLPN}$ decoding by noticing that, in this algorithm, decoding is in fact reduced to a sparse-$\mathsf{LPN}$ problem, namely one whose secret has small Hamming weight. Our new approach consists this time in making an additional reduction from sparse-$\mathsf{LPN}$ to plain-$\mathsf{LPN}$ with a coding approach inspired by $\mathsf{coded}$-$\mathsf{BKW}$. It significantly outperforms $\mathsf{ISD}$ and $\mathsf{RLPN}$ for code rates smaller than $\hardestrate$. This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography. We depart completely from the traditional analysis of this kind of algorithm, which uses a certain number of independence assumptions that have recently been strongly questioned in the lattice setting. Instead, we give a formula for the $\mathsf{LPN}$ noise relying on duality, which allows the behavior of the algorithm to be analyzed through the analysis of a certain weight distribution. By using only a minimal assumption whose validity has been verified experimentally, we are able to justify the correctness of our algorithm. This key tool, namely the duality formula, can be readily adapted to the lattice setting and is shown to give a simple explanation for some phenomena observed on dual attacks in lattices in [DP23].
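A rough illustration of the sparse-$\mathsf{LPN}$-to-plain-$\mathsf{LPN}$ step described above (a sketch of the generic $\mathsf{coded}$-$\mathsf{BKW}$ idea under notation introduced here for illustration, not the paper's exact construction): given samples $(\mathbf{a}, \langle \mathbf{a},\mathbf{s}\rangle + e)$ with a sparse secret $\mathbf{s}\in\mathbb{F}_2^n$, pick an auxiliary $[n,k]$ code with generator matrix $\mathbf{G}$ and decode each $\mathbf{a}$ as $\mathbf{a}=\mathbf{m}\mathbf{G}+\boldsymbol{\varepsilon}$ with $\boldsymbol{\varepsilon}$ of small weight. Then
$$\langle \mathbf{a},\mathbf{s}\rangle + e \;=\; \langle \mathbf{m},\,\mathbf{G}\mathbf{s}^{\top}\rangle \;+\; \underbrace{\langle \boldsymbol{\varepsilon},\mathbf{s}\rangle + e}_{\text{new noise}},$$
which is a plain-$\mathsf{LPN}$ sample with the shorter secret $\mathbf{G}\mathbf{s}^{\top}\in\mathbb{F}_2^{k}$. Since both $\boldsymbol{\varepsilon}$ and $\mathbf{s}$ have small Hamming weight, the extra term $\langle \boldsymbol{\varepsilon},\mathbf{s}\rangle$ is biased towards $0$, so the resulting samples can still be exploited by Fourier techniques.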

2023 / TCC / Rigorous Foundations for Dual Attacks in Coding Theory
Charles Meyer-Hilfiger, Jean-Pierre Tillich
Dual attacks aiming at decoding generic linear codes have recently been found to outperform, for certain parameters, information set decoding techniques, which for 60 years have been the dominant tool for solving this problem and for choosing the parameters of code-based cryptosystems. However, the analysis of the complexity of these dual attacks relies on some unproven assumptions that are not even fully backed up by experimental evidence. These dual attacks can actually be viewed as the code-based analogue of dual attacks in lattice-based cryptography. There too, dual attacks have in the past years been found to be strong competitors to primal attacks, and a controversy has emerged over whether similar heuristics, for instance on the independence of certain random variables, really hold. We show that dual attacks in coding theory can be studied by first providing a simple alternative expression of the fundamental quantity used in these attacks. We then show that this expression can be studied without relying on any independence assumptions. This study leads us to discover that there is indeed a problem with the latest and most powerful dual attack proposed in [CDMT22]: for the parameters chosen in this algorithm, false candidates are produced which are not predicted by the analysis given there, which relies on independence assumptions. We then suggest a slight modification of this algorithm consisting in a further verification step, analyze it thoroughly, provide experimental evidence that our analysis is accurate, and show that the complexity claims made in [CDMT22] are indeed valid for this modified algorithm. This approach provides a simple methodology for studying dual attacks rigorously, which could turn out to be useful for further developing the subject.
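As a rough sketch of the kind of fix mentioned above (an illustrative pattern only; the names, interface, and threshold below are invented for illustration and are not the paper's actual verification procedure), the further verification step amounts to re-testing every surviving candidate on parity-check equations that were not used to score it, so that false candidates produced by the scoring phase get filtered out:

def parity(x):
    # <a, s> over F_2, with a and the candidate packed as integer bitmasks
    return bin(x).count("1") & 1

def verify(candidate, fresh_a, fresh_b, threshold=0.6):
    # Illustrative verification step: accept a candidate secret only if it agrees
    # with fresh noisy parity equations clearly more often than chance (1/2).
    agreements = sum(parity(a & candidate) == b for a, b in zip(fresh_a, fresh_b))
    return agreements >= threshold * len(fresh_b)

# Usage sketch: candidates surviving the dual-attack scoring get filtered once more.
# survivors = [s for s in survivors if verify(s, fresh_a, fresh_b)]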

2022 / ASIACRYPT / Statistical Decoding 2.0: Reducing Decoding to LPN
The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even its best variants performed poorly when compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able to significantly outperform information set decoding at code rates smaller than 0.3. This gives, for the first time in 60 years, a better decoding algorithm for a significant range of rates which does not belong to the ISD family.
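To make the last step concrete, here is a minimal sketch of how a small LPN instance can be solved by exhaustive correlation via a fast Walsh-Hadamard transform, which is the standard way such Fourier steps are implemented; the parameters and sample generation below are toy assumptions for illustration, not the paper's optimized procedure.

import numpy as np
rng = np.random.default_rng(0)

k, N, tau = 12, 20000, 0.125          # secret length, number of samples, noise rate (toy values)
secret = rng.integers(0, 2, size=k)

# Generate LPN samples b = <a, s> + e (mod 2); each vector a is also packed as an integer index.
A = rng.integers(0, 2, size=(N, k))
e = rng.random(N) < tau
b = (A @ secret + e) % 2
idx = A @ (1 << np.arange(k))

# Fourier step: accumulate (-1)^b per value of a, then apply a fast Walsh-Hadamard
# transform; the transform evaluates the correlation of every candidate secret at once.
f = np.zeros(1 << k)
np.add.at(f, idx, 1.0 - 2.0 * b)

def fwht(v):
    # In-place fast Walsh-Hadamard transform of a length-2^k vector.
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            x, y = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return v

scores = fwht(f)
guess = int(np.argmax(scores))         # the true secret maximizes the correlation
recovered = (guess >> np.arange(k)) & 1
print("recovered secret matches:", np.array_equal(recovered, secret))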