IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
26 April 2024
Jingwen Chen, Qun Liu, Yanhong Fan, Lixuan Wu, Boyun Li, Meiqin Wang
In recent years, quantum technology has developed rapidly. As security analyses for symmetric ciphers continue to emerge, many require an evaluation of the resources needed to implement the encryption algorithm as a quantum circuit. In this regard, we propose the quantum circuit decision problem, which asks whether there exists a quantum circuit for a given permutation f using M ancilla qubits and no more than K quantum gates within circuit depth D. Firstly, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we propose an improved SAT-based model incorporating three metrics of quantum circuits. Based on SAT solvers, the model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal, and it proves the optimality of the circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides the optimal quantum circuit, requiring only 8 gates with a depth of 31. Furthermore, our model can be generalized to linear layers, improving the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.
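To make the decision question concrete, here is a minimal classical sketch (deliberately not the paper's SAT model): a brute-force breadth-first search that decides whether a 3-bit permutation can be implemented with at most K reversible gates from {NOT, CNOT, Toffoli}, with no ancilla qubits and ignoring depth. The gate set, function names, and bound below are illustrative choices only.

```python
# Toy classical analogue of the quantum circuit decision problem: decide whether
# a 3-bit permutation can be implemented with at most `limit` reversible gates
# drawn from {NOT, CNOT, Toffoli}, without ancilla qubits and ignoring depth.
# The paper instead encodes gate count, qubit count and depth into a SAT instance.
from itertools import combinations
from collections import deque

N = 3  # number of wires

def apply_not(s, t):
    return s ^ (1 << t)

def apply_cnot(s, c, t):
    return s ^ (1 << t) if (s >> c) & 1 else s

def apply_toffoli(s, c1, c2, t):
    return s ^ (1 << t) if ((s >> c1) & 1) and ((s >> c2) & 1) else s

GATES = []
for t in range(N):
    GATES.append(("X", (t,), lambda s, t=t: apply_not(s, t)))
for c in range(N):
    for t in range(N):
        if c != t:
            GATES.append(("CX", (c, t), lambda s, c=c, t=t: apply_cnot(s, c, t)))
for c1, c2 in combinations(range(N), 2):
    for t in range(N):
        if t not in (c1, c2):
            GATES.append(("CCX", (c1, c2, t),
                          lambda s, a=c1, b=c2, t=t: apply_toffoli(s, a, b, t)))

def min_gate_count(target, limit):
    """Smallest number of gates (at most `limit`) implementing `target`, or None."""
    identity = tuple(range(1 << N))
    target = tuple(target)
    seen = {identity: 0}
    queue = deque([identity])
    while queue:
        perm = queue.popleft()
        cost = seen[perm]
        if perm == target:
            return cost
        if cost == limit:
            continue
        for _, _, g in GATES:
            nxt = tuple(g(perm[x]) for x in range(1 << N))  # append gate g after the circuit
            if nxt not in seen:
                seen[nxt] = cost + 1
                queue.append(nxt)
    return None

# The permutation swapping inputs 6 and 7 is exactly one Toffoli gate.
print(min_gate_count([0, 1, 2, 3, 4, 5, 7, 6], limit=4))  # -> 1
```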
Huiqiang Liang, Haining Lu, Geng Wang
Machine learning as a service requires the client to trust the server and provide its own private information in order to use the service. Clients may worry that their private data is being collected by the server without effective oversight, while the server aims to ensure proper management of user data and thereby foster the advancement of its services. In this work, we focus on private decision tree evaluation (PDTE), which can alleviate the privacy concerns associated with classification tasks using decision trees. After the evaluation, apart from some hyperparameters, the client only receives the classification results of its private data from the server, while the server learns nothing.
Firstly, we propose three amortized-efficient private comparison algorithms, TECMP, RDCMP and CDCMP, based on leveled homomorphic encryption. They are non-interactive, high-precision (up to 26624 bits), many-to-many, and output-expressive, achieving an amortized cost of less than 1 ms at 32-bit precision, which is an order of magnitude faster than the state of the art. Secondly, we propose three batch PDTE schemes using our private comparisons: TECMP-PDTE, RDCMP-PDTE and CDCMP-PDTE. For the batch operations, we utilize a clear rows relation (CRR) algorithm, which obfuscates the positions and classification results of the different rows of data. Finally, on decision trees exceeding 1000 nodes with 16-bit precision, the amortized runtimes of TECMP-PDTE and RDCMP-PDTE are both more than 56$\times$ faster than the state of the art, while TECMP-PDTE with CRR still achieves a 14$\times$ speedup. Even on a single row and a tree of fewer than 100 nodes at 64-bit precision, TECMP-PDTE maintains performance comparable with current work.
Yuncong Zhang, Shi-Feng Sun, Dawu Gu
We propose a novel KZG-based sum-check scheme, dubbed $\mathsf{Losum}$, with optimal efficiency. In particular, its proving cost is one multi-scalar multiplication of size $k$ (the number of non-zero entries in the vector), its verification cost is one pairing plus one group scalar multiplication, and the proof consists of only one group element.
Using $\mathsf{Losum}$ as a component, we then construct a new lookup argument, named $\mathsf{Locq}$, which enjoys a smaller proof size and a lower verification cost than the state-of-the-art schemes $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++. Specifically, the proving cost of $\mathsf{Locq}$ is comparable to $\mathsf{cq}$, keeping the advantage that the proving cost is independent of the table size after preprocessing. For verification, $\mathsf{Locq}$ costs four pairings, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ require five, five and six pairings, respectively. For proof size, a $\mathsf{Locq}$ proof consists of four $\mathbb{G}_1$ elements and one $\mathbb{G}_2$ element; when instantiated with the BLS12-381 curve, the proof size of $\mathsf{Locq}$ is $2304$ bits, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ have $3840$, $3328$ and $2944$ bits, respectively. Moreover, $\mathsf{Locq}$ is zero-knowledge, like $\mathsf{cq}$+ and $\mathsf{cq}$++, whereas $\mathsf{cq}$ is not; $\mathsf{Locq}$ is more efficient even compared to the non-zero-knowledge (and more efficient) versions of $\mathsf{cq}$+ and $\mathsf{cq}$++.
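As a quick sanity check on the stated $\mathsf{Locq}$ proof size, assuming the usual compressed point encodings on BLS12-381 (a $\mathbb{G}_1$ element occupies 384 bits and a $\mathbb{G}_2$ element 768 bits):
$$\underbrace{4 \times 384}_{\mathbb{G}_1\ \text{elements}} + \underbrace{1 \times 768}_{\mathbb{G}_2\ \text{element}} = 2304 \text{ bits}.$$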
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, Zoe L. Jiang, Min Xie
Vector commitments (VC) have gained a lot of attention because of their wide use in applications such as blockchains and accumulators. Mercurial vector commitments and mercurial functional commitments (MFC), two significant variants of VC, are the central techniques for constructing more advanced cryptographic primitives, such as zero-knowledge sets and zero-knowledge functional elementary databases (ZK-FEDB).
However, current MFCs only support linear functions, which limits their applications: the resulting ZK-FEDB only supports linear function queries. Besides, to the best of our knowledge, the existing MFCs and ZK-FEDBs, including the one proposed by Zhang and Deng (ASIACRYPT '23) using RSA accumulators, are all set in the group model and cannot resist attacks by quantum computers.
To overcome these limitations, we formalize the first system model and security model of MFC for circuits. Then, we exploit specific properties of a new falsifiable assumption, namely the $\mathsf{BASIS}$ assumption proposed by Wee and Wu (EUROCRYPT '23), to construct the first lattice-based succinct mercurial functional commitment for circuits. As an application, we show that our constructions can be used to build the first lattice-based ZK-FEDB directly within the existing generic framework.
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct zero-knowledge proof (ZKP) systems, a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cube-root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square-root-verifier IPAs under the DL assumption. Our main objective is to overcome this square-root verifier complexity in the DL setting. To achieve this, we combine two distinct square-root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Furthermore, we provide a soundness proof for $\mathsf{Cougar}$ under the DL assumption.
Jialiu Cheng, Yi Wang, Rongmao Chen, Xinyi Huang
The revelations of Edward Snowden in 2013 rekindled concerns within the cryptographic community regarding the potential subversion of cryptographic systems. Bellare et al. (CRYPTO'14) introduced the notion of Algorithm Substitution Attacks (ASAs), which aim to covertly leak sensitive information by undermining individual cryptographic primitives. In this work, we delve into ASAs against protocols built upon cryptographic primitives. In particular, we revisit the existing ASA model proposed by Berndt et al. (AsiaCCS'22), providing a more fine-grained perspective. We introduce a novel ASA model tailored for protocols, capable of capturing a wide spectrum of subversion attacks. Our model features a modular representation of subverted parties within protocols, along with fine-grained definitions of undetectability. To illustrate the practicality of our model, we apply it to Lindell's two-party ECDSA protocol (CRYPTO'17), unveiling a range of ASAs targeting the protocol's parties with the objective of extracting secret key shares. Our work offers a comprehensive ASA model suited to cryptographic protocols, providing a useful framework for understanding ASAs against protocols.
Foteini Baldimtsi, Jiaqi Cheng, Rishab Goyal, Aayush Yadav
Blind signatures enable a receiver to obtain signatures on messages of its choice without revealing any message to the signer. Round-optimal blind signatures are designed as a two-round interactive protocol between a signer and a receiver. Incidentally, the choice of message is not important in many applications, and the message is routinely set to a random (unstructured) value by the receiver.
With the goal of designing more efficient blind signatures for such applications, Hanzlik (Eurocrypt '23) introduced a new variant called non-interactive blind signatures (NIBS). These allow a signer to asynchronously generate partial signatures for any recipient such that only the intended recipient can extract a blinded signature for a random message. This bypasses the two-round barrier for traditional blind signatures, yet enables many known applications.
Hanzlik provided new practical designs for NIBS from bilinear pairings. In this work, we investigate efficient NIBS with post-quantum security. We design the first practical NIBS, as well as non-interactive partially blind signatures called tagged NIBS, from lattice-based assumptions. We also propose a new generic paradigm for NIBS from circuit-private leveled homomorphic encryption achieving optimal-sized signatures (i.e., same as any non-blind signature). Finally, we propose new enhanced security properties for NIBS, that could be of practical and theoretical interest.
22 April 2024
Jie Xie, Yuncong Hu, Yu Yu
The Hadamard product is the point-wise product of two vectors. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses a linear number of cryptographic operations to generate a proof containing a logarithmic number of field elements. Verification takes a logarithmic number of cryptographic operations, with a constant number of pairings in a bilinear group. The construction of the scheme is based on the Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg, Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and, by carefully choosing random polynomials suitable for folding, we construct a Hadamard-product protocol from the inner-product protocol. This gives an alternative way to prove linear-algebra relations in linear time, with a better concrete proof size than previous works.
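For intuition, here is a minimal sketch of one standard way a Hadamard-product claim reduces to a single inner-product check with a random challenge; this only illustrates the relation between the two statements over a toy prime field and is not the paper's KZG-based protocol (field size and function names below are illustrative).

```python
# If c != a∘b over F_p, then <a∘b - c, r> = 0 holds only with probability 1/p
# over a uniformly random challenge vector r, so a Hadamard-product claim can
# be checked through one inner product.
import random

P = 2**61 - 1  # a Mersenne prime standing in for the pairing-friendly field

def hadamard(a, b):
    return [(x * y) % P for x, y in zip(a, b)]

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def check_hadamard_claim(a, b, c):
    """Accept iff <a∘b - c, r> == 0 for a random challenge vector r."""
    d = [(x - y) % P for x, y in zip(hadamard(a, b), c)]
    r = [random.randrange(P) for _ in d]
    return inner(d, r) == 0

a = [3, 1, 4, 1]
b = [2, 7, 1, 8]
print(check_hadamard_claim(a, b, hadamard(a, b)))   # True: honest claim
print(check_hadamard_claim(a, b, [6, 7, 4, 9]))     # almost surely False
```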
Gurgen Arakelov, Nikita Kaskov, Daria Pianykh, Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful Privacy-Enhancing Technology (PET) that enables computations on encrypted data without access to the secret key. While FHE holds immense potential for enhancing data privacy and security, building practical applications with it remains difficult. A significant barrier is the absence of easy-to-use, standardized components that developers can utilize as foundational building blocks. Addressing this gap requires constructing a comprehensive library of FHE components, a complex endeavor due to multiple inherent problems. We propose a competition-based approach for building such a library. More concretely, we present FHERMA, a new challenge platform that introduces black-box and white-box challenges, and fully automated evaluation of submitted FHE solutions. The initial challenges on the FHERMA platform are motivated by practical problems in machine learning and blockchain. The winning solutions get integrated into an open-source library of FHE components, which is available to all members of the PETs community under the Apache 2.0 license.
Ward Beullens, Pierre Briaud, Morten Øygarden
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures.
This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.
Min Xie, Peichen Ju, Yanqi Zhao, Zoe L. Jiang, Junbin Fang, Yong Yu, Xuan Wang
Delegatable Anonymous Credentials (DAC) enhance Anonymous Credentials (AC) by allowing credential owners to use credentials anonymously, as well as to anonymously delegate them to other users. In this work, we introduce a new concept called Delegatable Attribute-based Anonymous Credentials with Chainable Revocation (DAAC-CR), which extends the functionality of DAC by allowing 1) fine-grained attribute delegation, 2) issuers to restrict the delegation capabilities of the delegated users at a fine-grained level, including the depth of delegation and the sets of delegable attributes, and 3) chainable revocation, meaning that if a credential within the delegation chain is revoked, all subsequent credentials derived from it are also invalidated.
We provide a practical DAAC-CR instance based on a novel primitive that we identify as structure-preserving signatures on equivalence classes on vector commitments (SPSEQ-VC). This primitive may be of independent interest, and we detail an efficient construction. Compared to traditional DAC systems that rely on non-interactive zero-knowledge (NIZK) proofs, the credential size in our DAAC-CR instance is constant, independent of the length of the delegation chain and the number of attributes. We formally prove the security of our scheme in the generic group model and demonstrate its practicality through performance benchmarks.
Benoît Cogliati, Pierre-Alain Fouque, Louis Goubin, Brice Minaud
Hash-and-Sign with Retry is a popular technique to design efficient signature schemes from code-based or multivariate assumptions. Contrary to Hash-and-Sign signatures based on preimage-sampleable functions as defined by Gentry, Peikert and Vaikuntanathan (STOC 2008), trapdoor functions in code-based and multivariate schemes are not surjective. Therefore, the standard approach uses random trials. Kosuge and Xagawa (PKC 2024) coined it the Hash-and-Sign with Retry paradigm.
As many attacks have appeared on code-based and multivariate schemes, we think it is important for the ongoing NIST competition to look at the security proofs of these schemes. The original proof of Sakumoto, Shirai, and Hiwatari (PQCrypto 2011) was flawed, then corrected by Chatterjee, Das and Pandit (INDOCRYPT 2022). The fix is still not sufficient, as it only works for very large finite fields. A new proof in the Quantum ROM model was proposed by Kosuge and Xagawa (PKC 2024), but it is rather loose, even when restricted to the classical setting.
In this paper, we introduce several tools that yield tighter security bounds for Hash-and-Sign with Retry signatures in the classical setting. These include the Hellinger distance, stochastic dominance arguments, and a new combinatorial tool for transforming a proof in the non-adaptive setting into one in the adaptive setting. Ultimately, we obtain a sharp bound for the security of Hash-and-Sign with Retry signatures, applicable to various code-based and multivariate schemes. Focusing on NIST candidates, we apply these results to the MAYO, PROV, and modified UOV signature schemes. In most cases, our bounds are tight enough to apply with the real parameters of those schemes; in some cases, smaller parameters would suffice.
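For readers unfamiliar with the paradigm, here is a minimal runnable sketch of the Hash-and-Sign with Retry loop itself. The trapdoor below is a toy stand-in that simply fails on roughly half of all digests, loosely modelling a non-surjective map; real schemes such as UOV, MAYO or PROV instead attempt to invert a multivariate quadratic map, and all names and parameters here are illustrative assumptions.

```python
# Hash-and-Sign with Retry, sketched with a toy non-surjective trapdoor:
# hash the message with a fresh salt, try to invert the trapdoor on the
# digest, and retry with a new salt until the inversion succeeds.
import os
import hashlib

def H(message: bytes, salt: bytes) -> bytes:
    return hashlib.sha3_256(salt + message).digest()

def toy_trapdoor_invert(digest: bytes, secret_key: bytes):
    """Return a 'preimage' if the digest lies in the image of the toy map, else None."""
    if digest[0] & 0x80:  # about half of all digests are not hit: inversion fails
        return None
    return hashlib.sha3_256(secret_key + digest).digest()  # placeholder preimage

def sign_with_retry(message: bytes, secret_key: bytes, max_retries: int = 256):
    for _ in range(max_retries):
        salt = os.urandom(16)
        digest = H(message, salt)
        preimage = toy_trapdoor_invert(digest, secret_key)
        if preimage is not None:
            return salt, preimage  # the signature is (salt, preimage)
    raise RuntimeError("inversion kept failing; retry bound exceeded")

salt, sig = sign_with_retry(b"hello", secret_key=os.urandom(32))
print(len(salt), len(sig))
```

Security analyses of this paradigm must account for the distribution of the digests actually signed (those surviving the retry loop), which is where the tools mentioned above come into play.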
Zhengjun Cao, Lihua Liu
The coexistence of RSA and the elliptic curve cryptosystem (ECC) has continued for over forty years. It is well known that ECC has the advantage of shorter keys than RSA, which often leads a newcomer to assume that ECC also runs faster. In this report, we present Mathematica code for RSA-2048 and ECC-256, which visually shows that RSA-2048 runs three times faster than ECC-256. It is also estimated that RSA-2048 runs 48,000 times faster than the Weil pairing with embedding degree 2 and a fixed point.
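The report's code is in Mathematica; as a rough point of comparison, a similar benchmark can be sketched in Python with the `cryptography` package. The operations measured, iteration count, and padding choice below are our own assumptions, and which primitive comes out faster depends heavily on whether one measures the public-key operation (cheap for RSA with e = 65537) or the private-key operation (a full 2048-bit exponentiation).

```python
# Rough timing comparison of RSA-2048 vs. ECDSA over P-256 (illustrative only).
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, ec, padding

msg = b"benchmark message"
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ec_key = ec.generate_private_key(ec.SECP256R1())

def bench(label, fn, n=200):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {(time.perf_counter() - start) / n * 1e3:.3f} ms/op")

rsa_sig = rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
ec_sig = ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))

bench("RSA-2048 sign  ", lambda: rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256()))
bench("RSA-2048 verify", lambda: rsa_key.public_key().verify(
    rsa_sig, msg, padding.PKCS1v15(), hashes.SHA256()))
bench("P-256 sign     ", lambda: ec_key.sign(msg, ec.ECDSA(hashes.SHA256())))
bench("P-256 verify   ", lambda: ec_key.public_key().verify(
    ec_sig, msg, ec.ECDSA(hashes.SHA256())))
```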
Truman Welling, Onur Gunlu, Aylin Yener
This paper considers an information theoretic model of secure integrated sensing and communication, represented as a wiretap channel with action-dependent states. This model allows one to secure a part of the transmitted message against a sensed target that eavesdrops on the communication, while allowing transmitter actions to change the channel statistics. An exact secrecy-distortion region is given for a physically-degraded channel. Moreover, a finite-length achievability region is established for the model using an output statistics of random binning method, giving an achievable bound for low-latency applications.
Sam Gunn, Yael Tauman Kalai, Anand Natarajan, Agi Villanyi
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work).
Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
Léo Perrin
We have investigated the padding scheme of, and the applicability of algebraic attacks to, both XHash8 and XHash12. The only vulnerability of the padding scheme we could find is plausibly applicable only in the multi-rate setting, for which the authors make no claim; the padding scheme is safe otherwise.
For algebraic attacks relying on the computation and exploitation of a Gröbner basis, our survey of the literature suggests basing a security argument on the complexity of the variable elimination step rather than on that of the computation of the Gröbner basis itself. Indeed, it turns out that the latter complexity is hard to estimate, and is sometimes literally non-existent. Focusing on the elimination step, we propose a generalization of the "FreeLunch" approach which, under a reasonable conjecture about the behaviour of the degree of zero-dimensional polynomial ideals, is sufficient for us to argue that both XHash8 and XHash12 are safe against such attacks.
We implemented a simplified version of the generation (and resolution) of the corresponding set of equations in SAGE, which allowed us to validate our conjecture experimentally, and in fact to show that the lower bound it provides on the ideal degree is not tight, meaning that we are a priori underestimating the security of these permutations against the algebraic attacks we consider.
At this stage, if used as specified, these hash functions appear safe from algebraic attacks based on Gröbner bases.
Xiaoyang Dong, Boxin Zhao, Lingyue Qin, Qingliang Hou, Shun Zhang, Xiaoyun Wang
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on (generalized) sponge construction.
As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those whose claimed preimage security level is lower than their output size. Such hash functions have been very widely standardized (e.g., Ascon-Hash, PHOTON, etc.), but are rarely studied against preimage attacks. Even the recent MitM attack framework on the sponge construction by Qin et al. (EUROCRYPT 2023) cannot attack those hash functions. As the second contribution, our MitM collision attack framework provides a different tool for collision cryptanalysis of the sponge construction, whereas previous collision attacks on the sponge construction are mainly based on differential attacks.
Most of the results in this paper are the first third-party cryptanalysis results. Where previous cryptanalysis exists, our new results significantly improve upon it, for example improving the previous 2-round collision attack on Ascon-Hash to 4 rounds, and improving the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to a 4-round classical preimage attack.
Divesh Aggarwal, Leong Jin Ming, Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Peikert, Langlois, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems, specifically the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem, to LWE. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best success probability achievable by any probabilistic polynomial-time algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
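For reference, the standard formulations of the two problems related here (generic definitions, not specific to this paper's parameterization): a search-LWE instance with secret $\mathbf{s} \in \mathbb{Z}_q^n$ consists of samples
$$(\mathbf{a}_i, b_i) \in \mathbb{Z}_q^n \times \mathbb{Z}_q, \qquad b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \bmod q,$$
with $\mathbf{a}_i$ uniform and $e_i$ drawn from a narrow error distribution, the goal being to recover $\mathbf{s}$ (decision-LWE asks only to distinguish such samples from uniform pairs). In BDD, one is given a lattice $\mathcal{L}$ and a target $\mathbf{t}$ promised to lie within distance $\alpha \cdot \lambda_1(\mathcal{L})$ of $\mathcal{L}$ for some $\alpha < 1/2$, and must find the (unique) closest lattice vector.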
Amos Beimel, Oriol Farràs, Oded Nir
In a secret-sharing scheme, a secret is shared among $n$ parties such that the secret can be recovered by authorized coalitions, while it should be kept hidden from unauthorized coalitions. In this work we study secret-sharing for $k$-slice access structures, in which coalitions of size $k$ are either authorized or not, larger coalitions are authorized, and smaller ones are unauthorized. Known schemes for these access structures had smaller shares for small $k$'s than for large ones; hence our focus is on "high" $(n-k)$-slices where $k$ is small.
Our work is inspired by several motivations: 1) obtaining efficient schemes (with perfect or computational security) for natural families of access structures; 2) making progress in the search for better schemes for general access structures, which are often based on schemes for slice access structures; 3) proving or disproving the conjecture by Csirmaz (J. Math. Cryptol., 2020) that an access structure and its dual can be realized by secret-sharing schemes with the same share size.
The main results of this work are:
- Perfect schemes for high slices. We present a scheme for $(n-k)$-slices with information-theoretic security and share size $kn\cdot 2^{\tilde{O}(\sqrt{k \log n})}$. Using a different scheme with slightly larger shares, we prove that the ratio between the optimal share size of $k$-slices and that of their dual $(n-k)$-slices is bounded by $n$.
- Computational schemes for high slices. We present a scheme for $(n-k)$-slices with computational security and share size $O(k^2 \lambda \log n)$ based on the existence of one-way functions. Our scheme makes use of a non-standard viewpoint on Shamir secret-sharing (the textbook scheme is sketched below for reference) that allows sharing many secrets with different thresholds at low cost.
- Multislice access structures. $(a:b)$-multislices are access structures that behave similarly to slices, but are unconstrained on coalitions in a wider range of cardinalities between $a$ and $b$. We use our new schemes for high slices to realize multislices with the same share sizes that their duals have today. This solves an open question raised by Applebaum and Nir (Crypto, 2021), and makes it possible to realize hypergraph access structures chosen uniformly at random under a natural set of distributions with share size $2^{0.491n+o(n)}$, compared to the previous result of $2^{0.5n+o(n)}$.
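As background for the computational construction, here is plain $(t, n)$ Shamir secret sharing over a prime field, the textbook scheme on which the abstract's "non-standard viewpoint" builds; this is not the paper's multi-threshold variant, and the field size, variable names, and parameters are illustrative.

```python
# Textbook (t, n)-Shamir secret sharing over a prime field.
import random

P = 2**127 - 1  # prime modulus (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the degree-(t-1) polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # -> 123456789
```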
Henry Bambury, Phong Q. Nguyen
Lattice-based cryptography typically uses lattices with special properties to improve efficiency. We show how blockwise reduction can exploit lattices with special geometric properties, effectively reducing the required blocksize for solving the shortest vector problem to half of the lattice's rank, and, in the case of the hypercubic lattice $\mathbb{Z}^n$, further relaxing the approximation factor of blocks to $\sqrt{2}$. We study both provable algorithms and the well-known heuristic primal attack, in the case where the lattice has a first minimum that is almost as short as that of the hypercubic lattice $\mathbb{Z}^n$. Remarkably, these near-hypercubic lattices cover Falcon and most concrete instances of the NTRU cryptosystem: this is the first provable result showing that breaking NTRU lattices can be reduced to finding shortest lattice vectors in halved dimension, thereby providing a positive answer to a conjecture of Gama, Howgrave-Graham and Nguyen at Eurocrypt 2006.