IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
02 June 2025
Alexandr Karenin, Elena Kirshanova, Julian Nowakowski, Eamonn W. Postlethwaite, Fernando Virdia
Recently [Wenger et al., IEEE S&P 2025] claimed that the 'Cool and Cruel' (C+C) approach to solving LWE with sparse secrets [Nolte et al., AFRICACRYPT 2024] outperforms other approaches, in particular the well-established primal attack.
In this work we show: i. that C+C is an instantiation of a known dual attack [Albrecht, EUROCRYPT 2017]; ii. experimental evidence that the primal attack can outperform C+C in regimes similar to those studied by Wenger et al.; and iii. both theoretical justification and experimental evidence that C+C is a consequence of a basis profile called the Z-shape.
To prove i. we introduce a framework for dimension reduction in bounded distance decoding problems that may be of independent interest. For ii. we provide an open source implementation of the primal attack that is properly parametrised for short, sparse ternary secret LWE and guesses portions of the secret, along with an error analysis for a rounded variant of LWE that proves useful for practical cryptanalysis. Given iii. we falsify a claim of Nolte et al.
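As a toy illustration of the setting studied here, the following sketch (illustrative parameters only, not the authors' code) generates an LWE instance with a short, sparse ternary secret of fixed Hamming weight:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, q, hw = 64, 128, 3329, 8  # toy parameters; hw = Hamming weight of the secret

# Sparse ternary secret: exactly `hw` nonzero entries drawn from {-1, +1}.
s = np.zeros(n, dtype=np.int64)
support = rng.choice(n, size=hw, replace=False)
s[support] = rng.choice([-1, 1], size=hw)

# LWE samples: b = A s + e (mod q) with small bounded error
# (a stand-in for a discrete Gaussian).
A = rng.integers(0, q, size=(m, n), dtype=np.int64)
e = rng.integers(-2, 3, size=m)
b = (A @ s + e) % q

# Sanity check: b - A s (mod q, centered) recovers the small error term.
diff = (b - A @ s) % q
centered = np.where(diff > q // 2, diff - q, diff)
assert np.array_equal(centered, e)
```

Recovering `s` from `(A, b)` alone is exactly the sparse-secret LWE problem that both the C+C and primal attacks target.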
Elizabeth Crites, Alistair Stewart
The standard notion of security for threshold signature schemes is static security, where the set of corrupt parties is assumed to be fixed before protocol execution. In this model, the adversary may corrupt up to t−1 out of a threshold of t parties. A stronger notion of security for threshold signatures considers an adaptive adversary, who may corrupt parties dynamically based on its view of the protocol execution, learning the corrupted parties' secret keys as well as their states. Adaptive security of threshold signatures has become an active area of research recently due to ongoing standardization efforts. Of particular interest is full adaptive security, the analogue of static security, where the adversary may adaptively corrupt a full t−1 parties.
We present a plausible attack on the full adaptive security of threshold Schnorr signature schemes with public key shares of the form $pk_i = g^{sk_i},$ where all secret keys $sk_i$ lie on a polynomial. We show that a wide range of threshold Schnorr signature schemes, including all variants of FROST, Sparkle, and Lindell’22, cannot be proven fully adaptively secure without modifications or assuming the hardness of a search problem that we define in this work. We then prove a generalization that extends below t−1 adaptive corruptions.
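The key-share structure referenced above, i.e. $pk_i = g^{sk_i}$ with all $sk_i$ lying on a polynomial, can be sketched with toy, insecure parameters (the group, threshold, and polynomial below are illustrative assumptions, not from the paper):

```python
# Toy sketch (illustrative, insecure parameters): Shamir secret keys on a
# polynomial, with public key shares pk_i = g^{sk_i}.
r = 11   # prime order of the group (exponent field)
p = 23   # p = 2r + 1; g generates the order-r subgroup of Z_p^*
g = 2

def f(x):  # f(X) = 7 + 3X + 5X^2 over F_r; f(0) is the group secret key
    return (7 + 3 * x + 5 * x * x) % r

sk = {i: f(i) for i in range(1, 6)}      # secret key shares for 5 parties
pk = {i: pow(g, sk[i], p) for i in sk}   # public key shares pk_i = g^{sk_i}

def lagrange_coeff(i, xs):
    """Lagrange coefficient for interpolation at 0 over F_r."""
    num, den = 1, 1
    for j in xs:
        if j != i:
            num = (num * (-j)) % r
            den = (den * (i - j)) % r
    return (num * pow(den, -1, r)) % r

# Any t = 3 shares reconstruct the secret key f(0)...
xs = [1, 2, 4]
sk0 = sum(lagrange_coeff(i, xs) * sk[i] for i in xs) % r
assert sk0 == f(0)

# ...and the same reconstruction works "in the exponent" on public shares.
pk0 = 1
for i in xs:
    pk0 = (pk0 * pow(pk[i], lagrange_coeff(i, xs), p)) % p
assert pk0 == pow(g, f(0), p)
```

It is precisely this public, polynomial-structured relation among the $pk_i$ that the attack described above exploits.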
Hongxiao Wang, Ron Steinfeld, Markku-Juhani O. Saarinen, Muhammed F. Esgin, Siu-Ming Yiu
A multi-message multi-recipient Public Key Encryption (mmPKE) enables batch encryption of multiple messages for multiple independent recipients in one go, significantly reducing costs, particularly bandwidth, compared to the trivial solution of encrypting each message individually. This capability is especially critical in the post-quantum setting, where ciphertext length is typically significantly larger than the corresponding plaintext.
In this work, we first observe that the generic construction of mmPKE from reproducible PKE proposed by Bellare et al. (PKC ’03) does not apply in the lattice-based setting because existing lattice-based PKE schemes do not fit the notion of reproducible PKE. To this end, we first extend their construction by proposing a new variant of PKE, named extended reproducible PKE (XR-PKE), which enables the reproduction of ciphertexts via additional hints. However, standard lattice-based PKE schemes, such as Kyber (EuroS&P '18), do not readily satisfy the XR-PKE requirements. To construct XR-PKE from lattices, we introduce a novel technique for precisely estimating the impact of such hints on the ciphertext security while also establishing suitable parameters. This enables us to instantiate the first CPA-secure mmPKE and multi-message multi-recipient Key Encapsulation Mechanism (mmKEM) from the standard Module Learning with Errors (MLWE) lattice assumption, named mmCipher-PKE and mmCipher-KEM, respectively. We then extend our work to the identity-based setting and construct the first mmIBE and mmIB-KEM schemes. As a bonus contribution, we explore generic constructions of adaptively secure mmPKE, achieving security against adaptive corruption and chosen-ciphertext attacks.
We also provide an efficient implementation and thorough evaluation of the practical performance of our mmCipher. Our results show that mmCipher provides significant bandwidth and computational savings in practice, compared to the state-of-the-art. For example, for 1024 recipients, our mmCipher-KEM achieves a 23 to 45 times reduction in bandwidth overhead, reaching within 4 to 9% of the plaintext length (near-optimal bandwidth), while also offering a 3 to 5 times reduction in computational cost.
Zhengjun Cao, Lihua Liu
We show that the Negi-Kumar certificateless ring signature scheme [Wirel. Pers. Commun. 134(4): 1987-2011 (2024)] is insecure against forgery attacks. The signer's public key $PK_j$ and secret key $PSK_j$ are simply invoked to compute the hash value $H_{2_j}=h_5(m_j\|PSK_j\|PK_j\|t_j)$, which cannot be retrieved by the verifier for checking their dependency. The explicit dependency between the public key and secret key is not properly used to construct intractable problems, such as Elliptic Curve Discrete Logarithm (ECDL), Computational Diffie-Hellman (CDH), and Decisional Diffie-Hellman (DDH). An adversary can thus find an efficient signing algorithm functionally equivalent to the valid signing algorithm. The findings in this note could be helpful for newcomers who are not familiar with the design techniques for certificateless ring signatures.
Naman Kumar, Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to jointly establish a cryptographic key, where the only information shared in advance is a low-entropy password. The first efficient PAKE protocol whose security does not rely on the random oracle model is the one by Katz, Ostrovsky and Yung (KOY, EUROCRYPT 2001). Unfortunately, the KOY protocol has only been proven secure in the game-based setting, and it is unclear whether KOY is secure in the stronger Universal Composability (UC) framework, which is the current security standard for PAKE.
In this work, we present a thorough study of the UC-security of KOY. Our contributions are two-fold:
1. We formally prove that the KOY protocol is not UC-secure; 2. We then show that the UC-security of KOY holds in the Algebraic Group Model, under the Decisional Square Diffie-Hellman (DSDH) assumption.
Overall, we characterize the exact conditions under which KOY is UC-secure. Interestingly, the DSDH assumption is stronger than DDH under which KOY can be proven game-based secure, which reveals some subtle gaps between the two PAKE security notions that have never been studied.
Yanxue Jia, Debajyoti Das, Wenhao Zhang, Aniket Kate
While popular messaging apps already offer end-to-end confidentiality, end-to-end metadata privacy is still far from being practical. Although several metadata-hiding systems have been developed and some, like Tor, have become popular, the proposed solutions fall short in one or more aspects: the Tor network is prone to easy low-resource attacks, and most others focus solely on anonymity for senders or receivers, but not both. Some recent solutions do consider end-to-end anonymity; however, they place significant restrictions on how users use the system. In particular, receivers must stay online or trust online servers that receive messages on their behalf. This work presents a scalable end-to-end anonymous messaging system, $\mathsf{ORAM}^{-}$, that overcomes the aforementioned issues and restrictions. It stems from a key observation that combining the recently emerged oblivious message retrieval (OMR) primitive with oblivious shuffling can offer the desired end-to-end anonymity without severely restricting the number of messages a sender may send or a receiver may receive. We build our solution using two non-colluding servers, the recent OMR protocol HomeRun, and a compatible oblivious shuffle protocol. We then extend our solution to allow larger messages by employing a novel two-server distributed oblivious RAM technique, called $\mathsf{ORAM}^{-}$. Our performance analysis demonstrates that the performance improvement brought by $\mathsf{ORAM}^{-}$ grows with the number and size of messages. Specifically, for $2^{20}$ messages of size 1KB, our scheme needs only $5.577$ s to transmit a message.
Lucas Piske, Jaspal Singh, Ni Trieu, Vladimir Kolesnikov, Vassilis Zikas
A two-party fuzzy private set intersection (PSI) protocol between Alice and Bob with input sets $A$ and $B$ allows Alice to learn nothing more than the points of Bob that are ``$\delta$-close'' to its points under some distance metric $\texttt{dist}$. More formally, Alice learns only the set $\{\, b \mid \texttt{dist}(a,b) \leq \delta,\ a \in A,\ b \in B \,\}$ for a predefined threshold $\delta$ and distance metric $\texttt{dist}$, while Bob learns nothing about Alice's set. Fuzzy PSI is a valuable privacy tool in scenarios where private set intersection needs to be computed over imprecise or measurement-based data, such as GPS coordinates or healthcare data. Previous approaches to fuzzy PSI rely on asymmetric cryptographic primitives, generic two-party computation (2PC) techniques like garbled circuits, or function secret sharing methods, all of which are computationally intensive and lead to poor concrete efficiency.
This work introduces a new modular framework for fuzzy PSI, primarily built on efficient symmetric-key primitives. Our framework reduces the design of efficient fuzzy PSI to a novel variant of oblivious transfer (OT), which we term distance-aware random OT (da-ROT). This variant enables the sender to obtain two random strings $(r_0, r_1)$, while the receiver obtains one of these values $r_b$, depending on whether the receiver's input keyword $a$ and the sender's input keyword $b$ are close in some metric space, i.e., $\texttt{dist}(a,b) \leq \delta$. The da-ROT can be viewed as a natural extension of traditional OT, where the condition (choice bit) is known to the receiver. We propose efficient constructions for da-ROT based on standard OT techniques tailored for small domains, supporting distance metrics such as the Chebyshev norm, the Euclidean norm, and the Manhattan norm.
By integrating these da-ROT constructions, our fuzzy PSI framework achieves up to a $14\times$ reduction in communication cost and up to a $54\times$ reduction in computation cost compared to previous state-of-the-art protocols, across input set sizes ranging from $2^8$ to $2^{16}$. Additionally, we extend our framework to compute fuzzy PSI cardinality and fuzzy join from traditional PSI-related functionalities. All proposed protocols are secure in the semi-honest model.
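The da-ROT functionality described above can be sketched as an ideal (trusted-party) computation; the role assignment and the Chebyshev metric below are illustrative assumptions, not the authors' protocol:

```python
import secrets

def chebyshev(a, b):
    """Chebyshev (L-infinity) distance between two points."""
    return max(abs(x - y) for x, y in zip(a, b))

def da_rot(a, b, delta, dist=chebyshev):
    """Ideal distance-aware random OT (trusted-party view, no cryptography):
    the sender gets (r0, r1); the receiver gets r1 iff dist(a, b) <= delta,
    and r0 otherwise, learning nothing about the other string."""
    r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
    rb = r1 if dist(a, b) <= delta else r0
    return (r0, r1), rb

# Two 2-D points within Chebyshev distance 3: the receiver learns r1.
(sender_r0, sender_r1), receiver_r = da_rot((10, 20), (12, 18), delta=3)
assert receiver_r == sender_r1

# Far-apart points: the receiver learns r0 instead.
(s0, s1), rb = da_rot((0, 0), (100, 100), delta=3)
assert rb == s0
```

A real da-ROT protocol realizes this functionality without any trusted party, which is what the constructions in the paper provide.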
Benny Applebaum, Eliran Kachlon
Suppose that we are given a weak \emph{Non-Interactive Zero-Knowledge} (NIZK) proof system for NP with non-negligible soundness and zero-knowledge errors, denoted by $\alpha$ and $\beta$, respectively. Is it possible to reduce these errors to a negligible level? This problem, known as NIZK amplification, was introduced by Goyal, Jain, and Sahai (Crypto'19) and was further studied by Bitansky and Geier (Crypto'24).
The latter work provides amplification theorems for proofs and arguments, assuming the existence of one-way functions and public-key encryption, respectively. Unfortunately, their results only apply when the security level, $1 - (\alpha + \beta)$, is a constant bounded away from zero. Amplifying NIZK with an inverse polynomial security level remains an open problem and was stated as the main open question in both works.
In this work, we resolve the NIZK amplification problem and show how to amplify any non-trivial NIZK proof system that has a noticeable, inverse-polynomial level of security. As in previous works, we amplify proofs and arguments assuming the existence of one-way functions and public-key encryption, respectively. Furthermore, assuming the existence of collision-resistant hash functions, we preserve, for the first time, properties such as statistical zero-knowledge and proof succinctness.
Our main technical contribution is a new \emph{leakage-resilient secure multiparty} protocol that computes any public-output functionality with information-theoretic security against an adversary that corrupts an arbitrary subset of parties and obtains bounded leakage from each honest party. Our protocol operates in the pairwise correlated randomness model. Previous works relied on stronger setup assumptions in the form of $n$-wise correlations and either supported a smaller corruption threshold or suffered from an exponential dependency on the number of parties. To transform our protocol into a NIZK amplifier, we introduce a new intermediate notion of \emph{leakage-resilient NP secret sharing}, that may be of independent interest.
Wilmar Bolaños, Antti Haavikko, Rodrigo M. Sánchez-Ledesma
This paper proves the RLWE-PLWE equivalence for the maximal real subfields of the cyclotomic fields with conductor $n = 2^r p^s$, where $p$ is an odd prime, and $r \geq 0$ and $s \geq 1$ are integers. In particular, we show that the canonical embedding as a linear transform has a condition number bounded above by a polynomial in $n$. In addition, we describe a fast multiplication algorithm in the ring of integers of these real subfields. The multiplication algorithm uses the fast Discrete Cosine Transform (DCT) and has computational complexity $\mathcal{O}(n \log n)$. This work extends the results of Ahola et al., where the same claims are proved for a single prime $p = 3$.
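As a generic stand-in for the transform-based multiplication described above (the paper's algorithm uses a fast DCT adapted to the ring of integers of the real subfield; the FFT below is only an analogous illustration), $\mathcal{O}(n \log n)$ polynomial multiplication looks like this:

```python
import numpy as np

def poly_mul_fft(f, g):
    """Multiply integer polynomials in O(n log n) via the FFT.
    (A generic transform-based multiplier for illustration; the paper
    instead uses a fast DCT matched to the real-subfield ring.)"""
    n = len(f) + len(g) - 1
    size = 1 << (n - 1).bit_length()      # next power of two >= n
    F = np.fft.rfft(np.asarray(f, float), size)
    G = np.fft.rfft(np.asarray(g, float), size)
    # Pointwise product in the transform domain, then round back to integers.
    return np.rint(np.fft.irfft(F * G, size)[:n]).astype(np.int64)

f = [1, 2, 3]          # 1 + 2x + 3x^2
g = [4, 0, 5, 6]       # 4 + 5x^2 + 6x^3
assert np.array_equal(poly_mul_fft(f, g), np.convolve(f, g))
```

The point of the comparison: the naive convolution costs $\mathcal{O}(n^2)$ operations, while the transform-domain product costs $\mathcal{O}(n \log n)$, the same complexity the paper achieves with its DCT-based algorithm.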
Pedro Branco, Giulio Malavolta, Zayd Maradni
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
Koji Nuida
Private Simultaneous Messages (PSM) is a kind of secure multiparty computation with a minimal interaction pattern and minimal security requirement. A PSM protocol is said to have universal reconstruction for a given function family if the algorithm of the referee (the output party) is independent of the function to be computed and the referee cannot infer the function from a protocol execution. In a recent work, Eriguchi and Shinagawa (EUROCRYPT 2025) proposed a compiler to obtain a PSM protocol for symmetric functions from PSM protocols with universal reconstruction for symmetric functions with smaller domains. They also constructed the latter PSM protocols with universal reconstruction, by which the former PSM protocol achieves communication complexity better than the previously known protocols. In this paper, we construct the latter PSM protocols with universal reconstruction for symmetric functions more efficiently; the communication complexity is exponentially (in the input range) smaller than that of the protocols by Eriguchi and Shinagawa. As a consequence, we also obtain a PSM protocol for symmetric functions that is more efficient than their protocol. Technically, a main ingredient of their protocols is a linear and injective encoding of histograms of the input elements, and our improvement is realized by finding a more efficient encoding of the histograms.
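A minimal sketch of the histogram viewpoint used above (illustrative only; the base-$(n+1)$ encoding below is a hypothetical example of a linear, injective histogram encoding, not the authors' construction):

```python
# A symmetric function depends only on the histogram of its inputs,
# not on which party holds which value.
def histogram(inputs, domain_size):
    h = [0] * domain_size
    for x in inputs:
        h[x] += 1
    return h

def f_majority(inputs, domain_size=3):
    h = histogram(inputs, domain_size)
    return max(range(domain_size), key=lambda v: h[v])

# Permuting the inputs leaves the output unchanged.
assert f_majority([2, 0, 2, 1, 2]) == f_majority([2, 2, 1, 0, 2]) == 2

# An illustrative linear and injective encoding of histograms: base-(n+1)
# positional encoding for n parties, so every digit stays below the base
# and distinct histograms receive distinct codes.
def encode(h, n):
    return sum(c * (n + 1) ** i for i, c in enumerate(h))

n = 5
h1 = histogram([2, 0, 2, 1, 2], 3)
h2 = histogram([1, 1, 0], 3)
hsum = [a + b for a, b in zip(h1, h2)]
# Linearity: the encoding of a sum of histograms is the sum of the encodings.
assert encode(hsum, n) == encode(h1, n) + encode(h2, n)
```

Linearity is what lets each party encode its own contribution locally; the size of such an encoding drives the communication complexity the paper improves.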
Linru Zhang, Xiangning Wang, Jun Jie Sim, Zhicong Huang, Jiahao Zhong, Huaxiong Wang, Pu Duan, Kwok Yan Lam
The advent of Large Language Models (LLMs) has brought about a new wave of productivity, revolutionizing business operations while keeping costs relatively low. The human-like interface of LLMs enables them to be easily integrated with business functions, thereby freeing up precious human resources for more complex, valuable tasks. However, due to the intensive computation and memory requirements of LLM inference, it is preferable and cheaper to deploy LLMs with Cloud Service Providers (CSPs) that offer high-performance computation resources and low-latency networking. Nevertheless, privacy concerns have been raised about the possibility of data leakage to the CSP. In this work, we seek to address such privacy concerns through the use of Fully Homomorphic Encryption (FHE). FHE enables the CSP to work on data in its encrypted form, thus ensuring that the data stay private and secure. We propose an implementation of LLM inference with FHE. While a series of prior works have demonstrated that it is possible to execute LLM inference in a private manner, it remains a challenge to design a solution that is practical.
Our contributions are as follows: We provide the first end-to-end open-source implementation of non-interactive transformer inference with FHE. We report an amortized time of 9.6 minutes for one input with 128 tokens when evaluating the BERT model on CPU. Our packing methods for encrypted matrices remove the need to repack ciphertexts between encrypted matrix multiplication and activation layers. Additionally, we introduce interleaved batching to eliminate the internal rotations during ciphertext matrix multiplications. Our approach also avoids HE rotations in evaluations of the softmax and layerNorm, leading to speedups of 4.22× and 122× over existing works, respectively. Our implementation supports arbitrary token lengths, in contrast to existing solutions that require a full token embedding. Our implementation can be found on GitHub.
Reo Eriguchi, Keitaro Hiwatashi
Secure multiparty computation (MPC) is a cryptographic primitive which enables multiple parties to jointly compute a function without revealing any extra information on their private inputs. Bottleneck complexity is an efficiency measure that captures the load-balancing aspect of MPC protocols, defined as the maximum amount of communication required by any party. In this work, we study the problem of establishing lower bounds on the bottleneck complexity of MPC protocols. While the previously known techniques for lower bounding total communication complexity can also be applied to bottleneck complexity, they do not provide nontrivial bounds in the correlated randomness model, which is commonly assumed by existing protocols achieving low bottleneck complexity, or they are applied only to functions of limited practical interest. We propose several novel techniques for lower bounding the bottleneck complexity of MPC protocols. Our methods derive nontrivial lower bounds even in the correlated randomness model and apply to practically relevant functions including the sum function and threshold functions. Furthermore, our lower bounds demonstrate the optimality of some existing MPC protocols in terms of bottleneck complexity or the amount of correlated randomness.
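Bottleneck complexity can be illustrated with simple communication accounting for the sum function (a toy comparison of topologies, not a protocol from the paper):

```python
# Toy accounting (no cryptography): per-party communication load, in field
# elements sent or received, for computing the sum of n inputs.
def star_loads(n):
    """Every party sends its input to party 0, which replies with the sum."""
    loads = [0] * n
    for i in range(1, n):
        loads[i] += 2          # send one element, receive the result
        loads[0] += 2          # receive one element, send the result
    return loads

def chain_loads(n):
    """Party i adds its input to a running sum and forwards it; the final
    sum is then passed back along the chain."""
    loads = [0] * n
    for i in range(n - 1):
        loads[i] += 2          # forward the partial sum, receive the result
        loads[i + 1] += 2      # receive the partial sum, forward the result
    return loads

n = 1000
# Both protocols have total communication O(n), but their bottleneck
# complexities (the maximum per-party load) differ drastically:
assert max(star_loads(n)) == 2 * (n - 1)   # grows linearly in n
assert max(chain_loads(n)) == 4            # constant
```

This is the load-balancing distinction the paper's lower bounds speak to: two protocols with the same total communication can have very different bottleneck complexity.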
Reo Eriguchi, Kaoru Kurosawa, Koji Nuida
Multi-server Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to retrieve an item of a database shared by multiple servers without revealing the index. This paper addresses the problem of error correction in multi-server PIR, enabling the client to obtain the item even when some servers provide incorrect responses. In a dishonest-majority setting where the majority of servers may introduce errors, it is known that the client can no longer uniquely determine the correct value. Previous approaches in this setting have typically settled for relaxed guarantees under which the client can only reject incorrect values. However, these guarantees are substantially weaker, as they only indicate the presence of errors without providing any information about the desired item. In this paper, we explore a more natural alternative called list-decodable PIR, which ensures that the client receives a small list of candidate values, one of which is correct. We provide the first formal definition of list-decodable PIR and study its basic properties, including a fundamental lower bound on the number of servers and the difficulty of simulation-based security definitions. We propose generic constructions of list-decodable PIR from any semi-honest PIR protocols, each offering different trade-offs. Our constructions improve upon the communication complexity of the only previously known protocol satisfying our definition. Furthermore, they achieve communication complexity comparable to that of the currently best known semi-honest PIR protocols.
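For background, the classic two-server XOR-based PIR (semi-honest servers, no error correction, so not yet the list-decodable setting of this paper) can be sketched as:

```python
import secrets

def pir_query(n, i):
    """Classic 2-server XOR PIR: each query is a uniformly random subset of
    record indices, so neither server alone learns anything about i."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy()
    q2[i] ^= 1                  # the two queries differ only at position i
    return q1, q2

def pir_answer(db, q):
    """A server XORs together the records selected by the query."""
    ans = 0
    for rec, bit in zip(db, q):
        if bit:
            ans ^= rec
    return ans

db = [17, 42, 7, 99, 3, 58, 21, 64]   # toy database of 8 integer records
i = 3
q1, q2 = pir_query(len(db), i)
a1, a2 = pir_answer(db, q1), pir_answer(db, q2)
assert a1 ^ a2 == db[i]               # the client recovers record i
```

If a server answers incorrectly here, the client silently gets a wrong record; handling such errors, ideally with a short list of candidates guaranteed to contain the right one, is what the paper formalizes.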
Dynamic Security: A Realistic Approach to Adaptive Security With Applications to Strong FaF Security
Bar Alon, Naty Peter
Secure multiparty computation allows multiple parties to jointly compute a function while maintaining security even in the presence of malicious adversaries. There are two types of adversaries in the literature: static adversaries, which choose the parties to corrupt before the protocol begins; and adaptive adversaries, which can corrupt parties during the execution of the protocol based on the messages exchanged by the parties. While adaptive security provides a more robust security guarantee, it may require too much in certain scenarios. Indeed, the adversary must allocate some of its resources to corrupt the parties; however, certain parties might be more susceptible to corruption, for instance, if they have not updated their operating system to the latest version.
To address this, we introduce a new security notion called \emph{dynamic security}. Here, adversaries may corrupt new parties \emph{during and after} the protocol's execution, but \emph{cannot choose} targets based on the messages. A protocol is said to be $(t,h)$-dynamically secure if it is possible to simulate any adversary that can corrupt up to $t$ parties during the execution and $h$ thereafter. Dynamic security provides meaningful security for many real-world scenarios. Moreover, it circumvents known lower bounds on the communication complexity of adaptive security, allowing for more efficient protocols such as committee-based ones, which would be insecure against adaptive adversaries.
We further explore dynamic security and establish the following results. 1. We show a surprising connection between dynamic security and the seemingly unrelated notion of security with friends and foes (FaF security), introduced by Alon et al. (CRYPTO 2020), which aims to protect honest parties not only from adversaries but also against other honest parties. The notion of $(t,h)$-\emph{strong FaF security} strengthens this by requiring the simulatability of the joint view of any $t$ malicious parties alongside any $h$ honest parties to be indistinguishable from their real-world view. We show that $(t,h)$-dynamic security and $(t,h)$-strong FaF security are equivalent.
2. We consider the feasibility of $(t,h)$-dynamic security and show that every $n$-party functionality can be computed with computational $(t,h)$-dynamic security (with guaranteed output delivery) if and only if $2t+h
Utkarsh Gupta, Hessam Mahdavifar
Secret sharing is a foundational cryptographic primitive for sharing secret keys in distributed systems. In a classical threshold setting, it involves a dealer who has a secret, a set of $n$ users to whom shares of the secret are sent, and a threshold $t$ which is the minimum number of shares required to recover the secret. These schemes offer an all-or-nothing security approach where fewer than $t$ shares reveal no information about the secret. But these guarantees are threatened by side-channel attacks which can leak partial information from each share. Initiated by Benhamouda et al. (CRYPTO'18), the security of such schemes has been studied for precise and worst-case bounded leakage models. However, in practice, side-channel attacks are inherently noisy. In this work, we propose a noisy leakage model for secret sharing, where each share, corrupted by additive noise in the underlying field $\mathbb{F}_q$, is independently leaked to an adversary. Under this model, we study the security of linear secret sharing schemes, and show bounds on the mutual information (MI) and statistical distance (SD) security metrics. We do this by using the MacWilliams identity from the theory of error-correcting codes. For a given secret, it enables us to bound the statistical deviation of the leaked shares from uniform as $\delta^t$, where $\delta$ is the Fourier bias of the added noise. Existing analyses for the security of linear $(n,t)$-threshold schemes only bound the SD metric, and show resilience for schemes with $t \ge 0.668n$. In this work, we show that these constraints are artifacts of the bounded leakage model. In particular, we show that $(n,t)$-threshold schemes over $\mathbb{F}_q$ with $t \ge \tau (n+1)$ leak $\mathcal{O}(q^{-2t(\gamma+1-1/\tau)})$ bits about the secret, given that the bias of the added noise satisfies $\delta \le q^{-\gamma}$.
To the best of our knowledge, this is the first attempt towards understanding the side-channel security of linear secret sharing schemes for the MI metric.
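To get a feel for the Fourier-bias quantity $\delta$ that drives the abstract's bound, the following Python sketch computes it for a hypothetical noise distribution over a small prime field (the field size and distribution are our illustrative choices, not parameters from the paper): $\delta$ is the largest magnitude of a nontrivial Fourier coefficient of the noise distribution, and the deviation of the leaked shares from uniform decays like $\delta^t$.

```python
import numpy as np

q = 7  # a small prime field, chosen only for illustration
# hypothetical noise distribution over F_q: the error is 0 with
# probability 0.7 and uniform over the nonzero elements otherwise
p = np.array([0.7] + [0.05] * (q - 1))

# Fourier bias: largest magnitude of a nontrivial Fourier
# coefficient of the noise distribution
omega = np.exp(2j * np.pi / q)
delta = max(
    abs(sum(p[x] * omega ** (a * x) for x in range(q)))
    for a in range(1, q)
)
print(delta)  # ~0.65 for this distribution

# the abstract's bound: the statistical deviation of t leaked
# shares from uniform decays like delta**t
for t in (2, 4, 8):
    print(t, delta ** t)
```

For this distribution every nontrivial Fourier coefficient equals $0.7 - 0.05 = 0.65$ (the nonzero roots of unity sum to $-1$), so a lower-bias (noisier) distribution would give a smaller $\delta$ and hence faster decay in $t$.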
Cong Ling, Laura Luzzi, Hao Yan
The smoothing parameter is a cornerstone concept in lattice-based cryptography. Traditionally defined using the \( L^{\infty} \) distance, this standard formulation can be overly stringent compared to the \( L^1 \) (or statistical) distance more commonly employed in cryptographic contexts. Recent work has proposed relaxed definitions based on Kullback-Leibler (KL) divergence and the \( L^1 \) distance, thereby loosening the constraints required for the distance to vanish. However, the additive nature of the \( L^1 \) distance can be limiting for cryptographic applications where probability preservation is essential. In this paper, we introduce the Rényi smoothing parameter of a lattice, based on Rényi divergence, to address this limitation. The advantages of Rényi divergence in cryptographic settings are well known thanks to its multiplicative nature. The Rényi smoothing parameter provides a tunable framework that interpolates between the \( L^1 \) and \( L^{\infty} \) distances, offering enhanced flexibility. We present two complementary methods to study the averaging behavior of the Rényi flatness factor: one uses classical tools such as the Minkowski-Hlawka ensemble and Rogers' formula for computing moments of lattice functions; the other employs Construction A lattices derived from random codes. Finally, we illustrate how this new perspective yields improvements in lattice-based cryptographic constructions.
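The interpolation the abstract describes can be seen numerically: for orders $\alpha > 1$, the Rényi divergence is non-decreasing in $\alpha$ and sits between the KL divergence (the $\alpha \to 1$ limit, which controls \( L^1 \)-style bounds via Pinsker) and the max-log distance (the $\alpha \to \infty$ limit, the multiplicative analogue of \( L^{\infty} \)). A small Python sketch with made-up distributions (ours, not the paper's):

```python
import numpy as np

# two hypothetical distributions on a 3-point support
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

def renyi(p, q, alpha):
    """Rényi divergence of order alpha > 1, in nats."""
    return np.log(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

kl = np.sum(p * np.log(p / q))    # alpha -> 1 limit (KL divergence)
dmax = np.log(np.max(p / q))      # alpha -> infinity limit (max-log)

# Rényi divergence is non-decreasing in alpha and interpolates
# between the two limits above
for a in (1.5, 2.0, 10.0, 100.0):
    print(a, renyi(p, q, a))
```

Tuning $\alpha$ trades tightness (small $\alpha$, closer to KL) against the strength of the multiplicative probability-preservation guarantee (large $\alpha$, closer to \( L^{\infty} \)).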
Pouria Fallahpour, Serge Fehr, Yu-Hsuan Huang
We revisit the quantum security (in the QROM) of digital signature schemes that follow the Fiat-Shamir-with-aborts (FSwA) or the probabilistic hash-and-sign with retry/abort (HSwA) design paradigm. Important examples of such signature schemes are Dilithium, SeaSign, Falcon+ and UOV. In particular, we are interested in the UF-CMA-to-UF-NMA reduction for such schemes. We observe that previous such reductions have a reduction loss that is larger than what one would hope for, or require a more stringent notion of zero-knowledge than one would hope for.
We resolve this matter here by means of a novel UF-CMA-to-UF-NMA reduction that applies to FSwA and HSwA signature schemes simultaneously, and that offers an improved reduction loss (without making the zero-knowledge assumption more stringent).
Bishwajit Chakraborty, Mridul Nandi, Soumit Pal, Thomas Peyrin, Quan Quan Tan
More than half a decade after initiating the LwC competition, NIST declared Ascon the winner. In the first public draft of AsconAEAD128, NIST recognized that Ascon has limitations when used in multi-user applications. To mitigate this, NIST prescribed the use of a \(256\)-bit key in multi-user applications and provided an instantiation showing how to process this extra key material within the current AsconAEAD128 API. While doing so, they identified a limitation of this new scheme (which we refer to as mu-Ascon in this document): mu-Ascon is vulnerable to committing attacks and hence cannot be used where committing security is required. On the other hand, the full key-binding property of Ascon, which sets it apart from other sponge-type constructions, has been used to show that Ascon is stronger in the sense that it resists key recovery even when some intermediate state is recovered. We remark that the current mu-Ascon has the limitation that only a partial key is bound during initialization and finalization. In this work, we propose alternative instantiations of the AsconAEAD128 API for multi-user applications. Compared with the current mu-Ascon proposal, our first construction, Ascon-256.v2, guarantees CMT-4 committing security up to 64 bits, and our second construction, Ascon-256.v3, achieves both CMT-4 committing security and full 256-bit key binding. Structurally, our instantiations use only one extra permutation call to provide these additional security features compared to mu-Ascon, which incurs negligible performance overhead (given the lightweight nature of the Ascon permutation).
Pierre-Alain Jacqmin, Jean Liénardy
Symmetric-key authenticated key establishment (AKE) protocols are particularly well suited to resource-constrained environments such as internet of things (IoT) devices. Moreover, they often rely on better-understood assumptions than asymmetric ones. In this paper, we review the security model for symmetric-key AKE protocols. We show that several existing models permit trivial attacks while failing to protect against some non-trivial ones, and we fix these issues with new security definitions.
We show that the protocols $\textrm{LP2}$ and $\textrm{LP3}$ of Boyd et al. do not satisfy the claimed security properties. We propose a new 2-message protocol based on them, called $\textrm{LP2+}$. This protocol is proved to satisfy correctness, weak synchronization robustness, entity authentication, key indistinguishability and, as a consequence, it admits perfect forward secrecy. An instantiation of $\textrm{LP2+}$ is presented, whose security only relies on that of a pseudo-random function (PRF). Its total execution time in normal cases is dominated by only 14 evaluations of the PRF, making it a lightweight protocol that is particularly well suited for resource-constrained environments such as IoT devices.
The flaws found in the security models as well as in the security arguments could have been avoided with precise and detailed proofs. We thus take this paper as an opportunity to advocate for thorough security proofs. Therefore, we have made the choice of rigor over concision.