IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
03 September 2025
Sayantan Ganguly, Shion Samadder Chaudhury
Several subscription-based systems require fine-grained, dynamic access control, which traditional broadcast encryption schemes fail to handle efficiently. Attribute-based and policy-based access structures offer a flexible and scalable solution by encoding access rules directly into the encryption mechanism. With the advent and development of quantum networks, attribute-based and policy-based quantum broadcast encryption (PBQBE) becomes necessary for enforcing such controls securely in a post-quantum world. In this paper, we present a general construction of quantum broadcast encryption for multiple messages simultaneously, based on routing and traversing a given attribute-based and, consequently, a policy-based access tree when a user satisfies a unique policy. Our scheme can handle dynamic systems with frequently changing access patterns and can be deployed seamlessly with classical systems. Compared to existing constructions, our scheme offers hierarchical access, policy control, scalability, and revocation while maintaining provable security guarantees.
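The access-tree model that attribute- and policy-based schemes traverse can be made concrete with a small sketch. The following is a generic illustration of threshold-gate access trees as used in classical attribute-based encryption, not the paper's quantum construction; the policy and attribute names are invented for the example.

```python
# Generic access-tree satisfaction check (classical ABE-style model, toy only).
# Internal nodes are (threshold, children) gates; leaves are attribute names.
# A user's attribute set satisfies the tree if, recursively, at least
# `threshold` children of every gate along the way are satisfied.

def satisfies(node, attrs):
    """Return True if the attribute set `attrs` satisfies the access tree."""
    if isinstance(node, str):           # leaf: a single required attribute
        return node in attrs
    threshold, children = node          # gate: (t, [child, ...]) = t-of-n
    return sum(satisfies(c, attrs) for c in children) >= threshold

# Hypothetical policy: ("premium" AND "adult") OR "admin"
policy = (1, [(2, ["premium", "adult"]), "admin"])

print(satisfies(policy, {"premium", "adult"}))  # True
print(satisfies(policy, {"premium"}))           # False
print(satisfies(policy, {"admin"}))             # True
```

In schemes of this kind the decryption procedure effectively performs such a traversal with cryptographic shares at the leaves; the sketch only shows the combinatorial satisfaction test.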
How Hard Can It Be to Formalize a Proof? Lessons from Formalizing CryptoBox Three Times in EasyCrypt
François Dupressoir, Andreas Hülsing, Cameron Low, Matthias Meijers, Charlotte Mylog, Sabine Oechsner
Provable security is a cornerstone of modern cryptography, aiming to provide formal and precise security guarantees. However, for various reasons, security proofs are not always properly verified, possibly leading to unwarranted security claims and, in the worst case, deployment of insecure constructions. To further enhance trust and assurance, machine-checked cryptography makes these proofs more formal and rigorous. Unfortunately, the complexity of writing machine-verifiable proofs remains prohibitively high in many interesting use-cases. In this paper, we investigate the sources of this complexity, specifically examining how the style of security definitions influences the difficulty of constructing machine-verifiable proofs in the context of game-playing security.
Concretely, we present a new security proof for the generic construction of a PKAE scheme from a NIKE and AE scheme, written in a code-based, game-playing style à la Bellare and Rogaway, and compare it to the same proof written in the style of state-separating proofs, a methodology for developing modular game-playing security proofs. Additionally, we explore a third “blended” style designed to avoid anticipated difficulties with the formalization. Our findings suggest that the choice of definition style impacts proof complexity—including, we argue, in detailed pen-and-paper proofs—with trade-offs depending on the proof writer’s goals.
Tsai Yi-Ju
This paper addresses the question of how many distinct Montgomery curves exist over a finite field $\mathbb{F}_p$ for a given order. We provide a complete answer by presenting precise counting formulas from two perspectives: (1) the number of $\mathbb{F}_p$-isomorphism classes, and (2) the number of coefficient pairs $(A,B)$. Additionally, we propose conjectural formulas that approximate the probability of a randomly chosen curve having an order with a specific, cryptographically relevant structure. As a byproduct of our methods, we also establish a novel identity for the Kronecker-Hurwitz class number.
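For intuition about the counting question, a brute-force tally over a tiny field is easy to write. The sketch below (toy prime only; the paper's formulas give the exact counts in general) enumerates all valid coefficient pairs (A, B) of Montgomery curves B*y^2 = x^3 + A*x^2 + x over F_p and groups them by group order.

```python
# Brute-force sanity check (small p only): enumerate all Montgomery curves
# B*y^2 = x^3 + A*x^2 + x over F_p (valid when B*(A^2 - 4) != 0) and tally
# how many coefficient pairs (A, B) yield each group order. Orders count the
# affine points plus the point at infinity.

def montgomery_order_counts(p):
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)
    counts = {}
    for A in range(p):
        for B in range(1, p):
            if B * (A * A - 4) % p == 0:
                continue  # singular or disallowed parameters
            n = 1  # point at infinity
            for x in range(p):
                rhs = (x * x * x + A * x * x + x) % p
                # y solves B*y^2 = rhs  <=>  y^2 = rhs * B^(-1)
                v = rhs * pow(B, -1, p) % p
                n += len(squares.get(v, []))
            counts[n] = counts.get(n, 0) + 1
    return counts

print(montgomery_order_counts(7))
```

A classical fact visible in the output: every Montgomery curve has group order divisible by 4, so only orders 4, 8, and 12 appear in the Hasse interval for p = 7.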
Feixiang Zhao
Homomorphic attribute-based encryption (HABE) is a useful cryptographic primitive that supports both fine-grained access control and computation over ciphertexts. However, existing HABE schemes are limited to the homomorphic evaluation of circuits with either bounded depth or a restricted number of inputs. To address this problem, we introduce a bootstrappable, fully homomorphic attribute-based encryption (FHABE) scheme that supports computations of circuits with unbounded depth over cross-attribute ciphertexts. Compared to state-of-the-art schemes, the proposed scheme also has a more lightweight ciphertext and eliminates the reliance on the random oracle model. Additionally, we extend the FHABE to a proxy re-encryption setting, proposing an attribute-based homomorphic proxy re-encryption scheme that facilitates efficient sharing of encrypted data. This scheme is the first lattice-based multi-hop attribute-based proxy re-encryption scheme with unbounded hops of re-encryptions, which may be of independent interest.
Sebastian Faller, Guilhem Niot, Michael Reichle
Blind signatures are a central tool for privacy-preserving protocols. They allow users to obtain signatures from a signer without the signer seeing the signed message. For instance, they enable electronic cash: signatures correspond to coins which can be issued by the bank in a privacy-preserving manner via blind signing. To mitigate the risk of key compromise, threshold blind signatures allow the distribution of the signing key amongst N parties. While recent works have focused on improving the security of this primitive in the classical setting, no construction is known to date in the post-quantum setting. We present the first construction of a threshold blind signature secure in the post-quantum setting, based on lattices. We prove its security under an interactive variant of the SIS assumption introduced in [Agrawal et al., CCS’22]. Our construction has a reasonable overhead of a factor of roughly 1.4× to 2.5× in signature size over comparable non-threshold blind signatures over lattices under heuristic but natural assumptions.
Karla Friedrichs, Anja Lehmann, Cavit Özbay
Oblivious pseudorandom functions (OPRFs) allow the blind evaluation of a pseudorandom function, which makes them a versatile building block that enjoys usage in numerous applications. So far, security of OPRFs is predominantly captured in the Universal Composability (UC) framework, where an ideal functionality covers the expected security and privacy properties. While the OPRF functionality appears intuitive at first, the ideal-world paradigm also comes with a number of challenges: from imposing idealized building blocks when building OPRFs, to the lack of modularity, and requiring intricate UC knowledge to securely maneuver their usage.
Game-based definitions are a simpler way to cover security properties. They model each property in a single game, which grants modularity in formalizing, proving, and using OPRFs. Interestingly, the few game-based works on OPRFs each re-invent the security model, with considerable variation. Thus, the advantages of the game-based approach remain out of reach: definitions are not easily accessible and comparability across works is low.
In this work, we therefore systematize all existing notions into a clear, hierarchical framework. We unify or separate properties, making hidden relations explicit. This effort reveals the necessity of two novel properties: an intermediate privacy notion and a stronger unpredictability notion.
Finally, we analyze the two most prominent constructions in our framework: HashDH and 2HashDH. The former does not achieve UC security, but has advantages in applications that require key rotation or updatability; yet it lacks a security analysis. We show that it achieves most security properties in our framework. We also observe that HashDH and 2HashDH do not satisfy our strongest privacy notion, indicating that the guarantees by the UC functionality are not as well understood as we might expect them to be. Overall, we hope that our framework facilitates the usage and design of OPRFs.
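The blind-evaluation flow of 2HashDH, one of the two constructions analyzed above, is short enough to sketch. The parameters below are a deliberately tiny (insecure) Schnorr group chosen for the demo, and the hash-to-group map is a toy stand-in; only the algebraic flow matches the well-known construction F_k(x) = H2(x, H1(x)^k).

```python
# Toy 2HashDH OPRF blind evaluation (insecure demo parameters): the client
# blinds H1(x) with a random exponent r, the server applies its key k, and
# the client unblinds with r^(-1) mod q before hashing the result.
import hashlib
import secrets

p, q = 1019, 509          # tiny safe prime p = 2q + 1, for illustration only

def H1(x: bytes) -> int:
    # toy hash into the order-q subgroup of squares mod p
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)

def H2(x: bytes, gk: int) -> bytes:
    return hashlib.sha256(x + gk.to_bytes(2, "big")).digest()

k = secrets.randbelow(q - 1) + 1          # server's OPRF key

# --- client: blind ---
x = b"some input"
r = secrets.randbelow(q - 1) + 1
blinded = pow(H1(x), r, p)                # hides x from the server

# --- server: evaluate on the blinded element ---
evaluated = pow(blinded, k, p)

# --- client: unblind and finish ---
r_inv = pow(r, -1, q)                     # exponents live mod the group order q
output = H2(x, pow(evaluated, r_inv, p))

# matches a direct, unblinded evaluation of the PRF
assert output == H2(x, pow(H1(x), k, p))
```

HashDH differs only in that the final output is the group element itself rather than a second hash of it, which is what enables key rotation via exponent updates.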
Aleck Nash, Christian Eduardo Terron Garcia, Henry Chimal-Dzul, Kim-Kwang Raymond Choo
Consensus protocols are an important building block in blockchain and blockchain-based systems. The recent focus in developing practical quantum computers reinforces the importance of designing quantum-resistant cryptographic protocols, and in the context of this paper quantum-resistant consensus protocols. In this paper, we systematically review the extant literature on quantum-resistant consensus protocols published between 2019 and 2024. As part of the review, we identify a number of limitations and challenges and discuss potential future research directions.
Taehun Kang, Dongheo Heo, Jeonghwan Lee, Suhri Kim, Changmin Lee
Since supersingular isogeny Diffie-Hellman (SIDH) was broken by a polynomial-time attack, several countermeasures were proposed. Among them, terSIDH has been highlighted for its high performance, yet it exposes a side-channel vulnerability. The total isogeny degree depends on the private key, causing variation in isogeny computation times. This dependency makes terSIDH susceptible to timing attacks. The ratio between the worst- and best-case execution times of terSIDH is about 32.67. In this work, we propose the first constant-time terSIDH. By adding dummy operations, we remove the dependency of isogeny computations on the private key, ensuring that the algorithm’s execution time remains constant regardless of the private key. Since such countermeasures are generally slower than their non-constant-time counterparts, we applied additional optimizations to mitigate this overhead. Moreover, we present the first C implementation of terSIDH with 128-bit security. We compared our method against CSIDH-4096, a representative isogeny-based key exchange protocol with the same security level. For key generation, the worst case of terSIDH, CSIDH-4096, and our constant-time method take 1.77 s, 5.18 s, and 1.58 s, respectively. These results demonstrate that our proposed constant-time method is faster than the worst case of terSIDH while also being significantly more efficient than CSIDH-4096.
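The dummy-operation countermeasure can be illustrated generically. The sketch below is not the paper's isogeny algorithm: it shows the general pattern of padding a secret-dependent iteration count up to a fixed worst case, with invented digit values and a placeholder step function.

```python
# Generic dummy-operation padding (illustration, not the paper's algorithm):
# a loop whose work depends on secret digits leaks through timing; padding
# each digit's processing to a fixed worst-case count removes the dependency
# of the operation COUNT on the key.

MAX_OPS = 3  # worst-case operations per digit (e.g. ternary digits 0..2)

def process_leaky(digits, step):
    acc = 0
    for d in digits:
        for _ in range(d):          # secret-dependent iteration count
            acc = step(acc)
    return acc

def process_padded(digits, step):
    acc = 0
    for d in digits:
        dummy = 0
        for i in range(MAX_OPS):    # always MAX_OPS iterations per digit
            if i < d:
                acc = step(acc)     # real operation
            else:
                dummy = step(dummy) # dummy operation, result discarded
    return acc

step = lambda v: (v * 3 + 1) % 2**16
key = [2, 0, 1, 2, 1]
assert process_leaky(key, step) == process_padded(key, step)
```

In a real constant-time implementation the `if i < d` selection itself must also be branch-free (a constant-time select), and the dummy operations must be indistinguishable from real ones at the microarchitectural level; the sketch only shows the operation-count balancing.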
Manuel Barbosa, Matthias J. Kannwischer, Thing-han Lim, Peter Schwabe, Pierre-Yves Strub
Decryption errors play a crucial role in the security of KEMs based on Fujisaki-Okamoto because the concrete security guarantees provided by this transformation directly depend on the probability of such an event being bounded by a small real number. In this paper we present an approach to formally verify the claims of statistical probabilistic bounds for incorrect decryption in lattice-based KEM constructions. Our main motivating example is the PKE encryption scheme underlying ML-KEM. We formalize the statistical event that is used in the literature to heuristically approximate ML-KEM decryption errors and confirm that the upper bounds given in the literature for this event are correct. We consider FrodoKEM as an additional example, to demonstrate the wider applicability of the approach and the verification of a correctness bound without heuristic approximations. We also discuss other (non-approximate) approaches to bounding the probability of ML-KEM decryption.
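The statistical event formalized above is, in the literature, typically estimated by convolving the distributions of the noise summands under an independence heuristic and summing the tail mass beyond the decoding threshold. The sketch below shows that standard heuristic with toy parameters (a far smaller modulus and dimension than ML-KEM's), purely to illustrate the computation being verified, not the paper's formal bounds.

```python
# Heuristic decryption-failure estimate for a lattice KEM (toy parameters):
# build the distribution of one noise coefficient as a sum of products of
# centered-binomial samples via convolution, then sum the tail probability
# beyond the decoding threshold q/4. Independence across summands is the
# usual heuristic approximation.
from itertools import product
from math import comb

def centered_binomial(eta):
    # P[X = j] for X a centered binomial with parameter eta, j in [-eta, eta]
    return {k - eta: comb(2 * eta, k) / 4**eta for k in range(2 * eta + 1)}

def convolve(d1, d2):
    out = {}
    for (a, pa), (b, pb) in product(d1.items(), d2.items()):
        out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def product_dist(d1, d2):
    out = {}
    for (a, pa), (b, pb) in product(d1.items(), d2.items()):
        out[a * b] = out.get(a * b, 0.0) + pa * pb
    return out

q, n, eta = 97, 16, 2                   # toy values, not ML-KEM's parameters
term = product_dist(centered_binomial(eta), centered_binomial(eta))
acc = {0: 1.0}
for _ in range(n):                      # distribution of a sum of n products
    acc = convolve(acc, term)

fail = sum(p for v, p in acc.items() if abs(v) > q // 4)
print(f"heuristic per-coefficient failure probability: {fail:.3e}")
```

Formally verifying a bound of this kind means checking both that the convolution arithmetic is exact and that the heuristic step (or its removal, as in the FrodoKEM case) is accounted for.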
Maria Leslie, Ratna Dutta
In a t-out-of-n threshold secret sharing scheme, accountability is crucial when a subset of f < t servers collude to leak secret shares. Traceable Threshold Secret Sharing (TTSS) ensures that leaked shares can be traced back to the compromised servers while preventing false accusations through non-imputability. In Crypto’24, Boneh et al. proposed new definitions and more practical constructions for TTSS based on Shamir’s and Blakley’s secret sharing schemes, removing practical limitations of existing TTSS schemes.
Our work presents a traceable secret sharing scheme built upon an additive variant of the Asmuth-Bloom scheme, relying only on black-box access to the reconstruction box R. In our model, a subset of f < t colluding servers can construct a reconstruction box R that recovers the secret with the assistance of an additional t − f random shares. We note that integrating tracing in the standard (t, n)-Asmuth-Bloom Secret Sharing (ABSS) scheme exhibits a tracing leakage issue. We fix this limitation by introducing additive variants of ABSS, ABSS-I and ABSS-II that retain the security of the original scheme ABSS while splitting the secret s into t additive components and generating all shares from the additive components of s. Based on ABSS-I, we construct a TTSS scheme, TTSS-I, that introduces traceability into the framework and is proven to be universally traceable in the random oracle model, assuming R is a universally good reconstruction box. We integrate a tracing mechanism in ABSS-II and propose a second scheme, TTSS-II, which extends TTSS-I by additionally concealing partial information about the additive component of the secret s to introduce non-imputability to prevent the tracer from falsely accusing any honest party by fabricating evidence of its corruption. The security of TTSS-II is also in the random oracle model and relies on the hardness of the discrete logarithm problem.
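The standard (t, n) Asmuth-Bloom scheme that the additive variants build on is CRT-based and compact enough to sketch. The parameters below are small textbook values chosen for the demo; the paper's variants additionally split the secret into additive components, which this sketch does not show.

```python
# Toy (t, n) = (3, 5) Asmuth-Bloom secret sharing (demo parameters only).
# Shares are residues of y = s + alpha*m0 modulo pairwise-coprime moduli;
# any t shares recover y by CRT, and the secret is y mod m0.
from functools import reduce
import secrets

def crt(residues, moduli):
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

t, m0 = 3, 7
m = [89, 97, 101, 103, 107]            # pairwise coprime, increasing
# Asmuth-Bloom condition: product of the t smallest > m0 * product of the
# (t-1) largest, i.e. 89*97*101 > 7*103*107 -- holds for these values.
M_t = 89 * 97 * 101

s = 5                                   # secret, s < m0
alpha = secrets.randbelow((M_t - s) // m0)
y = s + alpha * m0                      # y < M_t, so any t shares determine y
shares = [(mi, y % mi) for mi in m]

# reconstruction from an arbitrary set of t shares
subset = [shares[1], shares[3], shares[4]]
y_rec = crt([r for _, r in subset], [mi for mi, _ in subset])
assert y_rec % m0 == s
```

The tracing leakage issue mentioned above arises because, in the plain scheme, partial share information already constrains y; the additive variants ABSS-I/II restructure the sharing to close that gap.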
Yuhang Zeng, Zhixin Dong, Xian Xu
HotStuff has gained widespread application in scenarios such as consortium chains in recent years due to its linear view change and pipelined decision making mechanisms. Although there have been studies on the performance of this algorithm, there remains a lack of analysis and formal termination proofs regarding its composability. This paper, for the first time, constructs a comprehensive formal system for the HotStuff protocol in a partially synchronous network environment under the Universally Composable (UC) framework and proves the termination of the HotStuff protocol within the UC framework.
Specifically, we establish the ideal function and demonstrate that the HotStuff protocol UC realizes this ideal function, which guarantees the indistinguishability between the real protocol of HotStuff and the ideal function.
Notably, in the context of network delay attacks, this paper uses a phased time analysis method with an exponential backoff timeout mechanism to show that the protocol can still achieve consensus within a bounded amount of time.
The results of this work not only establish a formal proof paradigm for analyzing termination of BFT protocols in a composable framework, but also provide important theoretical foundations for ensuring reliable termination in industrial-grade blockchain systems (such as the Meta Diem project and Chang’an Chain that employ HotStuff).
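The exponential-backoff timeout mechanism behind the termination argument has a simple quantitative core: once the post-GST network delay is bounded, a doubling view timer eventually outlasts it. The sketch below is an illustration of that intuition with an invented base timeout, not the paper's formal UC model.

```python
# Illustrative exponential-backoff view timer (HotStuff-style): if the real
# message delay after GST is bounded by `delta`, the doubling timeout exceeds
# it within logarithmically many views, after which a view with a correct
# leader can complete -- the core intuition of the phased time analysis.

BASE_TIMEOUT = 1.0   # seconds; illustrative initial view timeout

def views_until_progress(delta, base=BASE_TIMEOUT):
    """Count timed-out views until the timeout first exceeds the delay."""
    timeout, view = base, 0
    while timeout <= delta:    # view times out: delay outlasts the timer
        timeout *= 2           # exponential backoff for the next view
        view += 1
    return view                # first view whose timer outlasts the delay

assert views_until_progress(0.5) == 0     # fast network: first view succeeds
assert views_until_progress(10.0) == 4    # timers 1, 2, 4, 8 expire; 16 > 10
```

The bounded-time termination claim then follows from the number of wasted views being logarithmic in the (unknown) delay bound, times the cost of each view.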
Michel Seck, Abdoul Aziz Ciss
Recently, Cotan and Teseleanu (NordSec 2023) published an RSA-like cryptosystem. While in RSA, the public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$, in their scheme, $e$ and $d$ are related to the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. Teseleanu (CSCML 2024) showed that one can factor the modulus $N$ using a lattice attack if $d$ is small. In this paper, we extend his attack by showing that if the private exponent is either too small or too large, one can factor $N$ in polynomial time by solving the generalized equation $eu - (p^n - 1)(q^n - 1)v = \pm 1$ using lattice reduction techniques.
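The key relation at the heart of the attack is easy to check numerically. The sketch below uses tiny primes and n = 2 purely to exhibit the equation $ed - k(p^n-1)(q^n-1) = 1$; it does not implement the cryptosystem or the lattice attack itself.

```python
# Numerical check (toy primes, n = 2) of the key relation in the
# Cotan-Teseleanu RSA-like scheme: e*d = 1 mod (p^n - 1)(q^n - 1), i.e.
# e*d - k*(p^n - 1)*(q^n - 1) = 1 for some integer k. A small (or very
# large) d in this relation is what the lattice attack exploits.

p, q, n = 101, 113, 2
phi_n = (p**n - 1) * (q**n - 1)

e = 65537
d = pow(e, -1, phi_n)                  # private exponent
assert (e * d) % phi_n == 1

k = (e * d - 1) // phi_n               # the k in e*d - k*phi_n = 1
assert e * d - k * phi_n == 1
```

The generalized equation $eu - (p^n - 1)(q^n - 1)v = \pm 1$ treated in the paper has the same shape with (u, v) in place of (d, k), which is why both the small-d and large-d regimes fall to lattice reduction.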
Hamza Abusalah, Gaspard Anthoine, Gennaro Avitabile, Emanuele Giunta
We study the inherent limitations of additive accumulators and updatable vector commitments (VCs) with constant-size digest (i.e., independent of the number of committed elements).
Specifically, we prove two lower bounds on the expected number of membership proofs that must be updated when a \emph{single} element is added (or updated) in such data structures. Our results imply that when the digest bit length approaches the concrete security level, the expected number of proofs invalidated by an append operation for a digest committing to $n$ elements is nearly maximal: $n-\mathsf{negl}(\lambda)$ in the case of exponential-size universes, and $n-o(n)$ for super-polynomial universes. Our results have significant implications for stateless blockchain designs relying on constant-size VCs, suggesting that the overhead of frequent proof updates may offset the benefits of reducing global state storage.
Anasuya Acharya, Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
The notion of Best-of-Both-Worlds introduced in the work of Ishai et al. (CRYPTO 2006) investigated whether an MPC protocol can simultaneously provide two incomparable security guarantees: guaranteed output delivery against an honest majority and security with abort against a dishonest majority, and provided tight upper and lower bounds in the presence of computationally bounded (i.e., PPT) adversaries. Another line of work, starting from the work of Chaum (CRYPTO 1989), considered protocols that simultaneously achieved security against an unbounded adversary corrupting a minority of the parties and security against arbitrary corruption by a PPT adversary.
In this work, we generalize previous work to investigate a fundamental challenge of designing an MPC in a multiverse where security is specified with respect to (1) GOD, (2) fairness, (3) security w.r.t. unbounded adversaries, and (4) security with abort. The work of Lucas et al. (PODC 2010) resolved this question when considering threshold adversaries; however, the case of general adversary structures remains open. Our main result completely characterizes when a protocol can simultaneously achieve all properties. Namely, given adversary structures $Z_{\mathsf{GOD}}, Z_{fair}, Z_{S}$ and $Z_{C}$, we provide tight upper and lower bounds for when an MPC protocol can provide GOD, fairness, and security with abort respectively for unbounded and PPT adversaries w.r.t. these adversary structures.
Wei Ao, Vishnu Naresh Boddeti
Face recognition is central to many authentication, security, and personalized applications. Yet, it suffers from significant privacy risks, particularly arising from unauthorized access to sensitive biometric data. This paper introduces CryptoFace, the first end-to-end encrypted face recognition system with fully homomorphic encryption (FHE). It enables secure processing of facial data across all stages of a face-recognition process—feature extraction, storage, and matching—without exposing raw images or features. We introduce a mixture of shallow patch convolutional networks to support higher-dimensional tensors via patch-based processing while reducing the multiplicative depth and, thus, inference latency. Parallel FHE evaluation of these networks ensures near-resolution-independent latency. On standard face recognition benchmarks, CryptoFace significantly accelerates inference and increases verification accuracy compared to the state-of-the-art FHE neural networks adapted for face recognition. CryptoFace will facilitate secure face recognition systems requiring robust and provable security. The code is available at https://github.com/human-analysis/CryptoFace.
Ashish Choudhury, Ivan Damgård, Shravani Patil, Arpita Patra
We study the feasibility of constant communication-overhead information-theoretic Multiparty Computation tolerating malicious adversaries in both the synchronous and asynchronous network settings. A simple observation based on the results of Damgård, Larsen and Nielsen (CRYPTO 2019) allows us to conclude that (a) any statistically-secure MPC with $n = 2t+s$, $s > 0$ requires a communication complexity of $\Theta(\frac{n|C|}{s})$ bits in the synchronous setting; (b) any statistically-secure MPC with $n = 3t+s$, $s > 0$ requires a communication complexity of $\Theta(\frac{n|C|}{s})$ bits in the asynchronous setting; where $C$ denotes the circuit, $n$ the number of parties, and $t$ the number of corruptions. Both results hold even for abort security.
To match the above bounds for $s = \Theta(t)$, we propose two protocols, one in the synchronous and one in the asynchronous setting, both with guaranteed output delivery and a communication complexity of $O(|C|\kappa)$ bits for a statistical security parameter $\kappa$, and a running time of $O(D)$ for circuit depth $D$ (in expectation in the asynchronous setting). For this, we require circuits with a SIMD structure. Our constructions are the first constant-overhead statistically-secure MPC protocols with optimal resilience.
Our first technical contribution is a verifiable secret sharing (VSS) protocol with constant overhead per secret, achieved through two-dimensional packing. A second contribution is a degree-reduction protocol (from a higher-degree packed sharing to a lower-degree packed sharing) with constant overhead.
As a bonus, a simplified version of our statistical MPC protocols, plugged with the VSSs derived from the earlier works of Abraham, Asharov, Patil and Patra (EUROCRYPT 2023 and 2024), allows us to obtain perfectly-secure MPC protocols with a communication cost of $O(|C|\kappa)$ bits, where $\kappa = \Theta(\log(n))$, and a running time of $O(D)$, with resiliency of $n = 3t+\Theta(t)$ in the synchronous setting and $n=4t+\Theta(t)$ in the asynchronous setting. These resiliencies are optimal in the perfect setting due to the lower bounds of Damgård, Patil, Patra, and Roy (ePrint 2025), which hold even for abort security.
Chenke Wang, Yu Long, Xian Xu, Shi-Feng Sun, Yiqi Liu, Dawu Gu
Cross-chain payment technologies have been widely embraced by industry and academia, as they enable assets to circulate across the boundaries of various blockchains. However, existing cross-chain payment protocols are tailored to a limited set of blockchains, inflexible in providing privacy guarantees, and unsatisfactory in scalability.
To address these issues, this paper proposes a universal cross-chain payment framework. This framework enables payments across a wide range of blockchains since it is independent of any specific blockchain features. Moreover, this framework provides on-demand privacy and high scalability. To instantiate the framework, we introduce $\mathsf{UniCross}$, a novel universal cross-chain payment protocol. Concretely, we utilize the ring learning with errors (RLWE)-based encryption scheme and propose a new non-interactive zero-knowledge (NIZK) protocol, named $\mathsf{HybridProof}$, to construct $\mathsf{UniCross}$. We formally define the security of the universal cross-chain payment framework and prove the universal composability (UC) security of $\mathsf{UniCross}$. The proof-of-concept implementation and evaluation demonstrate that (1) $\mathsf{UniCross}$ consumes up to 78\% and 94\% less communication and computation cost than the state-of-the-art work; (2) $\mathsf{UniCross}$ achieves a throughput ($\sim$360 tps) 36$\times$ that of the state-of-the-art work ($\sim$10 tps).
Anne Canteaut, Merlin Fruchon
Many design strategies and differential attacks rely on the so-called hypothesis of stochastic equivalence, which states that the differential behaviour of a cipher for any fixed key can be approximated by its average behaviour over all keys. However, this assumption is known to be invalid in general. For instance, all two-round differential characteristics of AES are plateau, meaning that their probabilities depend heavily on the key. While such discrepancies were traditionally expected to occur only for a small number of rounds, several counter-examples have been exhibited recently thanks to the quasi-differential framework.
This paper aims to show why the divergence between fixed-key and average behaviour can be even more pronounced, and is definitely harder to anticipate, when analyzing differentials (the quantity actually relevant in differential cryptanalysis) rather than individual characteristics. Indeed, when a differential aggregates many characteristics that do not satisfy the hypothesis of stochastic equivalence, several scenarios may arise: the sets of weak keys may be identical across all characteristics, resulting in a strong clustering effect, or entirely disjoint, eliminating any such effect. Focusing on (truncated) differentials that include plateau characteristics, we demonstrate how this mechanism explains the unexpected differential behaviour observed in Midori64 and Scream over an arbitrary number of rounds, as recently reported by Baudrin et al. We also clarify how this situation differs from invariant subspaces, except in a few specific cases. Furthermore, we identify the properties of the Sbox that give rise to this weakness, as well as all vulnerable Sboxes among the optimal 4-bit functions.
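The key dependence of fixed-key differential probabilities is easy to observe experimentally on a toy scale. The sketch below is an illustration with the 4-bit PRESENT Sbox and an arbitrary two-round toy construction, not an analysis of Midori64 or Scream; the chosen differential is arbitrary.

```python
# Toy experiment: over two rounds y = S(S(x ^ k0) ^ k1), the number of pairs
# satisfying a differential (din -> dout) generally varies with the round
# keys, unlike the single average predicted by the DDT. The histogram below
# shows the fixed-key spread that the hypothesis of stochastic equivalence
# glosses over.
from collections import Counter

S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # the PRESENT 4-bit Sbox

def fixed_key_count(din, dout, k0, k1):
    """Number of inputs x with enc(x) ^ enc(x ^ din) == dout, keys fixed."""
    enc = lambda x: S[S[x ^ k0] ^ k1]
    return sum(enc(x) ^ enc(x ^ din) == dout for x in range(16))

din, dout = 0x1, 0x5                            # an arbitrary differential
spread = Counter(fixed_key_count(din, dout, k0, k1)
                 for k0 in range(16) for k1 in range(16))
print(dict(spread))   # maps fixed-key pair count -> number of key pairs
```

Note that for this two-round toy map only the inner key k1 actually influences the count (substituting x ^ k0 shows the outer key cancels); the printed histogram typically contains several distinct counts, i.e. the fixed-key probability is not simply the average over keys.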
Patrick Derbez, Marie Euler
This paper introduces a new MILP modeling to find impossible differential (ID) distinguishers and attacks. Standard models for ID are negative models, in the sense that a differential is impossible if and only if the model has no solution. Our new modeling technique focuses on probable IDs: differentials that are probably impossible. While this might lead to false positives, the main advantage is that searching for such probable IDs can be achieved through a positive model. This facilitates the search for the best impossible differential attacks without first exhausting all possible ID distinguishers on a target. We also propose to simplify the modeling by considering only two possible states for internal cells: inactive and unknown. In this case there are no longer direct contradictions but only indirect ones, assuming that it is impossible for all cells to be inactive.
With these two simple ideas, we are able to retrieve the longest impossible differential distinguishers on MIDORI, SKINNY, PRESENT, SIMON, Simeck, and SPECK. Furthermore, as the model looking for candidates is based on satisfiability, it can be incorporated into a larger model that looks directly for the best attacks, enumerating the distinguishers in order of the complexity of the associated attacks, which we did for AES, ARADI, SIMON, and SKINNY.
M&M: Secure Two-Party Machine Learning through Efficient Modulus Conversion and Mixed-Mode Protocols
Ye Dong, Wen-jie Lu, Xiaoyao Hou, Kang Yang, Jian Liu
Secure two-party machine learning has made substantial progress through the use of mixed-mode protocols. Despite these advancements, existing approaches often suffer from efficiency bottlenecks due to the inherent mismatch between the optimal domains of various cryptographic primitives, such as Homomorphic Encryption and Oblivious Transfer. In response to these challenges, we introduce the M&M framework, which features an efficient modulus conversion protocol. This breakthrough enables seamless integration of the most suitable cryptographic subprotocols within their optimal modulus domains, allowing linear computations to be executed over a prime modulus and nonlinear computations over a two-power modulus, with a minimal modulus conversion overhead. We further establish new benchmarks for the performance of fundamental primitives, namely comparison and multiplication, across various two-party techniques, together with our practical optimizations to improve efficiency. By incorporating these techniques, M&M demonstrates significant performance enhancements over state-of-the-art solutions: (i) we report a $6\times$-$100\times$ improvement for approximated truncations with 1-bit error tolerance; (ii) an average of $5\times$ (resp. $4\times$) reduction in communication (resp. runtime) for machine learning functions; (iii) and a 25\%-99\% improvement in cost-efficiency for private inference of deep neural networks and 50\% improvement in private training of gradient boosting decision trees.
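Why modulus conversion between a prime field and a power-of-two ring needs a dedicated protocol can be seen in a few lines. The sketch below is a toy demonstration of the underlying arithmetic obstacle (the wrap-around term), not the paper's conversion protocol; the moduli are illustrative.

```python
# Toy demonstration of the modulus-mismatch problem behind secure modulus
# conversion: additive shares of x over a prime p cannot simply be
# reinterpreted as shares over 2^l, because whenever the shares wrap around
# p the naive sum is off by p mod 2^l. A conversion protocol must securely
# compute and cancel this wrap term.
import secrets

p, l = 2**61 - 1, 32                   # prime field vs. power-of-two ring

def share(x, mod):
    r = secrets.randbelow(mod)
    return r, (x - r) % mod

x = 123456789
a, b = share(x, p)
assert (a + b) % p == x                # correct reconstruction over Z_p

naive = (a + b) % 2**l                 # naive reinterpretation over Z_{2^l}
wrap = (a + b) // p                    # 0 or 1: did the shares wrap mod p?
# naive equals x mod 2^l only in the no-wrap case (probability about x/p);
# otherwise it carries an extra additive term p mod 2^l.
assert naive == (x + wrap * p) % 2**l
```

Securely computing the wrap bit without revealing the shares is precisely what makes conversion protocols nontrivial, and why an efficient one lets each subprotocol run in its preferred domain.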