IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
13 December 2025
Angelo De Caro, Kaoutar Elkhiyaoui, Sandeep Nishad, Sikhar Patranabis, Venkatraman Ramakrishna
Interoperation across distributed ledger technology (DLT) networks hinges upon the secure transmission of ledger state from one network to another. This is especially challenging for private networks whose ledger access is limited to enrolled members. Existing approaches rely on a trusted centralized proxy that receives encrypted ledger state of a network, decrypts it, and sends it to members of another network. Though effective, this approach goes against the founding principle of DLT, namely avoiding single points of failure (or single sources of trust).
In this paper, we leverage fully-distributed broadcast encryption (FDBE for short) to build a fully decentralized protocol for confidential information-sharing across private networks. Compared to traditional broadcast encryption (BE), FDBE is characterized by distributed setup and key generation, where mutually distrusting parties agree on a BE public key without a trusted setup and securely derive their decryption keys. Given any FDBE, two private networks can securely share information as follows: a sender in one network uses the other network’s FDBE public key to encrypt a message for its members. We prove that the resulting construction is secure in the simplified universal composability framework.
To further demonstrate the practicality of our approach, we present the first instantiation of an FDBE that enjoys constant-sized decryption keys and ciphertexts, and we evaluate the resulting performance through a reference implementation that considers two private Hyperledger Fabric networks within the Hyperledger Cacti interoperation framework.
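To make the flow concrete, here is a minimal Python sketch of the distributed setup and cross-network encryption described above. It is a deliberately insecure toy with per-member ElGamal-style KEM headers (so ciphertexts grow linearly with the member count, unlike the constant-size construction in the paper); the parameters, names, and structure are illustrative assumptions, not the paper's scheme.

    # Toy FDBE flow: no trusted dealer, each member derives its own key locally,
    # and a sender in another network encrypts to the group's public keys.
    import hashlib, secrets

    P = 2**127 - 1   # toy prime group; NOT a secure parameter choice
    G = 3

    def keygen():                      # run locally by each member (distributed setup)
        x = secrets.randbelow(P - 2) + 1
        return x, pow(G, x, P)

    def encrypt(member_pks, msg: bytes):
        r = secrets.randbelow(P - 2) + 1
        c0 = pow(G, r, P)              # one ephemeral value shared by all members
        headers = []
        for pk in member_pks:          # per-member KEM share (toy: linear-size ciphertext)
            k = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
            headers.append(bytes(a ^ b for a, b in zip(k[:len(msg)], msg)))
        return c0, headers

    def decrypt(i, x, ct):
        c0, headers = ct
        k = hashlib.sha256(pow(c0, x, P).to_bytes(16, "big")).digest()
        return bytes(a ^ b for a, b in zip(k[:len(headers[i])], headers[i]))

    # Network B's members set up keys without any trusted dealer; a sender in
    # network A encrypts ledger state using only B's public keys.
    keys = [keygen() for _ in range(3)]
    ct = encrypt([pk for _, pk in keys], b"ledger state")
    assert decrypt(1, keys[1][0], ct) == b"ledger state"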
Zhen Qin, Siwei Sun
The SPHINCS+ framework provides the underlying architecture for modern quantum-resistant stateless hash-based signatures. Notable examples include the NIST standard SLH-DSA and its recent variants such as SPHINCS-$\alpha$ and SPHINCS+C. We extend the hypertree structure that underlies the SPHINCS+ framework by allowing trees of different heights to appear on different layers, and we plug generalized hash-based one-time signatures with chains of different lengths into the hypertree. While these structural generalizations do not affect the original security proof for the SPHINCS+ framework as long as the encoding function employed by the underlying one-time signature is injective and incomparable, they lead to an enlarged design space, opening up the possibility of finer-grained trade-offs. We perform a systematic exploration of the parameter space for the generalized structure, guided by a thorough theoretical cost analysis that minimizes the number of variables to be enumerated in the search process. As a result, we identify many parameter sets superior to state-of-the-art stateless hash-based signature schemes in terms of signature size and signing or verification efficiency. In particular, we provide parameter settings that not only enjoy smaller signature sizes but are also more efficient in signing and verification. The improvement can be significant if we do not pursue optimizing all performance metrics simultaneously. One of our constructions with 128-bit security is 8.1% smaller than SPHINCS+C-128s (26.2% smaller than SPHINCS+-128s and 16.7% smaller than SPHINCS-$\alpha$-128s). At the same time, it is faster in verification but slower in signing than SPHINCS+C-128s. Further size reduction is possible with a greater sacrifice in speed. We provide implementations and benchmark results for representative parameter sets.
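The size trade-off being explored can be seen with simple accounting. The following Python sketch uses the standard SPHINCS+ signature-size formula (n-byte hashes, FORS with k trees of height a, WOTS-style chains); the paper's generalization is precisely that the per-layer heights and chain counts below become free parameters. The sample mixed-layer numbers are illustrative assumptions, not the paper's parameter sets.

    # Back-of-the-envelope signature-size accounting for a SPHINCS+-style hypertree.
    def sig_bytes(n, k, a, heights, chain_counts):
        fors = k * (a + 1) * n                     # FORS: k secret values + auth paths
        hypertree = sum(l * n + h * n              # per layer: OTS chains + auth path
                        for h, l in zip(heights, chain_counts))
        return n + fors + hypertree                # plus n bytes for the randomizer R

    # Uniform layers, SPHINCS+-128s shape (h=63 over d=7 layers, 35 chains):
    print(sig_bytes(16, 14, 12, heights=[9]*7, chain_counts=[35]*7))   # 7856, the published 128s size
    # Mixed layer heights and chain counts, as the generalized structure permits:
    print(sig_bytes(16, 14, 12, heights=[8, 9, 9, 10, 9, 9, 9],
                    chain_counts=[33, 35, 35, 35, 35, 35, 35]))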
Mila Anastasova, Panos Kampanakis
Migrating to quantum-resistant cryptographic algorithms, specifically the NIST-standardized Module Learning with Errors (MLWE) primitives, would inevitably result in data transmission overhead in secure transport protocols due to their larger key, ciphertext, and signature sizes. Would the connection setup cost noticeably affect application performance? This study evaluates MLWE's performance impact on practical use cases that rely on TLS 1.3 via real-world experiments. We analyze three distinct scenarios, sharing empirical and experimental data from applications interfacing with cloud service TLS endpoints, Web user metrics, and mutual TLS connections. We argue that some cloud applications will not be significantly affected due to their unconstrained environment. We show that Web performance degradation will remain below 10% for common webpages, corresponding to time delays of under 100ms, which users are unlikely to perceive. For mutual TLS applications, our experiments show that MLWE noticeably affects Time-to-First-Byte, almost doubling the connection times compared to plain TLS. However, when evaluating Time-to-Last-Byte, a metric more closely tied to application performance, the overall impact drops to about 15% for ~150KB data transfers in fast or slow networks. This impact is much lower for large client-server round trips. While these results are reassuring in that MLWE could be introduced unnoticeably in common TLS use cases, they do not diminish the value of data trimming techniques proposed in the literature (e.g., session resumption, intermediate certificate authority suppression) to speed up connections.
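The "larger key and ciphertext" overhead the study measures is easy to tally. A sketch, using the FIPS 203 ML-KEM-768 sizes (1184-byte encapsulation key, 1088-byte ciphertext) against 32 bytes each way for X25519; this counts only the key-exchange bytes and ignores certificates, which dominate once PQ signatures are added too.

    # Extra TLS 1.3 handshake bytes from hybrid X25519+ML-KEM-768 key exchange.
    X25519_SHARE = 32
    MLKEM768_EK, MLKEM768_CT = 1184, 1088

    classical = 2 * X25519_SHARE                       # client + server key shares
    hybrid = (X25519_SHARE + MLKEM768_EK) + (X25519_SHARE + MLKEM768_CT)
    print(f"classical: {classical} B, hybrid: {hybrid} B, "
          f"added: {hybrid - classical} B")            # ~2.3 KB extra on the wire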
Guangxian Zou, Isaac Zhang, Ryan Zarick, Kelvin Wong, Thomas Kim, Daniel L.-K. Wong, Saeid Yazdinejad, Dan Boneh
zkVMs promise general-purpose verifiable computation through ISA-level compatibility with modern programs and toolchains. However, compatibility extends further than just the ISA; modern programs often cannot run or even compile without an operating system and libc. zkVMs attempt to address this by maintaining forks of language-specific runtimes and statically linking them into applications to create self-contained unikernels, but this ad-hoc approach leads to version hell and burdens verifiable applications (vApps) with an unnecessarily large trusted computing base. We solve this problem with ZeroOS, a modular library operating system (libOS) for vApp unikernels; vApp developers can use off-the-shelf toolchains to compile and link only the exact subset of the Linux ABI their vApp needs. Any zkVM team can easily leverage the ZeroOS ecosystem by writing a ZeroOS bootloader for their platform, reducing the maintenance burden and unifying the entire zkVM ecosystem with consolidated development and audit resources. ZeroOS is free and open-sourced at https://github.com/LayerZero-Labs/ZeroOS
12 December 2025
University College Cork, Ireland
We are seeking a highly motivated researcher to join the UCC Security Research Group and the Insight SFI Research Centre for Data Analytics. The position will contribute to the project Migrants’ Digital Spaces (MIGDIS), a multidisciplinary initiative exploring technology-assisted analysis of home, border, and belonging, while also contributing to Insight research objectives in privacy/security.
The successful candidate will investigate privacy risks in online digital spaces, focusing on communities that connect people across countries and those tied to specific physical locations, such as city-based forums. The research will examine potential privacy breaches, including de-anonymisation attacks, and develop countermeasures using techniques such as differential privacy and cryptographic protocols. This work will require close collaboration with social scientists and other stakeholders, ensuring that technical solutions are informed by societal and ethical considerations.
The ideal applicant holds a PhD in Computer Science or a related discipline and has experience in cyber security and privacy research. They should have a strong publication record in relevant conferences and journals and a track record in one or more of the following research areas: privacy enhancing technologies, differential privacy, anonymity, re-identification, and/or cryptography. Previous experience working on interdisciplinary projects is an asset.
Preference will be given to candidates at postdoctoral level. If the selected candidate has not yet completed their PhD, they will be appointed at the research assistant level.
Closing date for applications:
Contact: Dr. Paolo Palmieri at p.palmieri@cs.ucc.ie
More information: https://security.ucc.ie/vacancies.html
Shaoquan Jiang
Quantum authentication is a procedure for sending a quantum message to a receiver in such a way that any change made in the channel cannot go unnoticed. Formalizing a proper authentication model is a highly non-trivial task. Existing models have various flaws: they either fail to capture serious concerns or are overly restrictive. Most importantly, none of them has addressed the threat from verification queries. We show that there is a quantum authentication scheme that is secure when no verification query is allowed but completely insecure when verification queries are additionally permitted. The threat of verification queries is not artificial. Our attack only needs to know whether a forged authentication message is valid or not. It captures the concern that the adversary can watch whether the receiver accepts an authentication, without even reading the message being authenticated. We propose a quantum authentication model that captures the authentication of multiple messages under the same key as well as verification queries. We allow the attacker to have his own state entangled with the authentication message. Finally, we propose an authentication framework abstracted from the AQA method of Garg et al. (CRYPTO'17) and prove its security in our model. Our result reduces the security of an authentication protocol to certain properties of its component primitives. We also prove that an impersonation attack implies a substitution attack. To our knowledge, this is the first confirmation of this result.
Suprava Roy, Ratna Dutta
Cloud computing enables data processing, storage and sharing in untrusted environments, and its growing adoption necessitates a focus on data security and privacy. Inner product functional encryption (IPFE) is a promising cryptographic technique that enables fine-grained access control over sensitive data in untrusted cloud environments. Post-quantum cryptography focuses on developing cryptographic protocols resilient to quantum computer attacks, with lattice structures being crucial in designing these protocols.
This paper aims to implement the lattice-based public key unbounded IPFE (uIPFE) scheme proposed by Dutta et al. (2022) [1], which ensures that no bound on the vectors' length needs to be fixed during public parameter generation.
Furthermore, we extend the ALS-IPFE scheme of Agrawal et al. (2016) [2] to create a new public key scheme uIPFE #1, achieving adaptive indistinguishability security in the random oracle model under the Learning with Errors assumption, while avoiding trapdoor generation and pre-image sampling algorithms.
This work aims to enhance the practical applicability of uIPFE in cloud computing environments. We implement both schemes, uIPFE and uIPFE #1, in the C programming language and execute the code on an IBM Power 9 server. Moreover, we analyze the running times and discuss the performance of both schemes for varying message vector lengths.
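The IPFE functionality itself — a functional key for y decrypts a ciphertext of x to the inner product <x, y> and nothing more — can be illustrated with a small toy. The sketch below is a DDH-based scheme in the style of Abdalla et al. (ABDP15), chosen because it fits in a few lines; the paper's schemes are lattice-based (LWE) and unbounded, which this small-group toy is not, and the group parameters here are insecure placeholders.

    # Toy DDH-style inner-product functional encryption (functionality demo only).
    import secrets

    P = 2**127 - 1        # toy prime; NOT a secure group choice
    G = 3

    def setup(n):
        s = [secrets.randbelow(P - 1) for _ in range(n)]
        mpk = [pow(G, si, P) for si in s]
        return mpk, s

    def keygen(s, y):                         # functional key for vector y: <s, y>
        return sum(si * yi for si, yi in zip(s, y))

    def encrypt(mpk, x):
        r = secrets.randbelow(P - 1)
        return pow(G, r, P), [pow(G, xi, P) * pow(h, r, P) % P for xi, h in zip(x, mpk)]

    def decrypt(sk_y, y, ct, bound=10_000):
        ct0, cts = ct
        num = 1
        for ci, yi in zip(cts, y):
            num = num * pow(ci, yi, P) % P
        val = num * pow(ct0, -sk_y, P) % P    # = G^{<x,y>}
        for m in range(bound):                # recover the small inner product
            if pow(G, m, P) == val:
                return m
        raise ValueError("inner product out of range")

    mpk, msk = setup(3)
    x, y = [2, 5, 7], [1, 3, 2]
    assert decrypt(keygen(msk, y), y, encrypt(mpk, x)) == 2*1 + 5*3 + 7*2  # 31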
Varsha Jarali, Hari Preeth S, Khushboo Bussi, Shashi Kant Pandey
Authenticity and confidentiality are crucial for maintaining a secure information infrastructure. Confidentiality prevents unauthorized disclosure, while authenticity ensures the origin of the data. Authenticated encryption ensures both simultaneously by protecting data from access and verifying integrity. This paper presents NeevAs, a cipher suite offering authenticated encryption with associated data (AEAD) and hashing, based on a sponge-based duplex construction. The scheme includes concurrent absorption, a novel sponge method where associated data is absorbed into the capacity part and plaintext into the rate part simultaneously. This reduces the permutation calls typically required for associated-data processing compared to existing sponge-based designs. In the NeevAs cipher suite, the modified Neeva hash function generates the security tag, and the Ascon S-box provides efficiency and resistance against differential and boomerang attacks. The design targets lightweight cryptography, minimizing permutation calls to enhance efficiency and reduce resource consumption, making it well-suited for resource-constrained devices.
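The structural idea of concurrent absorption can be sketched in a few lines: per duplex call, an AD block is XORed into the capacity while a plaintext block goes through the rate, so one permutation call processes both inputs. In this toy, SHA-256 stands in for the real permutation, and the state split, block sizes, and padding are assumptions, not the NeevAs specification.

    # Toy duplex with concurrent absorption (structure only, not a real AEAD).
    import hashlib

    RATE, CAP = 16, 16                       # toy 32-byte state: 16-byte rate + capacity

    def perm(state: bytes) -> bytes:         # stand-in permutation (NOT Neeva/Ascon)
        return hashlib.sha256(state).digest()[:RATE + CAP]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encrypt(key: bytes, ad: bytes, pt: bytes):
        state = perm(key.ljust(RATE + CAP, b"\x00"))          # absorb the key first
        ct = b""
        for i in range(0, max(len(ad), len(pt), 1), RATE):
            ad_blk = ad[i:i+RATE].ljust(CAP, b"\x00")
            pt_blk = pt[i:i+RATE].ljust(RATE, b"\x00")
            rate = xor(state[:RATE], pt_blk)                  # plaintext through the rate...
            cap = xor(state[RATE:], ad_blk)                   # ...AD into the capacity
            ct += rate[:len(pt[i:i+RATE])]
            state = perm(rate + cap)                          # one call covers both inputs
        tag = perm(state)[:16]
        return ct, tag

    print(encrypt(b"k" * 16, b"header-bytes", b"attack at dawn!"))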
Jianming Lin, Yu Dai, Chang-An Zhao, Yuhao Zheng
Subgroup membership testing serves as a crucial countermeasure against small subgroup attacks, thereby ensuring the security of pairing-based cryptographic protocols. Despite its vital importance, the expensive computational requirements for membership testing on specific pairing-friendly curves pose a non-negligible challenge.
In this paper, we revisit the $\mathbb{G}_2$ membership testing algorithms on KSS16 curves and propose a novel approach specifically designed for the families constructed by the KSS method (Kachisa-Schaefer-Scott method).
Moreover, we generalize several previous methods for $\mathbb{G}_2$ membership testing, rendering them applicable to more generic pairing-friendly curves.
Specifically, we implement an efficient $\mathbb{G}_2$ membership testing on three well-known curves KSS16-329, KSS16-330, and KSS16-766 for verification. The experimental results illustrate that our new method achieves improvements of $24.0\%$, $33.3\%$, and $29.2\%$ in terms of clock cycles compared to the state-of-the-art, respectively.
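For orientation, the baseline test that such work accelerates is the naive one: accept a point P only if [r]P = O for the prime group order r, which costs a full scalar multiplication. The sketch below shows this on a toy short-Weierstrass curve over a small prime field (toy parameters, an assumption); real $\mathbb{G}_2$ testing on KSS16 operates over an extension field and replaces the full multiplication with cheaper endomorphism-based checks.

    # Naive subgroup membership test on a toy curve y^2 = x^3 + 2x + 3 over F_p.
    p, a, b = 223, 2, 3

    def add(P, Q):
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                  # P + (-P) = O
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return x3, (lam * (x1 - x3) - y1) % p

    def mul(k, P):                                       # double-and-add
        R = None
        while k:
            if k & 1: R = add(R, P)
            P, k = add(P, P), k >> 1
        return R

    def is_member(P, r):                                 # the test being optimized
        return mul(r, P) is None

    # Brute-force the toy curve order, then check a sample point:
    pts = [(x, y) for x in range(p) for y in range(p) if (y*y - x**3 - a*x - b) % p == 0]
    n = len(pts) + 1                                     # +1 for the point at infinity
    print(n, is_member(pts[0], n))                       # always True with the full order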
Anisha Dutta, Sayantan Chakraborty, Chandan Goswami, Avishek Adhikari
The rapid growth of quantum technologies highlights the need for scalable models of computation that go beyond qubits and exploit the richer structure of Qudits. This paper introduces a novel and efficient approach for defining universal gate sets specifically designed for higher-dimensional Qudit systems (N ≥ 2), addressing the limitations of traditional qubit-based approaches. We present a systematic methodology for constructing fundamental Qudit gates through the inherent structure of Qudit operators, providing a robust theoretical foundation for universal quantum computation with Qudits. Our rigorously proven universal and minimal gate set enables more efficient quantum circuit design. We demonstrate the construction of multidimensional extensions of controlled operations from these fundamental elements. To facilitate practical application, we provide a Python-based algorithm for decomposing arbitrary multi-Qudit operations, accompanied by detailed time- and space-complexity analyses. The framework is further validated through end-to-end implementations of Grover’s algorithm and QKD, comparing traditional gate constructions with circuits entirely synthesized from the proposed universal gate set. These validations demonstrate not only the functional equivalence of the decomposed circuits, but also their direct relevance to the advancement of cryptographic protocols, paving the way for more efficient and secure Qudit-based quantum cryptography.
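As background for the kind of primitive such a gate set builds from, the standard generalized Pauli (shift/clock) operators in dimension N are X|j⟩ = |j+1 mod N⟩ and Z|j⟩ = ω^j |j⟩ with ω = e^{2πi/N}. A minimal numpy sketch; the paper's specific minimal universal set is not reproduced here.

    # Generalized Pauli operators for a single qudit of dimension N.
    import numpy as np

    def shift(N):                       # X: cyclic shift of basis states
        return np.roll(np.eye(N), 1, axis=0)

    def clock(N):                       # Z: phases ω^j on the diagonal
        w = np.exp(2j * np.pi / N)
        return np.diag(w ** np.arange(N))

    N = 3
    X, Z = shift(N), clock(N)
    # Weyl commutation relation ZX = ω XZ, the algebraic core of the qudit Pauli group:
    assert np.allclose(Z @ X, np.exp(2j * np.pi / N) * X @ Z)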
Jonas Hofmann, Philipp-Florens Lehwalder, Shahriar Ebrahimi, Parisa Hassanizadeh, Sebastian Faust
Remote attestation is a fundamental security mechanism for assessing the integrity of remote devices. In practice, widespread adoption of attestation schemes is hindered by a lack of public verifiability and the requirement for interaction in existing protocols. A recent work by Ebrahimi et al. (NDSS'24) constructs publicly verifiable, non-interactive remote attestation, disregarding another important requirement for attesting sensitive systems: privacy protection.
Similar needs arise in IoT swarms, where many devices, potentially processing sensitive data, should produce a single attestation.
In this paper, we take on both challenges. We present PIRANHAS, a publicly verifiable, asynchronous, and anonymous attestation scheme for individual devices and swarms. We leverage zk-SNARKs to transform any classical, symmetric remote attestation scheme into a non-interactive, publicly verifiable, and anonymous one. Verifiers only ascertain the validity of the attestation, without learning any identifying information about the involved devices.
For IoT swarms, PIRANHAS aggregates attestation proofs for the entire swarm using recursive zk-SNARKs. Our system supports arbitrary network topologies and allows nodes to dynamically join and leave the network. We provide formal security proofs for the single-device and swarm setting, showing that our construction meets the desired security guarantees. Further, we provide an open-source implementation of our scheme using the Noir and Plonky2 framework, achieving an aggregation runtime of just 356ms.
Yuanmi Chen, Zhao Chen, Tingting Guo, Chao Sun, Weiqiang Wen, Yu Yu
We incorporate a meet-in-the-middle strategy into the enumeration algorithm, enabling a tunable time-memory trade-off. The algorithm can achieve a minimum asymptotic time complexity of $n^{n/(4e) + o(n)}$, which, in turn, demands memory of the same order. This represents a square-root improvement over the state-of-the-art enumeration algorithms. More generally, our approach attains a time complexity of $n^{cn/(2e) + o(n)}$ with memory usage $n^{(1-c)n/(2e) + o(n)}$, where $c$ is any constant satisfying $\frac{1}{2} \leq c < 1$.
Our approach decouples the head and tail blocks of the lattice basis. For a properly selected parameter, each enumeration space becomes asymptotically the square root of the original search space. Each tail vector is then extended to the head block space to find its closest vectors using an efficient neighboring search algorithm. Among all pairs of neighboring vectors that we iterate through, the shortest difference vector is then the solution to the Shortest Vector Problem (SVP).
Apart from the exact version of the algorithm, which is of theoretical interest, we also propose heuristic strategies to improve the practical efficiency. First, we show how to adapt our algorithm to pruned enumeration. Then we show that with a particularly chosen backbone lattice (a rescaled $\mathbb{Z}^n$), the neighboring search process can be made extremely efficient. Finally, we optimize parameters and give a practical cost estimation to show how much acceleration the new algorithm can bring.
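The trade-off curve stated in the abstract is easy to tabulate: time $n^{cn/(2e)+o(n)}$ against memory $n^{(1-c)n/(2e)+o(n)}$ for $\frac{1}{2} \leq c < 1$, with the balanced point $c = \frac{1}{2}$ giving the square-root speedup $n^{n/(4e)}$. A small sketch printing the leading exponents (the $o(n)$ terms are dropped; the sample $n$ is illustrative).

    # Leading exponents (base-n logs) of the time-memory trade-off.
    import math

    def exponents(n, c):
        time_log = c * n / (2 * math.e)          # log_n of time complexity
        mem_log = (1 - c) * n / (2 * math.e)     # log_n of memory complexity
        return time_log, mem_log

    n = 100
    for c in (0.5, 0.6, 0.75, 0.9):
        t, m = exponents(n, c)
        print(f"c={c}: time ~ n^{t:.1f}, memory ~ n^{m:.1f}")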
Clément Hoffmann, Pierrick Méaux, Charles Momin, Yann Rotella, François-Xavier Standaert, Balazs Udvarhelyi
Fresh re-keying is a countermeasure against side-channel analysis where an ephemeral key is derived from a long-term key using a public random value. Popular instances of such schemes rely on key-homomorphic primitives, so that the re-keying process is easy to mask and the rest of the (e.g., block cipher) computations can run with cheaper countermeasures. The main requirement for these schemes to be secure is that the leakages of the ephemeral keys do not allow recovering the long-term key. The Learning with Physical Rounding (LWPR) problem formalizes this security in a practically-relevant model where the adversary can observe noise-free leakages. It can be viewed as a physical version of the Learning With Rounding (LWR) problem, where the rounding is performed by a leakage function and therefore does not have to be computed explicitly. In this paper, we first consolidate the intuition that LWPR cannot be secure in a serial implementation context without additional countermeasures (like shuffling), due to attacks exploiting worst-case leakages that can be mounted with practical data complexity. We then extend the understanding of LWPR in a parallel implementation setting. On the one hand, we generalize its robustness against cryptanalysis taking advantage of any (i.e., not only worst-case) leakage. A previous work claimed security in the specific context of a Hamming weight leakage function. We clarify necessary conditions to maintain this guarantee, based on the degree of the leakage function and the accuracy of its coefficients. On the other hand, we show that parallelism inherently provides good security against attacks exploiting worst-case leakages. We finally confirm the practical relevance of these findings by validating our assumptions experimentally for an exemplary implementation.
Clément Hoffmann, Pierrick Méaux, Mélissa Rossi, François-Xavier Standaert
Learning problems have become a foundational element for constructing quantum-resistant cryptographic schemes, finding broad application even beyond that setting, such as in Fully Homomorphic Encryption. The increasing complexity of this field, marked by the rise of physical learning problems due to research into side-channel leakage and secure hardware implementations, underscores the urgent need for a more comprehensive analytical framework capable of encompassing these diverse variants.
In response, we introduce Learning With Error with Output Dependencies (LWE-OD), a novel learning problem defined by an error distribution that depends on the inner product value and therefore on the key. LWE-OD instances are remarkably versatile, generalizing both established theoretical problems like Learning With Errors (LWE) or Learning With Rounding (LWR), and emerging physical problems such as Learning With Physical Rounding (LWPR).
Our core contribution is establishing a reduction from LWE-OD to LWE. This is accomplished by leveraging an intermediate problem, denoted qLWE. Our reduction follows a two-step, simulator-based approach, yielding explicit conditions that guarantee LWE-OD is at least as computationally hard as LWE. While this theorem provides a valuable reduction, it also highlights a crucial distinction among reductions: those that allow explicit calculation of target distributions versus weaker ones with conditional results. To further demonstrate the utility of our framework, we offer new proofs for existing results, specifically the reduction from LWR to LWE and from Learning Parity with Noise with Output Dependencies (LPN-OD) to LPN. This new reduction opens the door for a potential reduction from LWPR to LWE.
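The defining feature of LWE-OD — an error distribution that depends on the inner product value — can be shown directly. A toy sampler under assumed small parameters: with a constant error distribution it collapses to plain LWE, and with a deterministic rounding of the inner product it mimics LWR.

    # Sampling toy LWE-OD instances: the error depends on <a, s> (and hence the key).
    import random

    q, n = 257, 8

    def lwe_od_sample(s, err_dist):
        a = [random.randrange(q) for _ in range(n)]
        ip = sum(ai * si for ai, si in zip(a, s)) % q
        return a, (ip + err_dist(ip)) % q                     # output-dependent error

    s = [random.randrange(q) for _ in range(n)]
    small_noise = lambda ip: random.choice([-1, 0, 1])        # ignores ip: plain LWE-like
    rounding = lambda ip: (ip // 16) * 16 - ip                # deterministic: LWR-like
    print(lwe_od_sample(s, small_noise))
    print(lwe_od_sample(s, rounding))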
Friedrich Wiemer, Arthur Mutter, Jonathan Ndop, Julian Göppert, Axel Sikora, Thierry Walrant
In the past, Secure Onboard Communication (SecOC) has been defined to serve as the foundational mechanism for securing in-vehicle networks. For over a decade, it has been used in hundreds of millions of automotive systems. Its application-layer design and AUTOSAR-based specification have enabled broad adoption across diverse platforms. However, this design also introduces challenges: software-centric dependencies complicate full hardware integration and can limit scalability in resource-constrained electronic control units, particularly as bandwidth demands continue to grow.
To address these constraints, CANsec has been proposed to meet the security objectives of CAN XL. As a Layer 2 security protocol, CANsec aims to overcome SecOC’s shortcomings and offer modern guarantees comparable to MACsec. However, the CANsec specification remains under active development even after several years of work, leaving a gap between conceptual goals and practical deployment.
This paper proposes a pragmatic and standards-aligned solution: it re-uses existing MACsec specifications and implementations as the security engine for CAN XL.
MACsec, standardized for over two decades and widely scrutinized in both academic and industrial contexts, offers robust and well-understood security functions. Our approach introduces a lightweight wrapper that maps CAN XL frames to virtual Ethernet frames, enabling MACsec to provide confidentiality, integrity, authenticity, and freshness. We formally define the wrapping process, frame formats, and protocol data units, while preserving MACsec’s security properties, adapting them to the constraints and requirements of automotive networks. This enables a practical and secure path forward for CAN XL deployment, leveraging mature cryptographic algorithms and protocols without compromising performance or assurance.
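To illustrate the wrapping idea, here is a minimal Python sketch that serializes a CAN XL frame into the payload of a virtual Ethernet frame, so an unmodified MACsec engine can protect it. The EtherType value, the derivation of virtual MACs from CAN XL addressing, and the header layout are all illustrative assumptions, not the frame format defined in the paper or by CiA.

    # Hypothetical CAN XL -> virtual Ethernet wrapping (layout is an assumption).
    import struct

    VIRT_ETHERTYPE = 0x88B5          # IEEE 802 local-experimental EtherType (placeholder)

    def wrap_canxl(priority_id: int, acceptance_field: int, payload: bytes) -> bytes:
        # Derive stable, locally administered virtual MACs from CAN XL addressing:
        dst = b"\x02\x00" + struct.pack(">I", acceptance_field)
        src = b"\x02\x01" + struct.pack(">I", priority_id)
        header = struct.pack(">HB", priority_id, len(payload))    # toy CAN XL header
        return dst + src + struct.pack(">H", VIRT_ETHERTYPE) + header + payload

    frame = wrap_canxl(0x155, 0x00A01234, b"\xde\xad\xbe\xef")
    # `frame` is now an ordinary Ethernet frame a MACsec SecY can encrypt and
    # authenticate; the receiver reverses the mapping after verification.
    print(frame.hex())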
To support standardization and practical adoption, we have submitted this approach to the CAN in Automation (CiA) CANsec specification task force, CiA's IG 04 SIG 01 TF 03 CAN XL security, contributing to the ongoing effort to define an efficient, standardized and interoperable security solution for CAN XL.
Alan T. Sherman, Jeremy J. Romanik Romano, Edward Zieglar, Enis Golaszewski, Jonathan D. Fuchs, William E. Byrd
We analyze security aspects of the SecureDNA system regarding its system design, engineering, and implementation. This system enables DNA synthesizers to screen order requests against a database of hazards. By applying novel cryptography involving distributed oblivious pseudorandom functions, the system aims to keep order requests and the database of hazards secret. Discerning the detailed operation of the system in part from source code (Version 1.0.8), our analysis examines key management, certificate infrastructure, authentication, and rate-limiting mechanisms. We also perform the first formal-methods analysis of the mutual authentication, basic request, and exemption-handling protocols.
Without breaking the cryptography, our main finding is that SecureDNA's custom mutual authentication protocol SCEP achieves only one-way authentication: the hazards database and keyservers never learn with whom they communicate. This structural weakness violates the principle of defense in depth and enables an adversary to circumvent rate limits that protect the secrecy of the hazards database, if the synthesizer connects with a malicious or corrupted keyserver or hashed database. We point out an additional structural weakness that also violates the principle of defense in depth: inadequate cryptographic bindings prevent the system from detecting if responses, within a TLS channel, from the hazards database were modified. Consequently, if a synthesizer were to reconnect with the database over the same TLS session, an adversary could replay and swap responses from the database without breaking TLS. Although the SecureDNA implementation does not allow such reconnections, it would be stronger security engineering to avoid the underlying structural weakness. We identify these vulnerabilities and suggest and verify mitigations, including adding strong bindings. Software Version 1.1.0 fixes SCEP with our proposed SCEP+ protocol.
Our work illustrates that a secure system needs more than sound mathematical cryptography; it also requires formal specifications, sound key management, proper binding of protocol message components, and careful attention to engineering and implementation details.
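The "strong bindings" mitigation mentioned above follows a standard pattern: bind each response to a hash of the exact request and a per-session counter under a MAC, so a recorded response cannot be replayed or swapped onto a different request even inside a reused channel. The sketch below shows that pattern in Python; its field choices are illustrative assumptions, not the SCEP+ message format.

    # Binding responses to requests and a session counter (illustrative pattern).
    import hashlib, hmac

    def bind_response(key, session_id, counter, request, response):
        transcript = (session_id + counter.to_bytes(8, "big")
                      + hashlib.sha256(request).digest() + response)
        return hmac.new(key, transcript, hashlib.sha256).digest()

    def verify_response(key, session_id, counter, request, response, tag):
        return hmac.compare_digest(tag, bind_response(key, session_id, counter, request, response))

    k, sid = b"\x11" * 32, b"session-42"
    tag = bind_response(k, sid, 1, b"screen:ACGT...", b"no-hazard")
    assert verify_response(k, sid, 1, b"screen:ACGT...", b"no-hazard", tag)
    assert not verify_response(k, sid, 2, b"screen:ACGT...", b"no-hazard", tag)  # replay fails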
Miranda Christ, Noah Golowich, Sam Gunn, Ankur Moitra, Daniel Wichs
Watermarks are an essential tool for identifying AI-generated content. Recently, Christ and Gunn (CRYPTO '24) introduced pseudorandom error-correcting codes (PRCs), which are equivalent to watermarks with strong robustness and quality guarantees. A PRC is a pseudorandom encryption scheme whose decryption algorithm tolerates a high rate of errors. Pseudorandomness ensures quality preservation of the watermark, and error tolerance of decryption translates to the watermark's ability to withstand modification of the content.
In the short time since the introduction of PRCs, several works (NeurIPS '24, RANDOM '25, STOC '25) have proposed new constructions. Curiously, all of these constructions are vulnerable to quasipolynomial-time distinguishing attacks. Furthermore, all lack robustness to edits over a constant-sized alphabet, which is necessary for a meaningfully robust LLM watermark. Lastly, they lack robustness to adversaries who know the watermarking detection key. Until now, it was not clear whether any of these properties was achievable individually, let alone together.
We construct pseudorandom codes that achieve all of the above: plausible subexponential pseudorandomness security, robustness to worst-case edits over a binary alphabet, and robustness against even computationally unbounded adversaries that have the detection key. Pseudorandomness rests on a new assumption that we formalize, the permuted codes conjecture, which states that a distribution of permuted noisy codewords is pseudorandom. We show that this conjecture is implied by the permuted puzzles conjecture used previously to construct doubly efficient private information retrieval. To give further evidence, we show that the conjecture holds against a broad class of simple distinguishers, including read-once branching programs.
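The distribution at the heart of the permuted codes conjecture, as stated in the abstract, is a noisy codeword passed through a secret coordinate permutation. A toy sampler (using a random linear binary code, an assumption for illustration); the conjecture asserts such samples are indistinguishable from uniform random strings.

    # Sampling permuted noisy codewords (the conjectured-pseudorandom distribution).
    import random

    k, n, noise = 8, 32, 0.1

    G_mat = [[random.getrandbits(1) for _ in range(n)] for _ in range(k)]   # generator matrix
    pi = list(range(n)); random.shuffle(pi)                                 # secret permutation

    def sample():
        m = [random.getrandbits(1) for _ in range(k)]
        cw = [sum(m[i] * G_mat[i][j] for i in range(k)) % 2 for j in range(n)]
        noisy = [b ^ (random.random() < noise) for b in cw]                 # BSC noise
        return [noisy[pi[j]] for j in range(n)]                            # permute coordinates

    print(sample())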
Magali Salom, Nicolas Sendrier, Valentin Vasseur
QC-MDPC based schemes feature secret sparse cyclic binary vectors. When those vectors are sparse enough, they can be reconstructed from their distance spectrum, that is, the set of all distances between the coordinates of the non-zero coefficients. In this work, we revisit the reconstruction algorithms and explore to what extent a secret sparse vector can be recovered from partial knowledge of its distance spectrum. In particular, we show how to efficiently use reliability (soft information) in the reconstruction process. Another aspect of this work is to investigate which kinds of side-channels leak information about the distance spectrum and to understand the models that enable us to quantify the reliability of the leaking data depending on the amount of side-channel observations (or queries). For instance, we show that for BIKE level 1, assuming that a side-channel leaks information about the syndrome weight, using soft information in the reconstruction process reduces the number of queries by a factor of 10. Our technique can also be applied to HQC, which also features a sparse secret vector, with similar figures, assuming there exists a side-channel leaking the relevant information (the error weight, in the case of HQC).
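The distance spectrum itself is a simple object: for a sparse cyclic binary vector of block length r with support S, it is the set of cyclic distances between pairs of support positions. A small sketch following that definition (the sample r and support are illustrative; BIKE and HQC use far larger block lengths).

    # Distance spectrum of a sparse cyclic binary vector.
    from itertools import combinations

    def distance_spectrum(support, r):
        return sorted({min((i - j) % r, (j - i) % r) for i, j in combinations(support, 2)})

    r = 101                              # toy block length
    support = [3, 17, 40, 77, 98]        # positions of the non-zero coefficients
    print(distance_spectrum(support, r)) # the reconstruction attacks invert this map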
Suleyman Kardas, Mehmet Sabir Kiraz, Dmitry Savonin, Yao Wang, Aliaksei Dziadziuk
Zero-knowledge virtual machines (zkVMs) must correctly model all edge cases of low-level machine instructions, including division and remainder, while keeping algebraic constraints simple enough for efficient STARK proving. This work presents a focused but meaningful redesign of the DivRemChip used in the SP1 zkVM (as adapted to rWasm) to implement WASM’s 32-bit division and remainder instructions (i32.div{u,s}, i32.rem{u,s}) over the BabyBear field. Our chip remains local (single-row AIR) and is purpose-built so that all “small aggregate” expressions are strictly bounded by the BabyBear modulus, allowing us to use inverses securely without ever inverting zero. We replace heavier constant-equality logic and trap gadgets with:
(a) a lightweight small-aggregate zero-test for magnitudes that is sound in BabyBear,
(b) a simple, byte-level mechanism for detecting the special value INT_MIN using only degree-2 constraints,
(c) a low-degree test for c = −1 based on checking |c| = 1 together with the sign bit, and
(d) a constraint pattern ensuring that divide-by-zero and the overflow case i32.div_s(INT_MIN, -1) are not provable, matching WASM trap semantics.
The resulting AIR has maximal degree 3, removes all “invert-zero” failure modes in BabyBear, and enforces the correct semantics for every non-trapping instruction. A malicious prover test framework based on the SP1 CPU bus shows that any incorrect witness causes constraint failure, while honest execution traces for extensive boundary tests and mixed programs produce valid proofs over BabyBear. Our design also improves efficiency: the trace width drops from 102 to 74 columns, all Mul/Add/Lt/MSB cross-chip calls are removed, and each row requires only a single CPU-bus interaction. Prover benchmarks on unsigned, signed, and mixed workloads with up to 125,000 division/remainder operations show a consistent 4–6% reduction in single-threaded proving time compared to the original SP1 DivRemChip as adapted to rWasm, with a paired t-test across program sizes confirming that the improvement is statistically significant at the 95% confidence level.
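The WASM semantics the chip must enforce are themselves simple to state as a reference oracle: truncating (round-toward-zero) signed division, a trap on division by zero, a trap on the single signed-overflow case div_s(INT_MIN, -1), and rem_s(INT_MIN, -1) = 0 without a trap. A plain-Python model of those rules, not the AIR constraints themselves:

    # Reference model of WASM i32 division/remainder trap semantics.
    INT_MIN, MASK = -2**31, 2**32 - 1

    def i32_div_s(a: int, b: int) -> int:
        if b == 0:
            raise ZeroDivisionError("trap: divide by zero")
        if a == INT_MIN and b == -1:
            raise OverflowError("trap: signed overflow")      # the case made unprovable
        q = abs(a) // abs(b)                                  # truncate toward zero
        return -q if (a < 0) != (b < 0) else q

    def i32_rem_s(a: int, b: int) -> int:
        if b == 0:
            raise ZeroDivisionError("trap: divide by zero")
        if a == INT_MIN and b == -1:
            return 0                                          # defined result, no trap
        return a - i32_div_s(a, b) * b

    def i32_div_u(a: int, b: int) -> int:
        if b == 0:
            raise ZeroDivisionError("trap: divide by zero")
        return (a & MASK) // (b & MASK)

    assert i32_div_s(-7, 2) == -3 and i32_rem_s(-7, 2) == -1  # sign follows the dividend
    assert i32_rem_s(INT_MIN, -1) == 0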
Mohamed Malhou, Ludovic Perret, Kristin Lauter
At NeurIPS 2024, Kera et al. introduced the use of transformers for computing Groebner bases, a central object in computer algebra with numerous practical applications. In this paper, we improve this approach by applying Hierarchical Attention Transformers (HATs) to solve systems of multivariate polynomial equations via Groebner basis computation. The HAT architecture incorporates a tree-structured inductive bias that enables the modeling of hierarchical relationships present in the data and thus achieves significant computational savings compared to conventional flat attention models. We generalize to arbitrary depths and include a detailed computational cost analysis. Combined with curriculum learning, our method solves instances that are much larger than those considered by Kera et al. (NeurIPS 2024).
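For readers unfamiliar with the target task, here is the symbolic computation the model learns to perform, on a toy instance via sympy's Groebner basis routine; the learned HAT replaces this computation with sequence-to-sequence prediction.

    # A Groebner basis for a small polynomial system, via sympy.
    from sympy import groebner, symbols

    x, y = symbols("x y")
    system = [x**2 + y**2 - 1, x - y]          # circle intersected with a line
    gb = groebner(system, x, y, order="lex")
    print(list(gb))                            # e.g. triangular form: [x - y, 2*y**2 - 1]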