IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
30 April 2025
Fukang Liu, Vaibhav Dixit, Santanu Sarkar, Willi Meier, Takanori Isobe
We study the problem of finding the inverses of the shift-invariant (SI) transformations proposed in Daemen's thesis. In particular, two of them have been used in practice: $y_i=x_i\oplus \overline{x_{i+1}}x_{i+2}$ and $y_i=x_i\oplus \overline{x_{i+1}}x_{i+2}x_{i+3}$. The first one is the well-known $\chi$ transformation used in \textsf{SHA-3}, \textsf{Subterranean 2.0} and \textsf{Rasta}, while the second one is used in a recently proposed ZK-friendly hash function called Monolith. While the concrete formula of the inverse of $\chi$ of arbitrary size was given and proved by Liu et al. at JoC 2022, it remains unknown how to deduce such a formula and how to systematically study other SI transformations.
In this work, we aim to provide a general method and flow for finding the inverse of SI transformations. Although it is still limited to certain specific types and may not work for all such transformations, this general method sheds new light on how to find their inverses, as we can apply it to several different SI transformations, including the one used in Monolith. We expect that the method can be further generalized and applied to more SI transformations.
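For concreteness, here is a minimal Python sketch (ours, not from the paper) of the $\chi$ map and a brute-force inverse; recall that $\chi$ on a cyclic state of $n$ bits is invertible exactly when $n$ is odd, which is why a closed-form inverse formula is the interesting object.

    from itertools import product

    def chi(x):
        # y_i = x_i XOR (NOT x_{i+1} AND x_{i+2}), indices cyclic: the
        # first SI transformation above (the chi step of SHA-3).
        n = len(x)
        return tuple(x[i] ^ ((1 ^ x[(i + 1) % n]) & x[(i + 2) % n])
                     for i in range(n))

    def chi_inverse_brute(y):
        # Exhaustive inverse, feasible only for tiny n; for odd n, chi
        # is a permutation, so the preimage exists and is unique.
        return next(x for x in product((0, 1), repeat=len(y))
                    if chi(x) == tuple(y))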
Syed Mahbub Hafiz, Bahattin Yildiz, Marcos A. Simplicio Jr, Thales B. Paiva, Henrique Ogawa, Gabrielle De Micheli, Eduardo L. Cominetti
Lattices are the basis of most NIST-recommended post-quantum cryptography (PQC) schemes, required to thwart the threat posed by the eventual construction of large-scale quantum computers. At the same time, lattices enable more advanced cryptographic constructions, such as fully homomorphic encryption (FHE), which is increasingly used for privacy-preserving applications like machine learning. This work delves into the efficiency and trade-off assessment of polynomial multiplication algorithms and their applications to PQC, FHE, and other schemes. Such algorithms are at the core of lattice-based cryptography and may become a critical bottleneck when deploying PQC- and FHE-based solutions on resource-constrained devices. We propose a formal analysis of so-called incompleteness in the Number Theoretic Transform (NTT). Although this concept is not new, our systematization shows how to optimize polynomial multiplication in quotient rings, considering factors such as the degree of incompleteness, the associated prime moduli, constraints of the target platform, and target security level. Besides efficiency, we formally show that the systematized family of incomplete NTT variants supports a larger set of prime moduli. This property enables new trade-offs for algorithms like the FIPS-approved module-lattice-based key encapsulation mechanism (ML-KEM) and faster amortized bootstrapping in FHE schemes. Our results include shorter ciphertexts in ML-KEM with only a modest hit in performance and a 6-42% performance boost in the NTT computation of a state-of-the-art FHE solution.
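The modulus condition underlying this flexibility is standard and easy to check; here is a small Python sketch (ours, not from the paper): a radix-2 negacyclic NTT over $Z_q[X]/(X^n+1)$ can split through $k$ layers only if $2^{k+1}$ divides $q-1$, so allowing more incompleteness (smaller $k$) admits more primes $q$.

    def max_split_depth(q, n):
        # Largest k such that X^n + 1 factors over Z_q into 2^k binomials
        # X^(n/2^k) - w, which requires an element of order 2^(k+1) in
        # Z_q*, i.e. 2^(k+1) | q - 1. A complete NTT is k = log2(n); any
        # smaller k is an "incomplete" NTT with larger base factors.
        v2, t = 0, q - 1
        while t % 2 == 0:
            t //= 2
            v2 += 1
        return min(n.bit_length() - 1, v2 - 1)  # n assumed a power of two

    # ML-KEM: q = 3329, n = 256 gives k = 7, a one-level-incomplete NTT
    # whose base ring has degree-2 factors, matching FIPS 203.
    assert max_split_depth(3329, 256) == 7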
Jiwon Kim, Abhiram Kothapalli, Orestis Chardouvelis, Riad S. Wahby, Paul Grubbs
In recent years, online anonymity has become increasingly important but is under threat due to the challenges of moderating anonymous spaces. A promising cryptographic solution, known as anonymous blocklisting, allows users to post anonymously while still enabling moderation. Moderation via anonymous blocklisting roughly works by requiring that when users post a message they attach a cryptographic proof that they did not author any posts on a “blocklist”.
Existing anonymous blocklisting schemes are unfortunately still far from achieving practical performance for large blocklists. This is essentially due to all prior works requiring a user to (cryptographically) reprocess blocklist entries many times. Relatedly, prior works have relatively high verification times and proof sizes.
In this work, we introduce ALPACA, the first anonymous blocklisting system with the property that a user only needs to do a constant amount of work per blocklist entry. Thus, our scheme has asymptotically optimal performance. Our scheme is also the first to have verification times and proof sizes that are independent of the number of blocklist entries.
Our key technique is a new variant of incrementally verifiable computation (IVC), designed to ensure anonymity. Along the way, we introduce new definitions to formally establish security. On a mid-range laptop, ALPACA's proof generation time is always 6.15 seconds and its proof size is 25.6 KB. On a server, the verification time is always 400 ms.
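In plain terms, the statement attached to each post has the following shape (a Python sketch of the semantics only; `authored` stands in for the scheme's cryptographic authorship relation, and the point of ALPACA's IVC variant is to prove this incrementally with constant work per entry):

    def blocklist_statement(credential, blocklist, authored):
        # True iff no post on the blocklist was authored under this
        # user's credential; the user proves this in zero knowledge when
        # posting, and IVC lets the proof be extended entry by entry
        # instead of reprocessing the whole blocklist each time.
        return all(not authored(credential, entry) for entry in blocklist)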
Nicholas Brandt
We present conceptually simple constructions of verifiable random functions (VRF) that fulfill strong notions of unbiasability recently introduced by Giunta and Stewart [EC:GS24]. VRFs with such strong properties were previously only known in the random oracle model or from the decisional Diffie–Hellman assumption with preprocessing. In contrast, our constructions are based on generic assumptions and are thus the first to be plausibly post-quantum secure. Moreover, our constructions fulfill several additional properties such as:
• If the underlying VRF is aggregate, key-homomorphic or computable in \(\mathsf{NC}^1\), then so is our VRF.
• For any verification key, the VRF output has almost the same min-entropy as the VRF input.
Lastly, we outline a path towards a lattice-based VRF (without setup).
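For reference, the min-entropy in the property above is the standard notion: for a random variable $X$, $H_\infty(X) = -\log_2 \max_x \Pr[X=x]$, and the property asserts (in our paraphrase, not the paper's exact formalization) that $H_\infty$ is almost preserved from VRF input to output under every, even adversarially chosen, verification key.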
Muyang Li, Yueteng Yu, Bangyan Wang, Xiong Fan, Shuwen Deng
Zero-Knowledge Proof (ZKP) is a cornerstone technology in privacy-preserving computing, addressing critical challenges in domains such as finance and healthcare by ensuring data confidentiality during computation. However, the high computational overhead of ZKP, particularly in proof generation and verification, limits its scalability and usability in real-world applications. Existing efforts to accelerate ZKP primarily focus on specific components, such as polynomial commitment schemes or elliptic curve operations, but fail to deliver an integrated, flexible, and efficient end-to-end solution that includes witness generation.
In this work, we present ZKPoG, a GPU-based ZKP acceleration platform that achieves full end-to-end optimization. ZKPoG addresses three key challenges: (1) designing a witness-generation-incorporated flow for Plonkish circuits, enabling seamless integration of frontend and backend with GPU acceleration; (2) optimizing memory usage to accommodate large-scale circuits on affordable GPUs with limited memory; and (3) introducing an automated compiler for custom gates, simplifying adaptation to diverse applications. Experimental results on an NVIDIA RTX 4090 GPU show on average $22.8\times$ end-to-end acceleration compared to state-of-the-art CPU implementations and on average $12.7\times$ speedup over existing GPU-based approaches.
Alex B. Grilo, Lucas Hanouz, Anne Marin
Secret sharing is a fundamental primitive in cryptography, and it can be achieved even with perfect security. However, the distribution of shares requires computational assumptions, which can compromise the overall security of the protocol. While traditional Quantum Key Distribution (QKD) can maintain security, its widespread deployment in general networks would incur prohibitive costs.
In this work, we present a quantum protocol for distributing additive secret sharing of 0, which we prove to be composably secure within the Abstract Cryptography framework. Moreover, our protocol targets the Qline, a recently proposed quantum network architecture designed to simplify and reduce the cost of quantum communication. Once the shares are distributed, they can be used to securely perform a wide range of cryptographic tasks, including standard additive secret sharing, anonymous veto, and symmetric key establishment.
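To make the downstream uses concrete, here is the plaintext arithmetic of additive shares of 0 in Python (a classical sketch of how distributed shares are consumed; the paper's contribution is generating them with composable security on the Qline):

    import secrets

    def additive_shares_of_zero(n, q):
        # n shares, uniformly random subject to summing to 0 mod q.
        shares = [secrets.randbelow(q) for _ in range(n - 1)]
        shares.append((-sum(shares)) % q)
        return shares

    # Uses: (i) additive secret sharing: a dealer adds its secret v to
    # its zero-share, so all shares now sum to v; (ii) anonymous veto: a
    # vetoing party adds a random nonzero value, so the announced sum is
    # nonzero (with overwhelming probability) iff someone vetoed.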
Axel Lemoine
We propose a new method for retrieving the algebraic structure of a generic alternant code given an arbitrary generator matrix, provided certain conditions are met. We then discuss how this challenges the security of the McEliece cryptosystem instantiated with this family of codes. The central object of our work is the quadratic hull related to a linear code, defined as the intersection of all quadrics passing through the columns of a given generator or parity-check matrix, where the columns are considered as points in the affine or projective space. The geometric properties of this object reveal important information about the internal algebraic structure of the code. This is particularly evident in the case of generalized Reed-Solomon codes, whose quadratic hull is deeply linked to a well-known algebraic variety called the rational normal curve. By utilizing the concept of Weil restriction of affine varieties, we demonstrate that the quadratic hull of a generic dual alternant code inherits many interesting features from the rational normal curve, on account of the fact that alternant codes are subfield-subcodes of generalized Reed-Solomon codes. If the rate of the generic alternant code is sufficiently high, this allows us to construct a polynomial-time algorithm for retrieving the underlying generalized Reed-Solomon code from which the alternant code is defined, which leads to an efficient key-recovery attack against the McEliece cryptosystem when instantiated with this class of codes. Finally, we discuss the generalization of this approach to Algebraic-Geometry codes and Goppa codes.
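For intuition about the central objects, the following Python sketch (the textbook definition, not the paper's algorithm) builds a generalized Reed-Solomon generator matrix; its columns are, projectively, points $(1 : \alpha_j : \alpha_j^2 : \dots : \alpha_j^{k-1})$ of a rational normal curve scaled by $y_j$, which is the structure the quadratic hull is designed to detect.

    def grs_generator_matrix(alphas, ys, k, p):
        # G[i][j] = y_j * alpha_j^i over F_p, for 0 <= i < k: a generator
        # matrix of the code GRS_k(alpha, y). Alternant codes are
        # subfield-subcodes derived from such codes, which is how the
        # quadratic hull of a dual alternant code inherits features of
        # the rational normal curve.
        return [[ys[j] * pow(alphas[j], i, p) % p
                 for j in range(len(alphas))] for i in range(k)]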
Mahdi Rahimi
Mix networks (mixnets) offer robust anonymity even against adversaries monitoring all network links; however, they impose high latency on communications. To address this, recent research has explored strategic low-latency routing within mixnets. While these strategies appear to reduce latency, their impact on mixnet anonymity has not been carefully assessed, raising concerns about potential deanonymization of clients. Tackling this challenge, this paper first quantifies the anonymity loss associated with low-latency routing techniques in mixnets. Building on these insights, we then introduce a novel low-latency routing method that maintains mixnet anonymity while achieving significant latency reductions compared to the state-of-the-art solution LARMix (NDSS, 2024). Our approach also ensures a more balanced load distribution among mixnet nodes. Moreover, under adversarial conditions where parts of the mixnet are compromised, our method does not confer significant advantages to the adversary, unlike LARMix. Thus, our proposal emerges as the optimal choice for low-latency routing in mixnets. Furthermore, we note that this version expands on both the analytical and experimental results of the paper previously published at NCA 2024, specifically by investigating the anonymity of Loopix-like mixnets.
Chandan Kumar, Nimish Mishra, Suvradip Chakraborty, Satrajit Ghosh, Debdeep Mukhopadhyay
Reverse firewalls (RFs), introduced by Mironov and Stephens-Davidowitz at Eurocrypt 2015, provide a defence mechanism for cryptographic protocols against subversion attacks. In a subversion setting, an adversary compromises the machines of honest parties, enabling the leakage of their secrets through the protocol transcript. Previous research in this area has established robust guarantees, including resistance against data exfiltration for an RF. In this work, we present a new perspective focused on the implementation specifics of RFs. The inherently untrusted nature of RFs exposes their real-world implementations to the risk of Trojan insertion, an especially pressing issue in today's outsourced supply chain ecosystem. We argue that Trojan-affected RF implementations can compromise their core exfiltration resistance property, leading to a complete breakdown of the RF's security guarantees.
Building on this perspective, we propose an enhanced definition for ``Trojan-resilient Reverse Firewalls'' (Tr-RF), incorporating an additional Trojan resilience property. We then present concrete instantiations of Tr-RFs for Coin Tossing (CT) and Oblivious Transfer (OT) protocols, utilizing techniques from Private Circuit III (CCS'16) to convert legacy RFs into Tr-RFs. We also give simulation-based proofs to establish the enhanced security guarantees of our Tr-RF instantiations. Additionally, we offer concrete implementations of our Tr-RF-based CT and OT protocols utilizing the Open-Portable Trusted Execution Environment (OP-TEE). Through OP-TEE, we practically realize assumptions made in Private Circuit III that are critical to ensuring Tr-RF security, bridging the gap between theoretical models and real-world applications. To the best of our knowledge, this provides the first practical implementation of reverse firewalls for any cryptographic functionality. Our work emphasizes the importance of evaluating protocol specifications within implementation-specific contexts.
Mojtaba Fadavi, Seyyed Arash Azimi, Sabyasachi Karati, Samuel Jaques
A group signature scheme enables users of a group to anonymously sign messages on behalf of the group, while a designated authority can revoke anonymity when needed to ensure user accountability. Among group signature schemes, fully dynamic ones are particularly desirable as they allow new users to join and misbehaved existing users to be revoked without requiring system-wide updates.
This paper introduces DGSP, a post-quantum fully dynamic group signature scheme that addresses key limitations of existing schemes like DGMT and SPHINX-in-the-Head (SITH). Leveraging the properties of ${\rm SPHINCS}^+$, DGSP achieves a superior balance between scalability and efficiency by (i) supporting up to $2^{60}$ users, (ii) requiring negligible setup time, and (iii) featuring efficient algorithms with short, constant-size signatures. Notably, while DGMT is limited to $2^{15}$ users, DGSP extends this to $2^{60}$ while keeping signatures compact: only 3.03 to 4.93 times larger than those of DGMT, yet just 0.021 to 0.004 times the size of SITH signatures. This makes DGSP a compelling solution for applications requiring both large-scale user support and signature efficiency in the post-quantum setting. Moreover, DGSP strengthens managerial accountability compared to DGMT by (i) enabling users to verify their certificates generated by the manager and (ii) ensuring public verifiability of the manager's signature attribution. Although SITH claims to support $2^{60}$ users, our analysis reveals that its opening algorithm is highly inefficient, making it impractical to handle such a large number of users.
Our security analysis shows that DGSP achieves unforgeability, anonymity, and traceability in the standard model. We also provide a complete implementation of DGSP. Our experimental study shows that DGSP outperforms existing schemes in terms of efficiency and scalability.
Santiago Arranz-Olmos, Gilles Barthe, Benjamin Grégoire, Jan Jancar, Vincent Laporte, Tiago Oliveira, Peter Schwabe
It is a widely accepted standard practice to implement cryptographic software so that secret inputs do not influence the cycle count. Software following this paradigm is often referred to as "constant-time" software and typically involves following three rules: 1) never branch on a secret-dependent condition, 2) never access memory at a secret-dependent location, and 3) avoid variable-time arithmetic operations on secret data. The third rule requires knowing which arithmetic instructions are variable-time or, conversely, which operations are safe to use on secret inputs. For a long time, this knowledge was based on either documentation or microbenchmarks, but critically, there were never any guarantees for future microarchitectures. This changed with the introduction of the data-operand-independent-timing (DOIT) mode on Intel CPUs and, to some extent, the data-independent-timing (DIT) mode on Arm CPUs. Both Intel and Arm document a subset of their respective instruction sets that is intended to leak no information about its inputs through timing, even on future microarchitectures, provided the CPU is set to run in the dedicated DOIT (or DIT) mode.
In this paper, we present a principled solution that leverages DOIT to enable cryptographic software that is future-proof constant-time, in the sense that it ensures that only instructions from the DOIT subset are used to operate on secret data, even during speculative execution after a mispredicted branch or function return location. For this solution, we build on top of existing security type systems in the Jasmin framework for high-assurance cryptography.
We then use our solution to evaluate the extent to which existing cryptographic software built to be "constant-time" is already secure in this stricter paradigm implied by DOIT, and the performance impact of moving from constant-time to future-proof constant-time.
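To make rules 1 and 2 above concrete, here is the classic branch-free selection idiom, written in Python purely as a sketch of the pattern (Python itself gives no timing guarantees; the point of the paper is to enforce such patterns, restricted to DOIT-subset instructions, with a type system in Jasmin):

    def ct_select(bit, a, b, width=64):
        # Return a if bit == 1 else b without branching on the secret
        # bit (rule 1) and without a secret-indexed load (rule 2).
        full = (1 << width) - 1
        mask = (-bit) & full          # all ones if bit == 1, else zero
        return (a & mask) | (b & ~mask & full)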
Panos Kampanakis, Shai Halevi, Nevine Ebeid, Matt Campagna
AES-GCM has been the status quo for efficient symmetric encryption for decades. As technology and cryptographic applications evolved over time, AES-GCM has posed some challenges to certain use-cases due to its default 96-bit nonce size, 128-bit block size, and lack of key commitment. Nonce-derived schemes are one way of addressing these challenges: Such schemes derive multiple keys from nonce values, then apply standard AES-GCM with the derived keys (and possibly another 96-bit nonce). The approach overcomes the nonce length and data limit issues since each derived key is only used to encrypt a few messages. By itself, the use of nonce-derived keys does not address key commitment, however. Some schemes chose to include a built-in key commitment mechanism, while others left it out of scope.
In this work, we explore efficient key commitment methods that can be added to any nonce-derived scheme in a black-box manner. Our focus is on options that use the underlying block cipher and no other primitive, are efficient, and only use standard primitives which are FIPS-approved. For concreteness we focus here specifically on adding key commitment to XAES-256-GCM, a nonce-scheme originally proposed by Filippo Valsorda, but these methods can be adapted to any other nonce-derived scheme. We propose an efficient CMAC-based key commitment solution, and prove its security in the ideal-cipher model. We argue that adding this solution yields a FIPS-compliant mode, quantify the data and message length limits of this mode and compare this combination to other nonce-derived modes. We also benchmark our key committing XAES-256-GCM performance.
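A generic nonce-derived scheme with a CMAC-based commitment tag has roughly the following shape, sketched in Python with the `cryptography` package; this is an illustration under assumed parameters (a one-block KDF and a hypothetical commitment label), not the paper's exact XAES-256-GCM construction or its proven mode:

    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def nonce_derived_encrypt(key, nonce24, msg, aad):
        # Split an extended 24-byte nonce: the first half feeds the key
        # derivation, the second half is a standard 96-bit GCM nonce.
        n1, n2 = nonce24[:12], nonce24[12:]
        kdf = CMAC(algorithms.AES(key))
        kdf.update(b"\x01" + n1)          # one-block CMAC KDF; a real
        k1 = kdf.finalize()               # scheme derives a 256-bit key
        com = CMAC(algorithms.AES(k1))    # key-commitment tag: CMAC of a
        com.update(b"commit-label")       # fixed label under derived key
        ct = AESGCM(k1).encrypt(n2, msg, aad)
        return com.finalize() + ct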
Pascal Giorgi, Fabien Laguillaumie, Lucas Ottow, Damien Vergnaud
Threshold public-key encryption securely distributes private key shares among multiple participants, requiring a minimum number of them to decrypt messages. We introduce a quantum-resistant threshold public-key encryption scheme based on the code-based Niederreiter cryptosystem that achieves security against chosen-ciphertext attacks. A previous attempt was made recently by Takahashi, Hashimoto, and Ogata (published at DCC in 2023), but we show that it contains a critical security flaw that allows adversaries to exploit malformed ciphertexts to gain information about the secret key.
In this work, we formalize a generic conversion enhancing security of (classical) public-key encryption from one-wayness against passive attacks to indistinguishability against chosen-ciphertext attacks. The conversion uses a non-interactive zero-knowledge argument with strong security properties to ensure ciphertext well-formedness. We then provide an instantiation for Niederreiter encryption based on recent techniques introduced in the "MPC-in-the-head" paradigm. The publicly verifiable validity of ciphertexts makes this scheme suitable for threshold public-key encryption and prevents an attack similar to the one on the Takahashi-Hashimoto-Ogata scheme. To improve the multi-party computation protocol for decryption (involving secure computations on polynomials), we introduce a field-switching technique that significantly reduces the shared secret key size and computational overhead.
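The generic conversion has the following shape (a Python sketch of the structure only; `enc` and `prove` stand for the passively secure encryption and the NIZK argument of well-formedness, both abstract here):

    import secrets

    def encrypt_cca(pk, m, enc, prove):
        # Encrypt under the one-way-secure scheme, then attach a NIZK
        # argument that the ciphertext is well-formed. Since validity is
        # publicly verifiable, threshold decryption servers can reject
        # malformed ciphertexts before leaking any information, which
        # blocks attacks like the one described above.
        r = secrets.token_bytes(32)       # encryption randomness
        c = enc(pk, m, r)
        proof = prove((pk, c), (m, r))    # (statement, witness)
        return c, proof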
Xue Yang, Ruida Wang, Depan Peng, Kun Liu, Xianhui Lu, Xiaohu Tang
This work addresses the hintless single-server Private Information Retrieval (PIR) from the perspective of high-level protocol design and introduces PIRCOR and PIRCOR$^{*}$ that outperform the state-of-the-art PIRANA (Liu et al., IEEE S&P 2024) and YPIR (Menon and Wu, USENIX Security 2024) in terms of the query size and the query generation time. In PIRCOR, we construct an efficient Rotation-based Expanded Binary Code (REBC) to expand $\alpha$ primary codewords into $\beta$ expanded codewords by the Rotation-Mutual-Multiplication operation. By leveraging the innovative REBC, PIRCOR reduces the query size for single-query PIR by a factor of $\mathcal{O}\left(N^{\frac{\delta-1}{\delta}}\right)$ compared to PIRANA, while also avoiding the $\mathcal{O}(N +\frac{|\mathrm{DB}|}{N})$ linear scaling inherent in YPIR ($N$, $\delta$ and $|\mathrm{DB}|$ are the (R)LWE secret dimension, the number of codewords with a Hamming weight of $1$ and the number of database elements). Based on PIRCOR, we further present PIRCOR$^{*}$ by additionally introducing the Rotation-self-Multiplication operation, which achieves a $\mathbf{50\%}$ reduction in rotation operations and a smaller query size when $\delta = 2$.
Building upon PIRCOR and PIRCOR$^{*}$, we propose optimized variants, PIRCOR-op and PIRCOR$^{*}$-op, that further reduce the online response time. Similar to YPIR, which leverages pre-processing, PIRCOR-op and PIRCOR$^{*}$-op allow all rotations and part of the multiplications to be carried out in an offline stage before the query is received. Additionally, we design FHE-operator acceleration with leveled optimization and optimize the implementation of ciphertext rotation.
For 8 KB element retrieval in an 8 GB database, PIRCOR achieves a $\mathbf{10.7\times}$ query size reduction compared to PIRANA. When benchmarked against YPIR, the improvements are even more striking: PIRCOR reduces the query size by $\mathbf{26.8\times}$ and accelerates query generation by a staggering $\mathbf{6,080\times}$. Notably, the enhanced PIRCOR$^{*}$ achieves a $\mathbf{53.6\times}$ reduction in query size compared to YPIR, while improving query generation time by an impressive $\mathbf{12,160\times}$.
Zhengjun Cao, Lihua Liu
We show that the data aggregation scheme [IEEE TDSC, 2023, 20(3), 2011-2024] is flawed, because the signer signs only a part of the data, not the whole data. An adversary can replace the unsigned component to cheat the verifier. To frustrate this attack, all components of the target data should be concatenated and then hashed and signed, so that signature verification proves the integrity of the whole message.
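The suggested fix amounts to the following pattern (a generic Python sketch; the length prefixes are our addition to make the concatenation unambiguous, and `sign` stands for whatever signature algorithm the scheme uses):

    import hashlib

    def sign_all_components(sign, components):
        # Hash every component, length-prefixed so bytes cannot migrate
        # between components, and sign the single digest; replacing any
        # component now invalidates the signature.
        h = hashlib.sha256()
        for part in components:
            h.update(len(part).to_bytes(8, "big"))
            h.update(part)
        return sign(h.digest())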
Vasyl Ustimenko, Tymoteusz Chojecki
Let us assume that one of two trusted parties (the administrator) manages an information system (IS) and the other (the user) is going to use the resources of this IS during a certain time interval. They therefore need to establish a secure user access password to the IS resources via a selected authenticated key exchange protocol. That is, they need to communicate over an insecure channel and secretly construct a cryptographically strong session key that can serve for the establishment of secure passwords, in the form of tuples over a certain alphabet, during the chosen time interval. Nowadays the selected protocol has to be post-quantum secure. We propose an implementation of this scheme in terms of Symbolic Computations. The key exchange protocol is one of the key exchange algorithms of Noncommutative Cryptography, with a platform of multivariate transformations of the affine space over a selected finite commutative ring. The session key is a multivariate map on the affine space. Platforms and multivariate maps are constructed in terms of Algebraic Graph Theory.
28 April 2025
Benedikt Bünz, Alessandro Chiesa, Giacomo Fenzi, William Wang
Proof-carrying data (PCD) is a powerful cryptographic primitive for computational integrity in a distributed setting. State-of-the-art constructions of PCD are based on accumulation schemes (and, closely related, folding schemes).
We present WARP, the first accumulation scheme with linear prover time and logarithmic verifier time. Our scheme is hash-based (secure in the random oracle model), plausibly post-quantum secure, and supports unbounded accumulation depth.
We achieve our result by constructing an interactive oracle reduction of proximity that works with any linear code over a sufficiently large field. We take a novel approach by constructing a straightline extractor that relies on erasure correction, rather than error-tolerant decoding like prior extractors. Along the way, we introduce a variant of straightline round-by-round knowledge soundness that is compatible with our extraction strategy.
Gulshan Kumar, Rahul Saha, Mauro Conti, William J Buchanan
Smart contracts are integral to decentralized systems like blockchains and enable the automation of processes through programmable conditions. However, their immutability, once deployed, poses challenges when addressing errors or bugs. Existing solutions, such as proxy contracts, facilitate upgrades while preserving application integrity. Yet proxy contracts bring issues such as storage constraints and proxy selector clashes, along with complex inheritance management. This paper introduces a novel upgradeable smart contract framework with version control, named "decentraLized vErsion control and updAte manaGement in upgrAdeable smart coNtracts (LEAGAN)." LEAGAN is the first decentralized updatable smart contract framework that employs data separation with Incremental Hash (IH) and the Revision Control System (RCS). It updates multiple contract versions without starting anew for each update, reducing time complexity, while RCS optimizes space utilization through differentiated version control. LEAGAN also introduces the first status contract in upgradeable smart contracts, which reduces overhead while maintaining immutability. In Ethereum Virtual Machine (EVM) experiments, LEAGAN shows 40\% better space utilization, 30\% improved time complexity, and 25\% lower gas consumption compared to state-of-the-art models. It thus stands as a promising solution for enhancing blockchain system efficiency.
Eyal Kushnir, Hayim Shaul
Range counting is the problem of preprocessing a set $P\subset R^d$ of $n$ points, such that given a query range $\gamma$ we can efficiently compute $|P\cap\gamma|$. In the more general range searching problem the goal is to compute $f(P\cap\gamma)$, for some function $f$.
It was already shown (Kushnir et al. PETS'24) how to efficiently answer a range searching query under FHE using a technique they called Copy-and-Recurse to traverse partition trees.
In the range emptiness problem the goal is to compute only whether $P\cap\gamma =\emptyset$. This was shown (in plaintext) to be achievable more efficiently than counting. Range emptiness is interesting in its own right and is also used as a building block in other algorithms.
In this paper we improve and extend the results of Kushnir et al. First, for range searching we reduce the overhead term to the optimal $O(n)$, so for example if the ranges are halfspaces in $R^d$ bounded by hyperplanes then range searching can be done with a circuit of size $O(t\cdot n^{1-1/d+\varepsilon}+n)$, where $t$ is the size of the sub-circuit that checks whether a point lies under a hyperplane.
Second, we introduce a variation of copy-and-recurse that we call leveled copy-and-recurse. With this variation we improve range searching in the 1-dimensional case as well as traversal of other trees (e.g., binary trees and B-trees). Third, we show how to answer range emptiness queries under FHE more efficiently than range counting.
We implemented our algorithms and show that our techniques for range emptiness yield a solution that is $3.6\times$ faster than the previous results for a database of $2^{25}$ points.
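For reference, the plaintext semantics of the two query types, including the size-$t$ sub-circuit for halfspace ranges, is trivial (a Python sketch; the paper's results concern the size of the corresponding FHE circuits):

    def below_hyperplane(point, normal, offset):
        # The size-t sub-circuit from the bound above, in plaintext:
        # does the point lie under the hyperplane <normal, x> = offset?
        return sum(a * x for a, x in zip(normal, point)) <= offset

    def range_count_and_emptiness(P, in_range):
        # |P ∩ γ| and whether P ∩ γ = ∅; under FHE, emptiness admits a
        # smaller circuit than full counting.
        count = sum(1 for p in P if in_range(p))
        return count, count == 0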
Gustaf Ahlgren, Onur Gunlu
Secure rate-distortion-perception (RDP) trade-offs arise in critical applications, such as semantic compression and privacy-preserving generative coding, where preserving perceptual quality while minimizing distortion is vital. This paper studies a framework for secure RDP over noiseless and noisy broadcast channels under strong secrecy constraints. We first characterize the exact secure RDP region for noiseless transmission channels. We then develop an inner bound on the secure RDP region for a memoryless broadcast channel with correlated noise components at the receivers' observations and prove its tightness under a more capable broadcast channel assumption. Our results demonstrate how optimized binning schemes simultaneously achieve high perceptual quality, low distortion, and strong secrecy, illuminating fundamental information-theoretic limits for next-generation trustworthy computation systems.