International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

05 June 2025

Linghe Yang, Jian Liu, Jingyi Cui, Guangquan Xu, Yude Bai, Wei Wang
ePrint Report
Distributed Randomness Beacons (DRBs) provide secure, unbiased random numbers for decentralized systems. However, existing protocols face critical limitations. Most rely on cryptographic assumptions that are vulnerable to quantum attacks, risking long-term security in asynchronous networks where unbounded delays may give attackers time to exploit these weaknesses. Many achieve low beacon generation rates, often below 100 beacons per minute in moderate-scale networks (e.g., Spurt IEEE S&P’22), hindering their use in applications requiring high-throughput randomness. Additionally, traditional Verifiable Secret Sharing (VSS)-based DRBs, using a share-consensus-reconstruct paradigm, are unsuitable for asynchronous networks due to circular dependencies between beacon generation and consensus. Given these limitations, we propose Rubato, the first provably post-quantum secure DRB for asynchronous environments, incorporating a lattice-based batched Asynchronous Verifiable Secret Sharing scheme (bAVSS-PQ). Rubato supports batching of $\mathcal{O}(\lambda^2)$ secrets with communication complexity $\mathcal{O}(\lambda n^3 \log n)$ and tolerates Byzantine faults in up to one-third of the nodes. Integrated with DAG-based consensus protocols like Bullshark or Tusk, its epoch-staggered architecture resolves circular dependencies, enabling efficient and secure randomness generation. Evaluations across 10 to 50 nodes show that Rubato generates between 5200 and 350 beacons per minute (at 10 and 50 nodes, respectively) with per-beacon latencies of 11.60 to 96.37 milliseconds, achieving a consensus throughput of 186,088 transactions per second with a latency of 16.78 seconds at 30 nodes. Rubato offers robust post-quantum security and high performance for small-to-medium-scale decentralized systems.
Mahdi Soleimani, Grace Jia, Anurag Khandelwal
ePrint Report
Many distributed analytics applications that are offloaded to the cloud operate on sensitive data. Even when the computations for such analytics workloads are confined to trusted hardware enclaves and all stored data and network communications are encrypted, several studies have shown that they are still vulnerable to access pattern attacks. Prior efforts towards preventing access pattern leakage often incur network and compute overheads that are logarithmic in dataset size, while also limiting the functionality of supported analytics jobs.

We present Weave, an efficient, expressive, and secure analytics platform that scales to large datasets. Weave employs a combination of noise injection and hardware memory isolation via enclave page caches to reduce the network and compute overheads for oblivious analytics to a constant factor. Weave also employs several optimizations and extensions that exploit dataset- and workload-specific properties to ensure performance at scale without compromising on functionality. Our evaluations show that Weave reduces the end-to-end execution time for a wide range of analytics jobs on large real-world datasets by $4$--$10\times$ compared to prior state-of-the-art while providing strong obliviousness guarantees.
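As a rough illustration of the noise-injection half of this design (a minimal sketch under our own assumptions, not Weave's actual mechanism): one common way to hide group sizes at constant expected overhead is to pad each result group with a non-negative, randomly drawn number of dummy records, e.g. truncated shifted Laplace noise.

    import math, random

    def padded_count(true_count, epsilon=1.0, delta=2**-30):
        # Laplace(1/epsilon) noise as a difference of two exponentials,
        # shifted up and truncated at zero so padding is never negative.
        shift = int(math.ceil(math.log(1 / (2 * delta)) / epsilon))
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        pad = max(0, shift + int(round(noise)))
        return true_count + pad  # emit this many records (real + dummy)

    buckets = {"NY": 120, "SF": 87}
    padded = {k: padded_count(v) for k, v in buckets.items()}
    print(padded)

The constant shift makes truncation occur with probability at most roughly $\delta$, so the observed sizes leak almost nothing about the true sizes while the per-group overhead stays constant.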
Hoeteck Wee, David J. Wu
ePrint Report
We construct distributed broadcast encryption and registered attribute-based encryption (ABE) that support an arbitrary polynomial number of users from the succinct LWE assumption. Specifically, if we take $\lambda$ to be the security parameter and $N$ to be the number of users, we obtain the following:

* We obtain a distributed broadcast encryption scheme where the size of the public parameters, user public/secret keys, and ciphertexts are optimal (i.e., have size $\mathsf{poly}(\lambda, \log N)$). Security relies on the $\mathsf{poly}(\lambda, \log N)$-succinct LWE assumption. Previously, this was only known from indistinguishability obfuscation or witness encryption. All constructions that did not rely on these general tools could only support an a priori bounded number of users.

* We obtain a key-policy registered ABE scheme that supports arbitrary bounded-depth Boolean circuit policies from the $\mathsf{poly}(\lambda, d, \log N)$-succinct LWE assumption in the random oracle model, where $d$ is the depth of the circuit computing the policy. The public parameters, user public/secret keys, and ciphertexts have size $\mathsf{poly}(\lambda, d, \log N)$, which are optimal up to the $\mathsf{poly}(d)$ factor. This is the first registered ABE scheme with nearly-optimal parameters. All previous schemes (including constructions based on indistinguishability obfuscation, witness encryption, or evasive LWE) either have ciphertexts that scale with the policy size and attribute length, or can only support a bounded number of users (with long public parameters and public keys that scale with the number of users).
Tejas Sharma, Ashish Kundu
ePrint Report
Random numbers are often used in cryptographic algorithms, protocols, and in several security and non-security applications. Such usages often apply arithmetic and Boolean operations to pseudorandom numbers, such as addition, XOR, NOT, bit shifts, and other operations, in order to achieve the desired amount of entropy and desired level of security. In this paper, we have reviewed, studied, and analyzed the security properties of these operations on random numbers: do arithmetic, Boolean, and other related operations on cryptographically secure pseudorandom numbers yield cryptographically secure pseudorandom numbers, and do they preserve entropy or lead to its loss?
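A minimal illustration of the question (our example, not the paper's): some of these operations preserve uniformity while others provably destroy entropy.

    import secrets

    a = secrets.randbits(128)   # CSPRNG output
    b = secrets.randbits(128)   # independent CSPRNG output

    c = a ^ b              # XOR of independent uniform values: still uniform
    d = a ^ a              # always 0: reusing the same value destroys all entropy
    e = (a + b) % 2**128   # modular addition of independent values: uniform
    f = a >> 64            # bit shift: at most 64 bits of entropy remain

    assert d == 0

The subtle cases the paper studies sit in the middle ground: operations on correlated or reused pseudorandom values, where entropy can silently degrade.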
Yunqing Sun, Hanlin Liu, Kang Yang, Yu Yu, Xiao Wang, Chenkai Weng
ePrint Report
We introduce the notion of committed vector oblivious linear evaluation (C-VOLE), which allows a party holding a pre-committed vector to generate VOLE correlations with multiple parties on the committed value. It is a unifying tool that is useful in zero-knowledge proofs (ZKPs) of committed values, actively secure multi-party computation, private set intersection (PSI), etc.

To achieve the best efficiency, we design a tailored commitment scheme and matching C-VOLE protocols, both based on the learning parity with noise assumption. In particular, exploiting the structures of the carefully designed LPN-based commitment minimizes the cost of ensuring consistency between the committed vector and VOLE correlation. As a result, we achieve a 28$\times$ improvement over the protocol proposed in prior work (USENIX 2021) that uses ZKP to prove the correct opening of the commitment. We also apply C-VOLE to design a PSI protocol that allows one server to run PSI repeatedly with multiple clients while ensuring that the same set is used across all executions. Compared with the state-of-the-art PSI (CCS 2024) with similar security requirements, our protocol reduces the communication overhead by a factor of 35$\times$.
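For readers unfamiliar with the primitive: a VOLE correlation over a field $\mathbb{F}_p$ gives one party a vector $u$ with tags $w$, and the other a global key $\Delta$ with local keys $v$, such that $w_i = v_i + u_i \cdot \Delta$. A toy trusted-dealer version (illustrative only; the paper realizes this interactively from LPN, with $u$ pre-committed):

    import secrets

    p = 2**61 - 1  # prime field modulus

    def deal_vole(u):
        # Trusted-dealer functionality for a VOLE correlation over F_p:
        # one party gets (u, w), the other (delta, v), with w = v + u*delta.
        delta = secrets.randbelow(p)
        v = [secrets.randbelow(p) for _ in u]
        w = [(vi + ui * delta) % p for ui, vi in zip(u, v)]
        return w, (delta, v)

    u = [secrets.randbelow(p) for _ in range(4)]  # the committed vector
    w, (delta, v) = deal_vole(u)
    assert all(w[i] == (v[i] + u[i] * delta) % p for i in range(len(u)))

The correlation acts as an information-theoretic MAC on $u$, which is what makes it directly useful for ZKPs and PSI on committed values.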
Mingyu Gao, Hongren Zheng
ePrint Report
Homomorphic encryption schemes based on the Ring-Learning-with-Errors problem require accurate ciphertext noise analysis to ensure correctness and security. However, ring multiplications during homomorphic computations make the noise in the result ciphertexts difficult to characterize. Existing average-case noise analyses derive a bound on the noise by either assuming it follows a Gaussian distribution or giving empirical formulae, with strong independence assumptions and the Central Limit Theorem extensively applied. In this work, we question the validity of these methods by showing that the noise exhibits a heavy-tailed distribution, via exact calculation of its variance and kurtosis for both independent and dependent noises. The heavy-tailedness suggests that the failure probability of bounds derived from these methods may not be negligible, and we experimentally demonstrate several cases where the noise growth is underestimated.
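A toy analogue of this effect (our illustration, not the paper's exact calculation): the product of two independent standard Gaussians has variance $1$ but fourth moment $\mathbb{E}[X^4]\,\mathbb{E}[Y^4] = 9$, i.e. kurtosis $9$ versus $3$ for a Gaussian, so a single multiplication of Gaussian-like noise terms already has much heavier tails than a Gaussian model predicts.

    import random

    def kurtosis(samples):
        n = len(samples)
        m = sum(samples) / n
        var = sum((x - m) ** 2 for x in samples) / n
        m4 = sum((x - m) ** 4 for x in samples) / n
        return m4 / var**2

    z = [random.gauss(0, 1) * random.gauss(0, 1) for _ in range(200_000)]
    print(kurtosis(z))  # ~9, vs. 3 for a Gaussian: heavy-tailed product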
Benedikt Auerbach, Miguel Cueto Noval, Boran Erol, Krzysztof Pietrzak
ePrint Report
Continuous Group Key Agreement (CGKA) is the primitive underlying secure group messaging. It allows a large group of $N$ users to maintain a shared secret key that is frequently rotated by the group members in order to achieve forward secrecy and post-compromise security. The group messaging scheme Messaging Layer Security (MLS), standardized by the IETF, makes use of a CGKA called TreeKEM, which arranges the $N$ group members in a binary tree. Here, each node is associated with a public key, each user is assigned one of the leaves, and a user knows the corresponding secret keys from their leaf to the root. To update the key material known to them, a user need only replace the keys at $\log(N)$ nodes, which requires them to create and upload $\log(N)$ ciphertexts. Such updates must be processed sequentially by all users, which for large groups is impractical. To allow for concurrent updates, TreeKEM uses the ``propose and commit'' paradigm, where multiple users can concurrently propose to update (by just sampling a fresh leaf key), and a single user can then commit to all proposals at once.

Unfortunately, this process destroys the binary tree structure as the tree gets pruned and some nodes must be ``blanked'' at the cost of increasing the in-degree of others, which makes the commit operation, as well as future commits, more costly. In the worst case, the update cost (in terms of uploaded ciphertexts) per user can grow from $\log(N)$ to $\Omega(N)$.

In this work we provide two main contributions. First, we show that MLS' communication complexity is bad not only in the worst case but also if the proposers and committers are chosen at random: even if there is just one update proposal for every commit, the expected cost is already over $\sqrt{N}$, and it approaches $N$ as this ratio shifts towards more proposals.

Our second contribution is a new variant of propose and commit for TreeKEM which, for moderate numbers of update proposals per commit, provably achieves an update cost of $\Theta(\log(N))$, assuming the proposers and committers are chosen at random.
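For intuition on the $\log(N)$ baseline: in a complete binary tree, an updating member re-keys its leaf-to-root path, encrypting each new path secret to the sibling (copath) node at that level, so the upload is one ciphertext per level. A minimal sketch:

    import math

    def update_ciphertexts(num_members: int) -> int:
        # One ciphertext per level of the leaf-to-root path: each new
        # path secret is encrypted to the sibling (copath) node's key.
        return math.ceil(math.log2(num_members))

    print(update_ciphertexts(1024))  # 10: the ideal Theta(log N) cost.
    # Blanked nodes force encrypting to several keys per level instead,
    # which is how the cost degrades toward N in the worst case.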
Liangrong Zhao, Hans Schmiedel, Qin Wang, Jiangshan Yu
ePrint Report
Asynchronous common subset (ACS) has been extensively studied since the asynchronous Byzantine fault tolerance (BFT) framework was introduced by Ben-Or, Kelmer, and Rabin (BKR). This line of work (e.g., HoneyBadgerBFT, BEAT, EPIC) uses parallel reliable broadcast (RBC) and asynchronous binary agreement (ABA) instances to reach an agreement on a subset of proposed transactions.

In this paper, we advance the BKR paradigm by presenting Janus, the first hybrid ACS protocol leveraging trusted hardware components. Janus is the first ACS protocol that tolerates a minority of Byzantine processes and has $O(n^2)$ message complexity. Supported by trusted hardware components, we introduce a provable broadcast primitive to replace RBC and develop a resilient binary agreement protocol. Messages for concurrent instances of agreement are aggregated into vectors. Our experimental results demonstrate significant performance improvements over predominant ACS constructions, with a 92%+ increase compared to HoneyBadgerBFT and a 47%+ increase compared to BEAT. Additionally, we provide a comparison with open-source hybrid BFT protocols that operate under a partially synchronous network, highlighting a performance improvement of 49%+ over previous hybrid protocols that also tolerate a Byzantine minority (e.g., MinBFT and Damysus).
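For background, the trusted component in this line of hybrid protocols is typically a variant of an attested monotonic counter (as in MinBFT's USIG): binding every outgoing message to a fresh, strictly increasing counter value makes equivocation detectable, which is what permits tolerating $t < n/2$ faults instead of $t < n/3$. A minimal sketch of such a counter (illustrative; not Janus's exact interface, with an HMAC standing in for the TEE's attested signature):

    import hmac, hashlib

    class TrustedCounter:
        # USIG-style signed monotonic counter; the key never leaves the
        # trusted component, hosts only see (counter, tag) certificates.
        def __init__(self, key: bytes):
            self._key = key
            self._ctr = 0

        def create_ui(self, msg: bytes):
            self._ctr += 1  # strictly increasing: one value per message
            tag = hmac.new(self._key, self._ctr.to_bytes(8, "big") + msg,
                           hashlib.sha256).digest()
            return self._ctr, tag

        def verify_ui(self, msg: bytes, ctr: int, tag: bytes) -> bool:
            want = hmac.new(self._key, ctr.to_bytes(8, "big") + msg,
                            hashlib.sha256).digest()
            return hmac.compare_digest(want, tag)

Since no two messages can carry the same counter value, a process that sends conflicting messages for the same slot is caught by comparing certificates.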
Liangrong Zhao, Jérémie Decouchant, Joseph K. Liu, Qinghua Lu, Jiangshan Yu
ePrint Report
Byzantine Fault Tolerance (BFT) consensus protocols with trusted hardware assistance have been extensively explored for their improved resilience, tolerating more faulty processes. Nonetheless, the potential of trusted hardware has been scarcely investigated in leaderless BFT protocols. RedBelly is regarded as the first blockchain network whose consensus is based on a truly leaderless BFT algorithm. This paper proposes a trusted hardware-assisted leaderless BFT consensus protocol, offering a hybrid solution for the set BFT problem defined in the RedBelly blockchain. Drawing on previous studies, we present two crucial trusted services: the counter and the collector. Based on these two services, we introduce two primitives to formulate our leaderless BFT protocol: a hybrid verified broadcast (VRB) protocol and a hybrid binary agreement. The hybrid VRB protocol enhances the hybrid reliable broadcast protocol by integrating a verification function. This addition ensures that a broadcast message is verified not only for authentication but also for the correctness of its content. Our hybrid BFT consensus is integrated with these broadcast protocols to deliver binary decisions on all proposals. We prove the correctness of the proposed hybrid protocol and demonstrate its enhanced performance compared to the prior trusted BFT protocol.

04 June 2025

Junru Li, Yifan Song
ePrint Report
In this work, we consider secure multiparty computation (MPC) in the asynchronous network setting. MPC allows $n$ parties to compute a public function on their private inputs against an adversary corrupting at most $t$ of them. We consider both communication complexity and round complexity of asynchronous MPC (AMPC) with the optimal resilience $n=3t+1$.

Without fully homomorphic encryption, the best-known result in this setting is achieved by Coretti, Garay, Hirt, and Zikas (ASIACRYPT 2016), which requires $O(|C|n^3\kappa)$ bits of communication assuming one-way functions, where $\kappa$ is the security parameter. On the other hand, the best-known non-constant-round AMPC by Goyal, Liu, and Song (CRYPTO 2024) achieves $O(|C|n)$ communication even in the information-theoretic setting. In this work, we give the first construction of a constant-round AMPC with $O(|C|n\kappa)$ bits of communication that achieves malicious security with abort, assuming random oracles. We provide new techniques for adapting the MPC-in-the-head framework to the asynchronous network to compute a constant-size garbled circuit.
Chengcheng Chang, Meiqin Wang, Wei Wang, Kai Hu
ePrint Report
GIFT, including GIFT-64 and GIFT-128, is a family of lightweight block ciphers with outstanding implementation performance and high security, and it is a popular underlying primitive for many AEADs such as SUNDAE. Currently, differential cryptanalysis is the best key-recovery attack on both ciphers, but the attacks have been stuck at 21 and 27 rounds for GIFT-64 and GIFT-128, respectively. Recently, Beyne and Rijmen proposed the quasidifferential transition matrix for differential cryptanalysis at CRYPTO 2022 and showed that the fixed-key probability of a differential (characteristic) can be expressed as the sum of correlations of all quasidifferential trails corresponding to this differential (characteristic). As pointed out by Beyne and Rijmen in their paper, the quasidifferential methodology is useful for identifying weak-key differential attacks.

In this paper, we apply Beyne and Rijmen's method to GIFT. Some differential characteristics with small (average) probabilities can have much larger probabilities when weak-key conditions hold; improved weak-key differential attacks on GIFT-64 and GIFT-128 are thus obtained. For GIFT-64, the probability of a 13-round differential is improved from $2^{-62.06}$ to $2^{-57.82}$ under 4 bits of weak-key conditions, yielding an improved differential key-recovery attack on 21-round GIFT-64 with $2^{117.42}/2^{64}$ time/data complexities; the probability of a 13-round multiple differential (containing 33 characteristics) is improved from $2^{-58.96}$ to $2^{-55.67}$ under 4 bits of weak-key conditions, yielding an improved multiple differential key-recovery attack on 21-round GIFT-64 with $2^{123.27}/2^{64}$ time/data complexities. For GIFT-128, the probability of a 20-round differential is improved from $2^{-121.83}$ to $2^{-114.77}$ under 6 bits of weak-key conditions; the probability of a 21-round multiple differential (containing 2 differentials) is improved from $2^{-128.38}$ to $2^{-122.77}$ under 4 bits of weak-key conditions. Improved (multiple) differential weak-key key-recovery attacks are obtained on 27 and 28 rounds of GIFT-128 with $2^{115.77}$/$2^{115.77}$ and $2^{123.77}$/$2^{123.77}$ time/data complexities, respectively. As far as we know, this is the first time a (weak-key) key-recovery attack reaches 28 rounds of GIFT-128.

Additionally, as an independent interest, we perform the first differential attack on SUNDAE. The differential used in this attack is checked with quasidifferential trails, so its probability is reliable. Our attack is nonce-respecting and has significantly better complexities than the currently best attack.
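For context, the fixed-key decomposition behind the quasidifferential method has, schematically, the form (our paraphrase of Beyne and Rijmen's result, not this paper's notation)

$$p_K(\text{characteristic}) \;=\; \sum_{\theta} (-1)^{\langle u_\theta,\, K\rangle}\, |c_\theta|,$$

where the sum ranges over the quasidifferential trails $\theta$ compatible with the characteristic, $|c_\theta|$ is the magnitude of the trail correlation, and the sign depends linearly on the key bits $K$. Imposing a few weak-key conditions fixes those signs so that the dominant trails add constructively, which is exactly how the probabilities above improve.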
Input-Output Global
Job Posting
What the role involves:

Cryptographic Engineers contribute to the design, implementation, and integration of secure cryptographic protocols across Cardano-related initiatives, including Cardano Core Cryptographic Primitives, Mithril, ALBA, Leios, etc. This role bridges applied research and engineering, focusing on translating cutting-edge cryptographic designs into robust, production-grade systems. The cryptography engineer will collaborate closely with researchers, protocol designers, software architects, product managers, and QA teams to ensure cryptographic correctness, performance, and system alignment. A strong emphasis is placed on high-assurance coding, cryptographic soundness, and practical deployment readiness.

  • Work independently and collaborate with distributed teams across multiple time zones, showing initiative and ownership over tasks
  • Design and implement cryptographic constructions, including digital signatures, zero-knowledge proofs, accumulators, commitment schemes
  • Work independently on software development tasks, demonstrating proactive problem-solving skills
  • Develop and maintain cryptographic libraries (primarily in Rust and Haskell, occasionally in C) with an emphasis on safety, performance, clarity, and auditability
  • Translate advanced cryptographic concepts from academic research into well-structured, reliable implementations that will be used in production systems
  • Contribute to cryptographic design discussions, parameter tuning, performance benchmarking, particularly for elliptic curve and zk-based constructions
  • Analyze and validate protocol security, ensuring soundness, liveness, & resistance to practical adversaries
  • Collaborate with researchers, software architects, & formal methods specialists to align implementation with protocol specifications
  • Write & maintain clear documentation, including developer guides and internal design notes
  • Troubleshoot, debug, optimize cryptographic code & its interactions w/ broader systems
  • Keep up to date with recent cryptographic advancements & assess their relevance to the project
Closing date for applications:

Contact: Marios Nicolaides

More information: https://apply.workable.com/io-global/j/70FC5D8A0C/


03 June 2025

Rutchathon Chairattana-Apirom, Nico Döttling, Anna Lysyanskaya, Stefano Tessaro
ePrint Report
Anonymous rate-limited tokens are a special type of credential that can be used to improve the efficiency of privacy-preserving authentication systems like Privacy Pass. In such a scheme, a user obtains a "token dispenser" by interacting with an issuer, and the dispenser allows the user to create up to a pre-determined number $k$ of unlinkable and publicly verifiable tokens. Unlinkable means that one should not be able to tell that two tokens originate from the same dispenser, but also that they cannot be linked to the interaction that generated the dispenser. Furthermore, we can limit the rate at which these tokens are created by linking each token to a context (e.g., the service we are authenticating to) and imposing a limit $N \leq k$ such that seeing more than $N$ tokens for the same context will reveal the identity of the user. Constructions of such tokens were first given by Camenisch, Hohenberger and Lysyanskaya (EUROCRYPT '05) and Camenisch, Hohenberger, Kohlweiss, Lysyanskaya, and Meyerovich (CCS '06).

In this work, we present the first construction of \emph{everlasting} anonymous rate-limited tokens, for which unlinkability holds against computationally unbounded adversaries, whereas other security properties (e.g., unforgeability) remain computational. Our construction relies on pairings. While several parameters in our construction unavoidably grow with $k$, the key challenge we resolve is ensuring that the complexity of dispensing a token is independent of the parameter $k$.

We are motivated here by the goal of providing solutions that are robust to potential future quantum attacks against the anonymity of previously stored tokens. A construction based on post-quantum secure assumptions (e.g., based on lattices) would be rather inefficient---instead, we take a pragmatic approach dispensing with post-quantum security for properties not related to privacy.
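The "more than $N$ shows reveal the user" mechanism can be illustrated by the classic polynomial identity-escrow trick from this line of work (a toy sketch over a prime field, not this paper's pairing-based construction): each token for a context reveals one point on a degree-$N$ polynomial whose constant term is the user's identity, so any $N$ points leave the identity hidden while $N+1$ determine it by interpolation.

    import secrets

    p = 2**61 - 1  # prime field

    def f_eval(coeffs, x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod p
            y = (y * x + c) % p
        return y

    def make_dispenser(identity, N):
        # Degree-N polynomial with f(0) = identity; in a real scheme the
        # other coefficients would come from a PRF keyed per context.
        return [identity] + [secrets.randbelow(p) for _ in range(N)]

    def show_token(coeffs):
        # Fresh evaluation point (distinct with overwhelming probability).
        x = secrets.randbelow(p - 1) + 1  # never evaluate at 0
        return x, f_eval(coeffs, x)

    def recover_identity(points):
        # Lagrange interpolation at x = 0; needs >= deg + 1 points.
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, -1, p)) % p
        return total

    N = 3
    d = make_dispenser(identity=42, N=N)
    tokens = [show_token(d) for _ in range(N + 1)]  # one show too many
    assert recover_identity(tokens) == 42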
Shuo Peng, Kai Hu, Jiahui He, Meiqin Wang
ePrint Report
Ascon, a family of algorithms that supports hashing and Authenticated Encryption with Associated Data (AEAD), is the final winner of the NIST Lightweight Cryptography Project. As a research hotspot, Ascon has received substantial third-party security evaluation. Among all the results on Ascon-128 (the primary AEAD recommendation), key recovery can only be achieved by reducing the initialization phase to 7 rounds or fewer, whether or not the attack violates the security claims made by the designers (i.e., by misusing the nonce or exceeding the data limit of $2^{64}$).

In this paper, we improve the key recovery attacks on Ascon-128 using the cube attack method, in both the misuse-free and nonce-misuse settings. First, we present a faster method to recover the superpolies for a 64-dimensional cube in the output bits of the 7-round initialization, enabling us to recover the secret key with a time complexity of $2^{95.96}$ and a data complexity of $2^{64}$. Our 7-round key recovery attack, based on the full key space, greatly improves the time complexity, making it the best result to date. Additionally, we utilize several techniques to extend state recovery to key recovery, answering the open problem of transitioning from full state recovery in the encryption phase to key recovery for Ascon-128 (ToSC Vol. 4, 2022). By combining encryption-phase state recovery with initialization-phase key recovery, we can achieve 8-round and 9-round initialization-phase key recovery in the nonce-misuse scenario, with time complexities of $2^{101}$ and $2^{123.92}$, respectively. This represents an improvement of two rounds over previous results in the misused setting.

Our first key recovery attack is also applicable to Ascon-128a, achieving the same result. In cases where the full state prior to the encryption phase can be recovered in other Ascon AEAD modes, our second key recovery attack will also be useful. It is worth noting that this work does not threaten the security of the full 12-round Ascon, but we expect that our results provide new insights into its security.
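As background on the technique: a cube attack views an output bit as a polynomial $f(k, v)$ in key and public (nonce/IV) variables, fixes a set $I$ of public "cube" variables, and sums the output over all of their assignments. Every monomial missing some cube variable cancels, leaving the superpoly:

$$\bigoplus_{x \in \{0,1\}^{|I|}} f(k, x, v') \;=\; p_I(k, v'),$$

a (hopefully low-degree) polynomial in the key bits. A 64-dimensional cube therefore costs $2^{64}$ data to sum; the improvement above lies in recovering the superpolies faster, not in cheaper cube summations.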
Matilda Backendal, David Balbás, Miro Haller
ePrint Report
End-to-end encryption allows data to be outsourced and stored on an untrusted server, such as in the cloud, without compromising data privacy. In the setting where this data is shared by a group of users, all members share access to the same static key material used for data encryption. When the group membership changes, access control is enforced only by the server: security breaches or compelled disclosure would allow even a removed member to decrypt the current shared data.

We propose to move away from static keys and instead use a group key progression (GKP) scheme, a novel primitive that enables a dynamic group of users to agree on a persistent sequence of keys while keeping a compact local state. GKP ensures that group members can only derive keys within a certain interval of the sequence, a notion that we call interval access control (IAC), and also provides post-compromise security. Our GKP construction, called Grappa, combines continuous group key agreement (CGKA, by Alwen et al., 2020) with a new abstraction called an interval scheme. The latter is a symmetric-key primitive that can derive a sequence of keys from a compact state while preserving IAC. We explore different interval scheme constructions and simulate their storage and communication costs when used in group settings. The most efficient of them is a generalization of dual key regression (Shafagh et al., 2020), which we formalize and prove secure. Overall, our protocols offer a practical and robust solution to protect persistent data shared by a group.
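A minimal sketch of the dual key regression idea that this last construction generalizes (our simplification, assuming SHA-256): a forward hash chain and a backward hash chain sandwich the key sequence, and handing out the pair (fwd[a], bwd[b]) grants exactly the keys with indices in $[a, b]$.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def chain(seed: bytes, n: int):
        out = [seed]
        for _ in range(n):
            out.append(H(out[-1]))
        return out

    n = 16
    fwd = chain(b"fwd-seed", n)        # fwd[i+1] = H(fwd[i])
    bwd = chain(b"bwd-seed", n)[::-1]  # bwd[i]   = H(bwd[i+1])

    def key(i):
        return H(fwd[i] + bwd[i])

    # From fwd[a] one can recompute fwd[i] for all i >= a (hash forward);
    # from bwd[b] one can recompute bwd[i] for all i <= b. So the token
    # (fwd[a], bwd[b]) yields key(i) exactly on the interval a <= i <= b.
    a, b = 4, 9
    token = (fwd[a], bwd[b])

This gives interval access control from hashes alone; the interval schemes in the paper additionally address state compaction and security in the group setting.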
Andrew Huang, Yael Tauman Kalai
ePrint Report
In this work, we show that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the parallel repeated verifier accepts if and only if at least $t$ of the executions are accepted (for some threshold $t$). Prior to this work, these results were known only when the cheating prover was assumed to be classical.

We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the verifier and communication may be quantum. We consider only protocols where the verifier is classical, but obtain a simplified analysis and extend it to the more general setting of threshold verifiers.
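To recall the information-theoretic benchmark these results match: if a single execution has soundness error $\varepsilon$ and the $k$ repetitions behaved independently, a threshold-$t$ verifier would accept a false statement with probability

$$\Pr[\mathrm{Bin}(k, \varepsilon) \ge t] \;\le\; \min\Big(\binom{k}{t}\varepsilon^{t},\; e^{-2k\,(t/k - \varepsilon)^2}\Big) \quad \text{for } t/k > \varepsilon,$$

by a union bound and the Chernoff-Hoeffding inequality; for $t = k$ this is just $\varepsilon^k$. The difficulty for arguments is that a cheating prover may correlate its answers across the $k$ copies, and the point of such results is that this cannot help much, here even for quantum provers.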
Sanjam Garg, Aarushi Goel, Abhishek Jain, Bhaskar Roberts, Sruthi Sekar
ePrint Report
Collaborative zk-SNARKs (Ozdemir and Boneh, USENIX’22) are a multiparty variant of zk-SNARKs where multiple, mutually distrustful provers, each holding a private input, jointly compute a zk-SNARK using their combined inputs. A sequence of works has proposed efficient constructions of collaborative zk-SNARKs using a common template that involves designing secure multiparty computation (MPC) protocols to emulate a zk-SNARK prover without making non-black-box use of cryptography. To achieve security against malicious adversaries, these works adopt compilers from the MPC literature that transform semi-honest MPC into malicious-secure MPC.

In this work, we revisit this design template.

• Pitfalls: We demonstrate two pitfalls in the template, which can lead to a loss of input privacy. We first show that it is possible to compute collaborative proofs on invalid witnesses, which in turn can leak the inputs of honest provers. Next, we show that using state-of-the-art malicious security compilers as-is for proof computation is insecure in general. Finally, we discuss mitigation strategies.

• Malicious Security Essentially for Free: As our main technical result, we show that in the honest-majority setting, one can forgo the malicious security checks performed by state-of-the-art malicious security compilers during collaborative proof generation of several widely used zk-SNARKs. In other words, we can avoid the overheads of malicious security compilers, enabling faster proof generation.

To the best of our knowledge, this is the first example of non-trivial computations where semi-honest MPC protocols achieve malicious security. The observations underlying our positive results are general and may have applications beyond collaborative zk-SNARKs.

02 June 2025

Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, Adam Dziedzic
ePrint Report
Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step.

We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest which can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible: while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state-of-the-art for sampling from the discrete Gaussian distribution in MPC.
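To illustrate the flavor of table-lookup sampling in the clear (a plaintext sketch under our own assumptions; the protocol performs such lookups under MPC, and this is not its exact construction): quantize the target distribution's CDF into an integer table, then map uniform random bits to an outcome via comparisons.

    import math, secrets
    from bisect import bisect_right

    def build_cdf_table(sigma=4.0, tail=6, scale=2**16):
        # Quantized CDF of a centered discrete Gaussian, truncated to
        # +/- tail*sigma, with 'scale' quantization levels.
        lo, hi = -tail * int(sigma), tail * int(sigma)
        w = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(lo, hi + 1)]
        total = sum(w)
        cdf, acc = [], 0.0
        for wi in w:
            acc += wi
            cdf.append(int(acc / total * scale))
        cdf[-1] = scale  # guard against float rounding at the top end
        return lo, cdf

    def sample(lo, cdf, scale=2**16):
        u = secrets.randbelow(scale)      # uniform random index
        return lo + bisect_right(cdf, u)  # first entry exceeding u

    lo, cdf = build_cdf_table()
    print([sample(lo, cdf) for _ in range(8)])

The quantization error is what makes the sampled distribution only statistically close to the target, matching the paper's framing of sampling from close approximations of arbitrary discrete distributions.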
Piotr Mikołajczyk, Parisa Hassanizadeh, Shahriar Ebrahimi
ePrint Report
As generative models continue to evolve, verifying the authenticity, provenance, and integrity of digital media has become increasingly critical—particularly for domains like journalism, digital art, and scientific documentation. In this work, we present a decentralized verifiable media ecosystem for managing, verifying, and transacting authentic digital media using zero-knowledge proofs (ZKPs). Building on VIMz (Dziembowski et al., PETS'25), we extend the framework in three key directions. First, we generalize the model to support arbitrary image regions, enabling selective transformations such as redaction and regional blurring—features commonly required in privacy-preserving applications. Second, we introduce performance optimizations that yield up to an 18% improvement in off-chain proof generation and enhance the framework to support cost-efficient on-chain verification. Third, we design and implement a modular smart contract architecture to support a wide range of decentralized media applications. As a flagship use case, we develop a decentralized media marketplace that enables permissionless licensing, ownership transfer, and verifiable attribution. In this setting, users can share transformed media—such as cropped, blurred, or resized previews—alongside ZKPs that prove derivation from a signed original, eliminating the need to trust the seller. Unlike prior fair exchange protocols, which rely on trusted descriptions or encrypted payload delivery, our system enables verifiable public previews and origin-bound proofs without revealing the full content. This approach unlocks new applications beyond marketplaces, including automated infringement dispute resolution and photography contests with verifiable criteria.
Stefan Dziembowski, Shahriar Ebrahimi, Omkar Gavhane, Susil Kumar Mohanty
ePrint Report
Payment Channel Networks (PCNs) enhance blockchain scalability by enabling off-chain transactions. However, repeated unidirectional multi-hop payments often cause channel imbalance or depletion, limiting scalability and usability. Existing rebalancing protocols, such as Horcrux [NDSS’25] and Shaduf [NDSS’22], rely on on-chain operations, which hinders efficiency and broad applicability. We propose Universal Channel Rebalancing (UCRb), a blockchain-agnostic, fully off-chain framework that ensures correct behavior among untrusted participants without on-chain interaction. UCRb incorporates the following core innovations: (1) a fair and reliable incentive-compatible mechanism that encourages voluntary user participation in off-chain channel rebalancing, (2) integration of Pedersen commitments to achieve atomic off-chain payments and rebalancing operations, while ensuring balance security, and (3) zero-knowledge proofs to enable privacy-preserving channel initialization and coin shifting, ensuring that user identities and fund allocations remain hidden throughout the process.

We evaluate UCRb using a real-world Lightning Network dataset and compare its performance against state-of-the-art solutions, including Horcrux, Shaduf, and Revive [CCS'17]. UCRb improves the success ratio by 15% to 50% while reducing the required user deposits by 72%--92%, and it maintains an almost negligible rate of channel depletion. Additionally, the long-term performance of UCRb is roughly 1.5 times its short-term performance, suggesting that continuous operation leads to improved efficiency. We implement a prototype of the UCRb smart contracts and demonstrate its practicality through extensive evaluation. As \texttt{CoinShift} operations require no on-chain interaction, the protocol incurs minimal gas costs. For instance, opening and closing channels with 10 neighbors costs only 130K--160K gas—significantly lower than comparable solutions.
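For background on the Pedersen commitments used above (a toy sketch with insecure parameters; a real deployment uses a large prime-order group): it is their additive homomorphism that lets committed balances be shifted atomically without ever being opened.

    # Toy subgroup of order q = 11 in Z_23^*; g, h must have an unknown
    # discrete-log relation for the commitment to be binding.
    p, q, g, h = 23, 11, 4, 9

    def commit(v, r):
        return pow(g, v % q, p) * pow(h, r % q, p) % p

    # Homomorphism: Com(v1, r1) * Com(v2, r2) = Com(v1 + v2, r1 + r2),
    # so a balance update can be verified on commitments alone.
    c1, c2 = commit(3, 5), commit(2, 7)
    assert c1 * c2 % p == commit(3 + 2, 5 + 7)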