International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

19 September 2025

Raitenhaslach, Germany, 7 September - 11 September 2026
Event Calendar
Event date: 7 September to 11 September 2026
Saint-Malo, France, 14 April - 16 April 2026
Event Calendar
Event date: 14 April to 16 April 2026
Submission deadline: 31 October 2025
Notification: 12 January 2026
Nam Tran, Khoa Nguyen, Dongxi Liu, Josef Pieprzyk, Willy Susilo
ePrint Report
The study of lattice-based group signatures has been a prominent research direction since 2010. While recent advances in the field have yielded schemes in the random oracle model with strong security properties and nearly practical efficiency, the current state of affairs for lattice-based group signatures in the standard model is still much less satisfactory. Existing schemes, proposed by Katsumata and Yamada (EUROCRYPT'19) or implied by generic non-interactive zero-knowledge proofs for NP (by Peikert and Shiehian at CRYPTO'19 and by Waters at STOC'24), either only fulfil a weak notion of anonymity called selfless anonymity, or require a strong lattice assumption, or suffer from extremely large signatures and/or public keys.

This work aims to enhance the state of affairs for lattice-based group signatures in the standard model. We provide improved constructions that simultaneously achieve: (i) signature and public key sizes significantly smaller than those of known schemes; (ii) full anonymity in the CPA and CCA senses; (iii) security based on the standard SIS and LWE assumptions with polynomial approximation factors with respect to worst-case problems in general lattices.

Our design approach slightly departs from that of existing pairing-based and lattice-based constructions. In the design process, we adapt and develop several lattice-based cryptographic ingredients that may be of independent interest. At the heart of our constructions is a reasonably efficient non-interactive zero-knowledge proof system for relations typically appearing in advanced privacy-preserving lattice-based cryptographic protocols. These relations are addressed by a trapdoor $\Sigma$-protocol with an inverse-polynomial soundness error, which is made non-interactive via the standard-model Fiat-Shamir transform of Canetti et al. (STOC'19) and a compiler by Libert et al. (ASIACRYPT'20).
Xisen Tian, Paul Westland
ePrint Report
Key agreement is the cornerstone of any secure communications channel, whether over the traditional internet or in the delay-tolerant networks used in space communications. However, space systems that rely on pre-shared keys face insurmountable limitations in both security and scalability. A single key compromise can expose all past and future encrypted communications, and the static nature of pre-shared keys prevents dynamic group membership, making it difficult to add or remove devices without invalidating entire key sets. To address these challenges, the Messaging Layer Security (MLS) protocol, a recently introduced internet standard for asynchronous group key establishment, offers promising capabilities. Its potential to provide efficient and dynamic key agreement for federated interplanetary architectures (e.g., government-commercial collaborations) has been previously recognized, particularly with the integration of MLS with the Bundle Protocol Security (BPSec) framework. In this work, we present the first empirical findings on the integration of MLS with BPSec in a realistic space communications scenario and provide insights into its deployment in future space architectures.
Stefan Dziembowski, Grzegorz Fabiański, Daniele Micciancio, Rafał Stefański
ePrint Report
We present a formally verified (in Lean 4) framework for translating symbolic cryptographic proofs into computationally sound ones. Symbolic cryptography is a well-established field that allows reasoning about cryptographic protocols in an abstract way and is relatively easy to verify using proof assistants. Unfortunately, it often lacks a connection to the computational aspects of real-world cryptography. Computationally sound cryptography, on the other hand, captures this connection much better, but it is often more complex, less accessible, and much harder to verify formally. Several works in the past have provided a bridge between the two, but, to our knowledge, none of them has been implemented in a proof assistant.

We close this gap by formalizing the translation from symbolic to computationally sound cryptography in Lean 4. Our framework is based on the work of Micciancio (Eurocrypt 2010) and Li and Micciancio (CSF 2018), which builds on the idea of using co-induction (instead of induction) for reasoning about an adversary's knowledge in a symbolic setting. Our work encompasses (1) the formalization of the symbolic cryptography framework, (2) the formalization of the computationally sound cryptography framework, and (3) the formalization of the translation between the two. We also provide (4) an extended example of circuit garbling, a well-known cryptographic protocol frequently used in secure multi-party computation.
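
As a rough illustration of the symbolic side only, the kind of term algebra and derivability relation such a formalization starts from might look as follows in Lean 4; the names and constructors below are hypothetical, and the framework's actual definitions (notably its co-inductive treatment of adversarial knowledge) differ.

-- Illustrative sketch only (not the paper's actual definitions): a toy
-- symbolic term algebra together with a Dolev-Yao-style derivability
-- relation, of the kind a symbolic framework is typically built around.
inductive Term where
  | nonce : Nat → Term                 -- atomic randomness / secrets
  | key   : Nat → Term                 -- symbolic keys
  | pair  : Term → Term → Term         -- concatenation of two terms
  | enc   : Term → Term → Term         -- `enc k m`: m encrypted under k
  deriving Repr, DecidableEq

/-- Terms an adversary can derive from a list `S` of observed terms. -/
inductive Derives (S : List Term) : Term → Prop where
  | ax     : ∀ {t}, t ∈ S → Derives S t
  | mkPair : ∀ {t u}, Derives S t → Derives S u → Derives S (.pair t u)
  | fst    : ∀ {t u}, Derives S (.pair t u) → Derives S t
  | snd    : ∀ {t u}, Derives S (.pair t u) → Derives S u
  | mkEnc  : ∀ {k m}, Derives S k → Derives S m → Derives S (.enc k m)
  | dec    : ∀ {k m}, Derives S (.enc k m) → Derives S k → Derives S m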

We believe that our work will serve as a foundation for future research in the area of formal verification of cryptographic protocols, as it enables reasoning about cryptographic protocols more abstractly while still providing a formally verified connection to the computational aspects of real-world cryptography.
Ran Gelles, Carmit Hazay, Manuj Mukherjee, Jaspal Singh, Arun Yeragudipati, Vassilis Zikas
ePrint Report
The study of efficient multi-party computation (MPC) has been a central focus in the cryptographic literature, producing a wide range of innovative techniques that have substantially improved the practicality of MPC in real-world applications. However, the vast majority of this work assumes reliable communication channels and neglects the impact of network-level noise—a fundamental characteristic of modern communication systems. Although classical error-correcting codes can be used to address such noise, they often introduce significant overhead, potentially offsetting the efficiency gains provided by advanced cryptographic methods. To the best of our knowledge, existing efforts to circumvent this limitation rely on alternative techniques restricted to the two-party setting, with approaches that do not naturally generalize to the multi-party case.

We initiate the study of information-theoretic MPC over noisy networks for more than two parties. We construct an MPC protocol that is secure against a semi-honest majority with optimal communication overhead in the circuit size: it computes any circuit $\mathcal{C}$ with communication complexity proportional to its size, i.e., $O(|\mathcal{C}|)$. Our protocol departs from the classical MPC methodology by introducing a novel multi-party interactive-coding scheme that remedies channel noise through a new rewinding technique, while maintaining the properties required for secure computation. The ideas behind our interactive coding can also be applied to other MPC scenarios to eliminate channel noise. In fact, our treatment uncovers (and resolves) a subtle flaw in the constant-rate 2PC construction of Gelles et al. [SCN '18] over noisy channels.
Rishabh Bhadauria, Alexander R. Block, Prantar Ghosh, Justin Thaler
ePrint Report
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier.

We provide strong evidence that several polynomial commitment schemes have (nearly) optimal verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol.

We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. Specifically, we show that for univariate/multilinear polynomials of size $N=2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2016) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.

Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.

18 September 2025

Zoushaojie Jiang, An Wang, Yaoling Ding, Annv Liu, Zheng Liu, Jing Yu, Liehuang Zhu
ePrint Report
Deep learning-based profiled side-channel analysis (SCA) targeting cryptographic implementations has attracted significant attention in recent years. Generalizable deep learning mechanisms, such as contrastive learning-based profiled SCA (CL-SCA), can enhance the effectiveness of SCA without reliance on specific network architectures and hyperparameters. This independence enables robust adaptation across diverse attack scenarios. Nonetheless, CL-SCA relies heavily on data augmentation and may mistakenly push apart physical leakage traces that should belong to the same class, which interferes with the extraction of discriminative features crucial for SCA performance. To address these limitations, we propose a profiled SCA method based on supervised contrastive learning, called SupCL-SCA. This method enhances the learning of discriminative features that facilitate key recovery by leveraging supervised information to guide the extraction of similarities in feature space. Compared with state-of-the-art methods, SupCL-SCA not only retains their general applicability and inherent advantages but also eliminates reliance on complex data augmentation and multi-stage training. Additionally, we propose a cosine distance-based Intra-Inter Distance Ratio (IIDR) metric to assess the discriminative capability of models in deep learning-based profiled SCA methods. We evaluate SupCL-SCA on three publicly available datasets covering different implementations and platforms. Experimental results show that SupCL-SCA consistently reduces the number of traces required to recover the key compared to the original methods, demonstrating enhanced attack capability.
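
The IIDR metric is only named in the abstract; as a rough illustration, under the assumption that it relates average intra-class to inter-class cosine distances of learned features, a minimal sketch might look as follows (the paper's actual definition may differ).

# Illustrative sketch only: one plausible reading of a cosine-distance
# intra/inter distance ratio over learned features; not the paper's
# actual IIDR definition.
import numpy as np

def cosine_dist(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def iidr(features: np.ndarray, labels: np.ndarray) -> float:
    """Average intra-class distance divided by average inter-class distance.
    Lower values suggest more discriminative features."""
    intra, inter = [], []
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = cosine_dist(features[i], features[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return float(np.mean(intra)) / float(np.mean(inter))

# Example: random 8-dimensional features for two classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))
labs = np.array([0] * 10 + [1] * 10)
print(iidr(feats, labs))
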
Yingjie Lyu, Zengpeng Li, Hong-Sheng Zhou, Xudong Deng
ePrint Report
We propose the first two-round multi-party signing protocol for the Elliptic Curve Digital Signature Algorithm (ECDSA) in the threshold-optimal setting, reducing the number of rounds by one compared to the state of the art (Doerner et al., S&P '24). We also resolve the security issue of presigning pointed out by Groth and Shoup (Eurocrypt '22), evading, for the first time among threshold-optimal schemes, a security loss that increases with the number of pre-released, unused presignatures.

Our construction builds on Non-Interactive Multiplication (NIM), a notion proposed by Boyle et al. (PKC '25), which allows parties to evaluate multiplications on secret-shared values in one round. In particular, we use the construction of Abram et al. (Eurocrypt '24) instantiated with class groups. The setup is minimal and transparent, consisting of only two class-group generators. The signing protocol is efficient in bandwidth, with a message size of 1.9 KiB at 128-bit security, and has competitive computational performance.
Shengnan Zhao, Junyu Lu, Yuchen Huang, Dongdong Miao, Chuan Zhao
ePrint Report
Private information retrieval (PIR) enables a client to fetch a record from databases held by untrusted servers while hiding the access pattern (index or keyword) from the servers. In practical settings, however, data objects (e.g., articles, videos) are commonly tagged with multiple identifiers, which can be structured as {index, value, keywords}. Current PIR schemes are constrained to retrieving records based on a single index or a single keyword and cannot efficiently handle conjunctive queries involving multiple keywords. To address this limitation, we propose Mk-PIR, a PIR scheme that enables a client to retrieve records matching all specified keywords simultaneously. We propose two distinct constructions: $\textsf{MkPIR}^\mathbf{I}$, an inverted-index-based solution built upon our Oblivious Set Intersection (OSI) primitive, which enables private intersection computation on the server side without revealing client queries; and $\textsf{MkPIR}^\mathbf{F}$, a forward-index-based solution utilizing our Private Subset Determination (PSD) primitive, which privately outputs matching indices by verifying subset relationships. The two constructions adapt to diverse database configurations in which keywords are not required to be the primary key. Experimental results show that the average time to determine whether an index satisfies multiple keywords ranges from 0.5 to 332 ms, demonstrating that Mk-PIR achieves flexible query capabilities with modest performance overhead.
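
For intuition about the functionality only (not the privacy mechanism), the conjunctive lookup that Mk-PIR realizes obliviously can be sketched in plaintext over a toy {index, value, keywords} database; the record contents and helper names below are illustrative assumptions.

# Plaintext baseline of the functionality: Mk-PIR computes this conjunctive
# lookup without revealing the keywords or the matching indices to the
# servers. Database layout and names are illustrative.
db = {
    1: {"value": "article-A", "keywords": {"crypto", "pir", "survey"}},
    2: {"value": "video-B",   "keywords": {"crypto", "lattice"}},
    3: {"value": "article-C", "keywords": {"pir", "lattice", "crypto"}},
}

def conjunctive_lookup(query_keywords: set[str]) -> list[str]:
    """Return values of all records whose keyword set contains every
    queried keyword (the forward-index view)."""
    return [rec["value"] for rec in db.values()
            if query_keywords <= rec["keywords"]]

# The inverted-index view intersects per-keyword posting lists instead.
inverted = {}
for idx, rec in db.items():
    for kw in rec["keywords"]:
        inverted.setdefault(kw, set()).add(idx)

def intersect_lookup(query_keywords: set[str]) -> set[int]:
    sets = [inverted.get(kw, set()) for kw in query_keywords]
    return set.intersection(*sets) if sets else set()

print(conjunctive_lookup({"crypto", "pir"}))   # ['article-A', 'article-C']
print(intersect_lookup({"crypto", "pir"}))     # {1, 3}
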
Léo Ducas, Johanna Loyer
ePrint Report
Most concrete analyses of lattice reduction focus on the BKZ algorithm or its variants relying on Shortest Vector Problem (SVP) oracles. However, a variant by Li and Nguyen (Cambridge U. Press 2014) exploits more powerful oracles, namely for the Densest rank-$k$ Sublattice Problem ($DSP_k$) for $k \geq 2$.

We first observe that, for random lattices, $DSP_2$ (and possibly even $DSP_3$) seems heuristically not much more expensive than solving SVP with the current best algorithm. Indeed, we argue that a densest sublattice can be found among the pairs or triples of vectors produced by lattice sieving, at a negligible additional cost. This gives hope that this approach could be competitive.

We therefore proceed to a heuristic and average-case analysis of the slope of $DSP_k$-BKZ output, inspired by a theorem of Kim (Journal of Number Theory 2022) which suggests a prediction for the volume of the densest rank-$k$ sublattice of a random lattice.

Under this heuristic, the slope for $k=2$ or $3$ appears tenuously better than that of BKZ, making this approach less effective than regular BKZ using the "Dimensions for Free" of Ducas (EUROCRYPT 2018). Furthermore, our experiments show that this heuristic is overly optimistic.

Despite the hope raised by our first observation, we therefore conclude that this approach appears to be a No-Go for cryptanalysis of generic lattice problems.
Dimitri Koshelev
ePrint Report
This short note is devoted to a significant enhancement of [8], satisfactorily resolving the problem formulated at the end of that article. More precisely, a new laconic, secure, and efficient (de)compression method is provided for points of any elliptic curve over any highly $2$-adic finite field of large characteristic. Such fields are ubiquitous in modern elliptic curve cryptography, whereas they severely slow down the conventional $x$-coordinate (de)compression technique. In comparison with the main method from the cited work, the new one requires neither complicated mathematical formulas nor conditions on the curve. Thereby, the current work is universal and much more implementation-friendly, which justifies its existence despite the absence of interesting mathematics behind it.

[8] Koshelev, D.: Point (de)compression for elliptic curves over highly $2$-adic finite fields. Advances in Mathematics of Communications 19(5), 1539–1559 (2025). https://doi.org/10.3934/amc.2025008
Dilara Toprakhisar, Svetla Nikova, Ventzislav Nikov
ePrint Report
Physical attacks pose serious challenges to the secure implementation of cryptographic algorithms. While side-channel analysis (SCA) has received significant attention, leading to well-established countermeasures, fault attacks and especially their combination with SCA (i.e., combined attacks) remain less researched. Addressing such combined attacks often requires a careful integration of masking and redundancy techniques to resist the reciprocal effects of faults and probes. Recent research on combined security has gained momentum, with most approaches relying on composable security notions involving error correction, typically applied after each nonlinear operation. While effective, this approach introduces an area and performance overhead, along with additional security challenges posed by the correction circuits themselves.

In this work, we take a different direction, following the concept of stability introduced in StaTI (CHES 2024), which ensures fault propagation to protect against ineffective faults. We extend this concept to combined security by proposing a new composable security notion, combined stability, which integrates an extended stability notion, diffused stability, with arbitrarily composable glitch-extended probing security notions. Notably, this framework requires only a single error detection at the end of the computation, avoiding costly intermediate error checks and corrections. To demonstrate practicality, we describe a combined secure AES S-box hardware implementation. Our results show that this approach, achieving combined security with competitive implementation costs, offers a promising alternative to error-correction-based schemes.
Valerio Cini, Russell W. F. Lai, Ivy K. Y. Woo
ePrint Report
Threshold public-key encryption (tPKE) enables any subset of $t$ out of $K$ parties to decrypt non-interactively, while any ciphertext remains secure as long as fewer than $t$ decryption shares are known. Despite recent progress, existing lattice-based tPKEs face at least one of the following drawbacks: (1) large decryption shares, of size polynomial in $K$ and in some cases even exponential in $t$; (2) security proven only in relaxed models where the adversary is not allowed to see decryption shares of challenge ciphertexts; or (3) a lack of concrete efficiency, in particular due to the super-polynomial modulus required for noise flooding.

We present $\mathsf{Pilvi}$, a new thresholdised variant of Regev’s public-key encryption scheme, which achieves both small decryption shares and a strong form of simulation-based security under the Learning with Errors (LWE) assumption. Our construction has decryption share size $t \cdot \log K \cdot \mathsf{poly}(\lambda)$ and allows the use of a polynomial-size modulus assuming an a priori bound on the number of queries $Q$. It remains secure even when an adaptive adversary requests partial decryptions of both challenge and non-challenge ciphertexts, as long as for each ciphertext the number of corrupt parties plus the number of shares obtained is less than $t$. We provide concrete parameter suggestions for 128-bit security for a wide range of $(t,K,Q)$, including cases where $t \approx K/2$ for up to $K \leq 32$ users and $Q \leq 2^{60}$ partial decryption queries. The ciphertext size ranges from $14$ to $58$ KB and the partial decryption share size ranges from $1$ to $4$ KB.

Along the way, we abstract out a general-purpose tool called the threshold-LWE assumption, which we prove to follow from LWE. The threshold-LWE assumption captures the core steps in the security proofs of schemes that Shamir-secret-share the LWE secret with carefully chosen evaluation points; the algebraic structure of the latter is what enables the efficiency of our tPKE scheme. As an additional application, we also show how to construct distributed pseudorandom functions (dPRFs) from the threshold-LWE assumption.
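
As background for the design, a generic sketch (under standard Regev-style notation, omitting Pilvi's noise handling and its careful choice of evaluation points) of how Shamir-sharing the LWE secret yields non-interactive partial decryptions: for a ciphertext $(c_1, c_2) = (A^\top r,\; b^\top r + \lfloor q/2 \rfloor \mu)$ with $b = A s + e$, and the secret $s$ shared so that party $i$ holds $s_i = f(i)$ with $f(0) = s$,
\[
  d_i = c_2 - \langle c_1, s_i \rangle + e_i, \qquad
  \sum_{i \in T} \lambda_{i,T}\, d_i \;\approx\; c_2 - \langle c_1, s \rangle
  \;=\; \lfloor q/2 \rfloor \mu + \text{noise}
  \quad \text{for any } T \text{ with } |T| = t,
\]
where $e_i$ is flooding noise, the $\lambda_{i,T}$ are the Lagrange coefficients for interpolation at $0$ (which sum to $1$), and rounding recovers the bit $\mu$.
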
Xavier Bonnetain, Johanna Loyer, André Schrottenloher, Yixin Shen
ePrint Report
Searching for collisions in random functions is a fundamental computational problem, with many applications in symmetric and asymmetric cryptanalysis. When one searches for a single collision, the known quantum algorithms match the query lower bound. This is not the case for the problem of finding multiple collisions, despite its regular appearance as a sub-component in sieving-type algorithms.

At EUROCRYPT 2019, Liu and Zhandry gave a query lower bound $\Omega{}(2^{m/3 + 2k/3})$ for finding $2^k$ collisions in a random function with $m$-bit output. At EUROCRYPT 2023, Bonnetain et al. gave a quantum algorithm matching this bound for a large range of $m$ and $k$, but not all admissible values. Like many previous collision-finding algorithms, theirs is based on the MNRS quantum walk framework, but it chains the walks by reusing the state after outputting a collision.

In this paper, we give a new algorithm that tackles the remaining non-optimal range, closing the problem. Our algorithm is tight (up to a polynomial factor) in queries, and also in time under a quantum RAM assumption. The idea is to extend the chained walk to a regime in which several collisions are returned at each step, and the "walks" themselves only perform a single diffusion layer.
Frank Denis
ePrint Report
This paper introduces efficient, practical methods for encrypting IPv4/IPv6 addresses while preserving utility in logs, telemetry, and third-party data exchange. We focus on three practical goals: (i) format-compatible encryption that keeps outputs in the IPv6 address space and handles IPv4 inputs canonically; (ii) prefix-preserving encryption that retains network structure for analytics while hiding host identity; and (iii) non-deterministic encryption that resists correlation while remaining compact and invertible. We give deterministic, prefix-preserving, and two non-deterministic variants, with security models and arguments under standard assumptions, plus explicit usage bounds and operating limits. We also relate each variant to known efficiency lower bounds (ciphertext expansion and primitive calls) and state our claims within deployable parameter ranges.
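
To illustrate the prefix-preserving goal concretely, a minimal sketch in the classical bit-by-bit style (as in Crypto-PAn) is shown below; this is only an assumed baseline demonstrating the property, not the construction proposed in the paper.

# Illustrative sketch of prefix-preserving address encryption in the classic
# bit-by-bit (Crypto-PAn-style) manner: inputs sharing a k-bit prefix map to
# outputs sharing a k-bit prefix. This is NOT the paper's scheme.
import hmac, hashlib, ipaddress

KEY = b"example-key-do-not-use"

def _prf_bit(prefix_bits: str) -> int:
    # Pseudorandom bit derived from the already-processed prefix.
    return hmac.new(KEY, prefix_bits.encode(), hashlib.sha256).digest()[0] & 1

def pp_encrypt(addr: str) -> str:
    # Bit i of the output is bit i of the input XOR a PRF of bits 0..i-1.
    ip = ipaddress.ip_address(addr)
    n = ip.max_prefixlen                      # 32 for IPv4, 128 for IPv6
    bits = format(int(ip), f"0{n}b")
    out = "".join(str(int(b) ^ _prf_bit(bits[:i])) for i, b in enumerate(bits))
    return str(type(ip)(int(out, 2)))

# Addresses in the same /24 keep a common encrypted prefix.
print(pp_encrypt("192.0.2.17"))
print(pp_encrypt("192.0.2.200"))
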
Yuange Li, Xiong Fan
ePrint Report
Zero-knowledge proofs of training (zkPoT) enable a prover to certify that a model was trained on a committed dataset under a prescribed algorithm without revealing the model or data. Proving recurrent neural network (RNN) training is challenging due to hidden-state recurrence and cross-step weight sharing, which require proofs to enforce recurrence, gradients, and nonlinear activations across time.

We present SUMMER (SUMcheck and MERkle tree), a recursive, scalable zkPoT for RNNs. SUMMER generates sumcheck-based proofs that backpropagation through time (BPTT) was computed correctly over a quantized finite field, while nonlinearities such as $\tanh$ and softmax are validated by lookup arguments. Per-step commitments and proofs are folded with Merkle trees, yielding a final commitment and a succinct proof whose size and verification time are independent of the number of iterations.
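
For orientation (background on the proof machinery, not SUMMER's exact encoding of the BPTT constraints), a sumcheck-based proof certifies a claim of the standard form
\[
  H \;=\; \sum_{x_1 \in \{0,1\}} \sum_{x_2 \in \{0,1\}} \cdots \sum_{x_n \in \{0,1\}} g(x_1, x_2, \ldots, x_n)
\]
for a low-degree polynomial $g$ over a finite field, using $n$ rounds in which the prover sends one univariate polynomial per round and the verifier finishes with a single evaluation of $g$ at a random point.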

SUMMER offers (i) the first end-to-end zkPoT for RNN training, including forward, backward, and parameter updates; (ii) the first use of LogUp for nonlinear operations with a batched interface; and (iii) efficient recursive composition of lookup and sumcheck proofs. On a Mini-Char-RNN with 12M parameters, the prover runs in 70.1 seconds per iteration, $8.5\times$ faster and $11.6\times$ more memory efficient than the IVC baseline, with 165 kilobyte proofs verified in 20 milliseconds.
Easwar Vivek Mangipudi, Panagiotis Chatzigiannis, Konstantinos Chalkias, Aniket Kate, Mohsen Minaei, Mainack Mondal
ePrint Report
In a Web3 (blockchain) setting, account recovery allows users to regain access to their accounts after losing their authentication credentials. Although recovery mechanisms are well-established and extensively analyzed in the context of Web2 systems, Web3 presents distinct challenges. Web3 account access is typically tied to cryptographic key pairs, and private keys are not entrusted to centralized entities. This design improves security, but significantly complicates the recovery process, making it difficult or even impossible for users to regain access after loss of keys. Given the critical role that recovery plays in ensuring long-term feasibility and trust in digital systems, a range of recovery mechanisms has been proposed to accommodate the unique properties of Web3. These mechanisms aim to help users manage key loss without introducing undue friction or risk.

Although there has been an exponential increase in the use of cryptocurrency wallets in the last decade, the popularity and usage of the corresponding recovery mechanisms remain unclear. Furthermore, it is still unclear how users perceive these recovery mechanisms and what they expect from them. In this work, our objective is to empirically understand and analyze user perceptions of the various recovery mechanisms. To this end, we conducted a user survey of 331 participants and asked them to rate different mechanisms on usability, security, and availability. The results reveal interesting aspects of user preferences, including their views on sharing keys among different devices and on trusting friends or family. Based on our findings, we provide insights and future directions for the developer and research communities.
Ole Martin Edstrøm, Kristian Gjøsteen, Hans Heum, Sjouke Mauw, Felix Stutz
ePrint Report
Electronic identification (eID) protocols and federated identity management systems play an increasingly important role in our modern society, both on the internet through services from Google and others, and through the eIDAS regulation in Europe. A key feature of eID protocols is that humans are intimately involved in the protocol, often responsible for critical security steps. Traditional security analyses of such protocols typically assume flawless user behaviour, yet widespread real-world adoption makes user mistakes inevitable.

We present a framework for analysing the security of eID protocols that can model users making mistakes. It is suitable for automated analysis with Tamarin and supports fine-grained corruption modelling of protocol actors. We demonstrate the framework's utility by describing and analysing common eID protocols based on passwords, mobile applications and authentication tokens, as well as by systematically evaluating the impact of various combinations of user mistakes on security.
Lucien K. L. Ng, Vladimir Kolesnikov
ePrint Report
Garbled Circuits (GC) are a foundational primitive for secure two-party computation (2PC). Garbled Private Information Retrieval (GPIR) is a GC technique for looking up a public array or database (DB) on a private index unknown to either party. GPIR immediately enables GC evaluation of functions implemented as a publicly known lookup table (LUT).

However, GPIR is costly. It can be realized by a linear scan, by adapting Garbled RAM, by stacking GC branches implementing access to table elements, and, most recently, via the GC lookup table scheme logrow (Heath et al., Eurocrypt 2024). For an $N$-row DB lookup of $m$-bit rows, logrow has computation cost $\approx O(N m \kappa)$ and communication cost $O(m(\log N \cdot \kappa + N))$. This makes logrow practical only for tables up to about $2^{15}$ rows.

We propose Toss, a new efficient GPIR protocol with dramatically reduced bandwidth consumption—an especially scarce resource in MPC—both asymptotically and concretely. Our communication cost is $O\!\left(\sqrt{N}\, m \sqrt{\kappa}\right)$ with a small constant, which is sublinear in both $N$ and the security parameter $\kappa$. Our computation cost is $O\!\left(N m \kappa + \bigl(\sqrt{N/\kappa}\, m + N\bigr) c_\kappa \right)$, where $c_\kappa$ is the cost of a hash evaluation. This computation cost is comparable to, or slightly lower than, that of logrow.
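
Dividing the two stated communication costs (a rough comparison that drops constants and the $\kappa \log N$ term, which is dominated by $N$ for large tables) indicates how the advantage scales:
\[
  \frac{O\!\left(m(\log N \cdot \kappa + N)\right)}{O\!\left(\sqrt{N}\, m \sqrt{\kappa}\right)}
  \;\approx\; \frac{N}{\sqrt{N\kappa}}
  \;=\; \sqrt{\frac{N}{\kappa}},
\]
so the communication advantage over logrow grows roughly as $\sqrt{N}$, consistent with the larger concrete gains reported for bigger tables below.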

In concrete terms, for a $2^{20}$-row LUT of 8-bit items, we achieve more than a $31\times$ reduction in communication compared to logrow. On a laptop over a 100 Mbps channel, throughput increases from approximately $10.6$ lookups/s to $81$ lookups/s, a $>7.5\times$ improvement. On a 10 Mbps channel, Toss achieves more than $28\times$ better throughput. The improvement grows with $N$; for example, for $N=2^{25}$ and $m=32$, the gain exceeds $512\times$.

Toss builds on stacked garbling (SGC) and logrow, incorporating multiple low-level optimizations and requiring a reworking of their internals and interfaces. We emphasize that constructing GPIR directly from SGC incurs logarithmic computational overhead, which decreases throughput in typical “laptop + LAN” testbeds. Our design avoids this pitfall. We implement Toss and report on its performance, demonstrating its substantial communication savings and practical efficiency.