International Association for Cryptologic Research

IACR News

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

29 November 2024

Zhuo Wu, Shi Qi, Xinxuan Zhang, Yi Deng
ePrint Report
This work introduces a novel technique to enhance the efficiency of proving composite statements. We present the \textit{Hash-and-Prove} framework to construct zkSNARKs for proving satisfiability of arithmetic circuits augmented with additional \textit{Algebraic Gates}. These algebraic gates serve as building blocks for forming more generalized relations in algebra. Unlike Pedersen-committed \textit{Commit-and-Prove} SNARKs, which suffer from increased proof size and verification overhead when proving composite statements, our solution significantly improves both proof size and verification time while maintaining competitive and practical prover efficiency.

In the application of proof of solvency, where we need to prove knowledge of $x$ such that SHA$256(g^x)=y$, our approach achieves a 100$\times$ reduction in proof size and a 500$\times$ reduction in verification time, along with a 2$\times$ speedup in proving time, compared to the work of Agrawal et al. (CRYPTO 2018). For proving ECDSA signature verification, we achieve a proving time of 2.1 seconds, which is a 70$\times$ speedup compared to using Groth16, and a proof size of 4.81 KB, which is a 160$\times$ reduction compared to Field Agnostic SNARKs (Block et al., CRYPTO 2024).
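For intuition, the solvency statement above composes an algebraic relation (a group exponentiation) with a hash evaluation. Below is a minimal sketch, in Python over a toy multiplicative group (the modulus, base, and byte encoding are illustrative assumptions, not the groups used in the paper), of the relation an honest prover must satisfy; a Hash-and-Prove SNARK would prove knowledge of $x$ without revealing it.

    import hashlib

    # Toy parameters for illustration only; real instantiations use a
    # cryptographically sized group such as an elliptic-curve group.
    p = 2**61 - 1   # modulus of a toy multiplicative group
    g = 3           # assumed base element

    def public_statement(x: int) -> bytes:
        """y = SHA256(g^x): the composite statement the prover must satisfy."""
        gx = pow(g, x, p)                                      # algebraic part
        return hashlib.sha256(gx.to_bytes(8, "big")).digest()  # hash part

    secret_x = 123456789
    y = public_statement(secret_x)
    # Direct re-evaluation stands in for the zero-knowledge proof here.
    assert public_statement(secret_x) == y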
Yipeng Shi
ePrint Report
White-box cryptography is a software implementation technique based on lookup tables, with effective resistance against key extraction and code lifting attacks being a primary focus of its research. Space hardness is a widely used property for evaluating the resistance of white-box ciphers against code lifting attacks. However, none of the existing ciphers can provide strong space hardness under the adaptively chosen-space attack model. We propose a new scheme based on a lookup table pool and key guidance implementation as a more efficient approach to utilizing lookup tables for better security and practicality. Specifically, we introduce a new white-box cipher, RubikStone, which offers a range of variants from tens of kilobytes to infinite size. For the first time, we prove that all variants of RubikStone can provide strong space hardness under an adaptively chosen-space attack model. Additionally, we present a specific key guidance application for cloud-based DRM scenarios. Based on our proposed RubikStone variants, the key guidance applications can achieve at least overall $(0.950T, 128)$-space hardness. Furthermore, we introduce a novel property, the table consumption rate, for evaluating the durability of a specific white-box cryptographic implementation. In our evaluation, all instantiations of RubikStone exhibit the lowest table consumption rate among algorithms with equally sized lookup tables. In addition, we conduct a comprehensive statistical analysis of the operations in all existing white-box ciphers. Our findings indicate that RubikStone remains highly competitive in terms of computational efficiency despite offering unprecedented levels of security.
Shaoqiang Wu, Ding Wang
ePrint Report
The password-hardening service (PH) is a crypto service that armors canonical password authentication with an external key against offline password guessing in case the password file is somehow compromised/leaked. The game-based formal treatment of PH was introduced by Everspaugh et al. at USENIX Security'15. Their work was followed by the efficiency-enhancing PO-COM (CCS'16), the security-patching Phoenix (USENIX Security'17), and the functionality-refining PW-Hero (SRDS'22). However, the issue of single points of failure (SPF) inherently impairs the availability of these PH schemes. More specifically, the failure of a single PH server responsible for crypto computation services will suspend password authentication for all users.

We propose the notion of reliable PH, which improves the availability of PH by eliminating SPF. We present a modular PH construction, TF-PH, essentially a generic compiler that can transform any PH protocol into a reliable one without SPF by introducing threshold failover. In particular, we propose a concrete reliable PH protocol, called TF-RePhoenix, a simple and efficient construction with RePhoenix (which improves over Phoenix at USENIX Security'17) as the PH module. Security is proven within the universally composable (UC) security framework and the random oracle model (ROM), where we, for the first time, formalize the ideal UC functionalities of PH and reliable PH. We comparatively evaluate the efficiency of our TF-PH against the canonical threshold method (taking as an example the threshold solution introduced by Brost et al. at CCS'20 in a PH-derived domain, password-hardened encryption). Results show that our threshold failover-based solution to SPF provides optimal performance and achieves failover within a millisecond.

25 November 2024

Mia Filić, Keran Kocher, Ella Kummer, Anupama Unnikrishnan
ePrint Report
Probabilistic data structures (PDS) are compact representations of high-volume data that provide approximate answers to queries about the data. They are commonplace in today's computing systems, finding use in databases, networking and more. While PDS are designed to perform well under benign inputs, they are frequently used in applications where inputs may be adversarially chosen. This may lead to a violation of their expected behaviour, for example, an increase in the false positive rate.

In this work, we focus on PDS that handle approximate membership queries (AMQ). We consider adversarial users with the capability of making adaptive insertions, deletions and membership queries to AMQ-PDS, and analyse the performance of AMQ-PDS under such adversarial inputs.

We argue that deletions significantly empower adversaries, presenting a challenge to enforcing honest behaviour when compared to insertion-only AMQ-PDS. To address this, we introduce a new concept of an honest setting for AMQ-PDS with deletions. By leveraging simulation-based security definitions, we then quantify how much harm can be caused by adversarial users to the functionality of AMQ-PDS. Our resulting bounds only require calculating the maximal false positive probability and insertion failure probability achievable in our novel honest setting.

We apply our results to Cuckoo filters and Counting filters. We show how to protect these AMQ-PDS at low cost, by replacing or composing the hash functions with keyed pseudorandom functions in their construction. This strategy involves establishing practical bounds for the probabilities mentioned above. Using our new techniques, we demonstrate that achieving security against adversarial users making both insertions and deletions remains practical.
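As a rough illustration of the suggested protection, the index functions of an AMQ-PDS can be keyed with a PRF so that outside users cannot predict collisions. Below is a minimal sketch of a Counting filter whose hashes are replaced by HMAC-SHA256; the layout and parameters are illustrative assumptions and do not reproduce the paper's constructions or bounds.

    import hmac, hashlib, os

    class ToyCountingFilter:
        """Counting filter whose k index functions are keyed PRFs (HMAC-SHA256)."""
        def __init__(self, m: int = 1024, k: int = 4):
            self.counters = [0] * m
            self.m, self.k = m, k
            self.key = os.urandom(32)        # secret PRF key, unknown to users

        def _indexes(self, item: bytes):
            for i in range(self.k):
                mac = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
                yield int.from_bytes(mac[:8], "big") % self.m

        def insert(self, item: bytes):
            for idx in self._indexes(item):
                self.counters[idx] += 1

        def delete(self, item: bytes):
            for idx in self._indexes(item):
                self.counters[idx] = max(0, self.counters[idx] - 1)

        def query(self, item: bytes) -> bool:
            return all(self.counters[idx] > 0 for idx in self._indexes(item))

    f = ToyCountingFilter()
    f.insert(b"alice")
    assert f.query(b"alice")      # member found; non-members rarely false-positive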
Gal Cohen, Itamar Levy
ePrint Report
Interconnected devices enhance daily life but introduce security vulnerabilities, as new technologies enable malicious activities such as information theft. This article combines radio frequency (RF) side-channel attacks with software Trojans to create a hard-to-detect, stealthy method for extracting kilobytes of secret information per millisecond over record distances with a single measurement in the RF spectrum. The technique exploits Trojan-induced electrical disturbances in RF components originating from peripherals, buses, memories, and CPUs to achieve high-SNR data leakage. Experimental results show negligible acquisition time and a high degree of stealth. The research introduces optimized modulation and demodulation schemes, as well as specialized synchronization symbols, to minimize error rates and maximize data rates. It highlights the need for advanced detection and defense mechanisms to ensure the security and privacy of interconnected devices.
Pengfei Lu, Hongyuan Qu
ePrint Report
Private information retrieval (PIR) is a key component of many privacy-preserving systems. Although numerous PIR protocols have been proposed, designing a PIR scheme with communication overhead independent of the database size $N$ and computational cost practical for real-world applications remains a challenge. In this paper, we propose NewtonPIR, a communication-efficient single-server PIR protocol. NewtonPIR can directly generate query values for the entire index without splitting the index and sending multiple query ciphertexts. Specifically, NewtonPIR achieves communication overhead that is 7.5$\times$ better than that of the state-of-the-art PIR protocol and 35.9$\sim$75$\times$ better than that of the other protocols. In our experiments, the communication overhead of NewtonPIR remains stable as the database size and entry size increase. By utilizing a single-ciphertext fully homomorphic encryption (FHE) scheme and the simple Newton interpolation polynomial, along with precomputing coefficients in the offline phase, we reduce the computational overhead of NewtonPIR from hours in previous schemes to seconds. To the best of our knowledge, NewtonPIR is the first protocol to achieve communication cost independent of $N$ along with computational overhead comparable to that of ring learning with errors (RLWE)-based PIR schemes. Additionally, we extend and introduce a private set intersection (PSI) protocol that balances computational and communication overhead more effectively.
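To make the interpolation ingredient concrete, the sketch below (plain Python, illustrative only; it is not the NewtonPIR protocol and involves no homomorphic encryption) precomputes Newton divided-difference coefficients in an offline step and then answers a query index with a cheap Horner-style evaluation.

    def newton_coefficients(xs, ys):
        """Divided-difference coefficients of the Newton interpolation polynomial."""
        coeffs = list(ys)
        n = len(xs)
        for j in range(1, n):
            for i in range(n - 1, j - 1, -1):
                coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
        return coeffs

    def newton_eval(coeffs, xs, x):
        """Horner-style evaluation of the Newton form at point x."""
        result = coeffs[-1]
        for i in range(len(coeffs) - 2, -1, -1):
            result = result * (x - xs[i]) + coeffs[i]
        return result

    # Treat a tiny database as points (index, entry); the coefficients are
    # precomputed offline, so answering a query only needs the evaluation step.
    db = [5.0, 9.0, 2.0, 7.0]
    xs = list(range(len(db)))
    coeffs = newton_coefficients(xs, db)
    assert newton_eval(coeffs, xs, 2) == db[2]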
Ling Song, Qinggan Fu, Qianqian Yang, Yin Lv, Lei Hu
ePrint Report
Impossible differential cryptanalysis is a crucial cryptanalytic method for symmetric ciphers. Given an impossible differential, the key recovery attack typically proceeds in two steps: generating pairs of data and then identifying wrong keys using the guess-and-filter method. At CRYPTO 2023, Boura et al. first proposed a new key recovery technique, the differential meet-in-the-middle attack, which recovers the key in a meet-in-the-middle manner. Inspired by this technique, we incorporate the meet-in-the-middle technique into impossible differential cryptanalysis and propose a generic impossible differential meet-in-the-middle attack (IDMA) framework. We apply IDMA to the block ciphers SKINNY, SKINNYe-v2, and ForkSkinny and achieve remarkably efficient attacks. We improve the impossible differential attack on SKINNY-$n$-$3n$ by 2 rounds in the single-tweakey setting and 1 round in the related-tweakey setting. For SKINNYe-v2, the impossible differential attacks now cover 2 more rounds in the related-tweakey setting, and the first 23/24/25-round attacks in the single-tweakey model are given. For ForkSkinny-$n$-$3n$, we improve the attacks by 2 rounds in the limited setting specified by the designers and by 1 round in relaxed settings. These results confirm that the meet-in-the-middle technique can result in more efficient key recovery, reaching beyond what traditional methods can achieve on certain ciphers.
Ruiyang Li, Chun Guo, Xiao Wang
ePrint Report
State-of-the-art garbling schemes for boolean circuits roughly consist of two families, i.e., ideal model garbling that combines linear operations and ideal blockciphers (aiming at maximizing performance), and PRF-based garbling that insists on using theoretically sound assumptions. In the linear garbling framework introduced by Zahur, Rosulek, and Evans (Eurocrypt 2015), it was established that garbling an AND gate requires at least $2(\kappa +1)$ bits of ciphertext, with $\kappa$ as the security parameter. Recent contributions from Lei Fan et al. and Chunghun Baek et al. have provided a detailed model showing that, under the free-XOR setting, which relies on a non-standard assumption, garbling an AND gate requires at least $1.5\kappa + O(1)$ bits. In contrast, regarding PRF-based garbling, the general model and efficiency bounds remain open questions.

In this paper, we present a comprehensive model for PRF-based garbled circuits and establish both communication and computation lower bounds. Specifically, we demonstrate that garbling an AND gate requires at least $2\kappa + 2$ bits of communication and 6 PRF calls, while an XOR gate requires a minimum of $\kappa$ bits of communication and 4 PRF calls. Notably, the state-of-the-art garbling scheme under the PRF assumption, the GLNP scheme introduced by Gueron, Lindell, Nof, and Pinkas (JOC 2018), uses $2\kappa + 4$ bits and 8 PRF calls for an AND gate, which exceeds our established lower bound. We finally introduce an optimal garbling scheme, showing that our communication and computation lower bounds are tight.
Guangwu Xu, Ke Han, Yunxiao Tian
ePrint Report
The family of Koblitz curves $E_b: y^2=x^3+b/\mathbb{F}_p$ over prime fields has close connections to the ring $\mathbb{Z}[\omega]$ of Eisenstein integers. Utilizing nice facts from the theory of cubic residues, this paper derives an efficient formula for (complex) scalar multiplication by $\tau=1-\omega$. This enables us to develop a window $\tau$-NAF method for Koblitz curves over prime fields. This is probably the first window $\tau$-NAF method designed for curves over fields of large characteristic. Besides its theoretical interest, higher performance is also achieved due to the facts that (1) the operation $\tau^2$ can be performed more efficiently, which makes the average cost of $\tau$ close to $2.5\mathbf{S}+3\mathbf{M}$ ($\mathbf{S}$ and $\mathbf{M}$ denote the costs of a field squaring and a field multiplication, respectively); and (2) the precomputation for the window $\tau$-NAF method is surprisingly simple, in that only one-third of the coefficients need to be processed. The overall improvement over the best current method is more than $11\%$. The paper also suggests a simplified modular reduction for Eisenstein integers in which the division operations are eliminated. The efficient formula for $\tau P$ can further be used to speed up the computation of $3P$: compared to $10\mathbf{S}+5\mathbf{M}$, our new formula costs just $4\mathbf{S}+6\mathbf{M}$. As a main ingredient of the double-base chain method for scalar multiplication, the $3P$ formula will contribute to greater efficiency.
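To see the effect on point tripling in isolation, one can compare the two operation counts under an assumed squaring-to-multiplication cost ratio; the 0.8 below is an illustrative assumption, and the paper's overall >11% figure concerns full scalar multiplication rather than $3P$ alone.

    S_RATIO = 0.8   # assumed cost of a field squaring relative to a multiplication

    def cost_in_mults(squarings: int, mults: int) -> float:
        return squarings * S_RATIO + mults

    old_3p = cost_in_mults(10, 5)   # previous 3P formula: 10S + 5M
    new_3p = cost_in_mults(4, 6)    # new 3P formula via tau: 4S + 6M
    print(f"3P: {old_3p:.1f}M -> {new_3p:.1f}M "
          f"({100 * (1 - new_3p / old_3p):.0f}% cheaper under this ratio)")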
Kimia Tasnia, Sazadur Rahman
ePrint Report
Despite the emergence of Large Language Models (LLMs) as potential tools for automating hardware design, the optimal programming language for describing hardware functions remains unknown. Prior works extensively explored optimizing Verilog-based HDL design but often overlooked the potential capabilities of alternative programming languages for hardware design. This paper investigates the efficacy of C++ and Verilog as input languages in an extensive application-space exploration, tasking an LLM to generate implementations of various System-on-Chip functional blocks. We propose an automated Optimal Programming Language (OPL) framework that leverages OpenAI's GPT-4o LLM to translate natural language specifications into hardware descriptions using both high-level and low-level programming paradigms. The OPL4GPT demonstration initially employs a novel prompt engineering approach that decomposes design specifications into manageable submodules, which are presented to the LLM to generate code in both C++ and Verilog. A closed-loop feedback mechanism automatically incorporates error logs from the LLM's outputs, covering both syntax and functionality. Finally, functionally correct outputs are synthesized using either RTL (Register-Transfer Level) synthesis for Verilog or High-Level Synthesis for C++ to assess area, power, and performance. Our findings illuminate the strengths and weaknesses of each language across various application domains, empowering hardware designers to select the most effective approach.
Aydin Aysu, Fatemeh Ganji, Trey Marcantonio, Patrick Schaumont
ePrint Report
Implementation-security vulnerabilities such as the power-based side-channel leakage and fault-injection sensitivity of a secure chip are hard to verify because of the sophistication of the measurement setup, as well as the need to generalize the adversary into a test procedure. While the literature has proposed a wide range of vulnerability metrics to test the correctness of a secure implementation, it is still up to the subject-matter expert to map these concepts into a working and reliable test procedure. Recently, we investigated the benefits of using an open-source implementation-security testing environment called ChipWhisperer. The open-source and low-cost nature of the ChipWhisperer hardware and software has resulted in the adoption of thousands of testing kits throughout academia and industry, turning the test kit into a baseline for implementation security testing. We investigate the use cases for the ChipWhisperer hardware and software, and we evaluate the feasibility of an open-source ecosystem for implementation security testing. In addition to the open-source hardware and firmware, an ecosystem also considers broader community benefits such as re-usability, sustainability, and governance.
Mahdi Mahdavi, Navid Abapour, Zahra Ahmadian
ePrint Report
With the increasing integration of crowd computing, new vulnerabilities emerge in widely used cryptographic systems like the RSA cryptosystem, whose security is based on the factoring problem. It is strongly advised to avoid using the same modulus to produce two pairs of public-private keys, as the cryptosystem would be rendered vulnerable to common modulus attacks. Such attacks can take two forms: one that aims to factorize the common modulus based on one key pair and the other that aims to decrypt certain ciphertexts generated by two public keys if the keys are co-prime. This paper introduces a new type of common modulus attack on the RSA cryptosystem. In our proposed attack, given one public-private key pair, an attacker can obtain the private key corresponding to a given public key in RSA decryption. This allows the adversary to decrypt any ciphertext generated using this public key. It is worth noting that the proposed attack can be used in the CRT model of RSA. In addition, we propose a parallelizable factoring algorithm with an order equivalent to a cyclic attack in the worst-case scenario.
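As background for why a shared modulus is dangerous at all: one full key pair $(e, d)$ for a modulus $N$ already suffices to factor $N$ with a classical probabilistic procedure, after which every other private exponent for the same $N$ follows. The sketch below (Python, toy-sized primes) illustrates that textbook argument, not the new attack proposed in this paper.

    import math, random

    def factor_from_key_pair(N: int, e: int, d: int) -> int:
        """Recover a factor of N from one full key pair: e*d - 1 is a multiple
        of phi(N), which reveals a nontrivial square root of 1 mod N."""
        k = e * d - 1                       # even multiple of phi(N)
        t, r = 0, k
        while r % 2 == 0:
            r //= 2
            t += 1
        while True:
            a = random.randrange(2, N - 1)
            x = pow(a, r, N)
            for _ in range(t):
                y = pow(x, 2, N)
                if y == 1 and x != 1 and x != N - 1:
                    return math.gcd(x - 1, N)   # nontrivial factor
                x = y

    # Toy example with small primes (illustration only).
    p, q = 1009, 1013
    N, phi = p * q, (p - 1) * (q - 1)
    e1 = 65537
    d1 = pow(e1, -1, phi)                   # Python 3.8+ modular inverse
    f = factor_from_key_pair(N, e1, d1)
    assert f in (p, q)
    # With N factored, the private exponent of ANY other key (N, e2) follows.
    e2 = 17
    d2 = pow(e2, -1, (f - 1) * (N // f - 1))
    assert pow(pow(42, e2, N), d2, N) == 42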
Nicolas Huber, Ralf Kuesters, Julian Liedtke, Daniel Rausch
ePrint Report
Electronic voting (e-voting) systems have become more prevalent in recent years, but security concerns have also increased, especially regarding the privacy and verifiability of votes. As an essential ingredient of secure e-voting systems, designers often employ zero-knowledge proofs (ZKPs), which allow voters to prove that their votes are valid without revealing them. Invalid votes can then be discarded to protect verifiability without compromising the privacy of valid votes.

General-purpose zero-knowledge proofs (GPZKPs) such as ZK-SNARKs can be used to prove arbitrary statements, including ballot validity. While a specialized ZKP that is constructed only for a specific election type/voting method, ballot format, and encryption/commitment scheme can be more efficient than a GPZKP, the flexibility offered by GPZKPs would allow for quickly constructing e-voting systems for new voting methods and new ballot formats. So far, however, the viability of GPZKPs for showing ballot validity for various ballot formats, in particular whether and to what extent they are practical for voters to compute, has only recently been investigated, for ballots that are computed as Pedersen vector commitments, in an ACM CCS 2022 paper by Huber et al.

Here, we continue this line of research by performing a feasibility study of GPZKPs for the more common case of ballots encrypted via Exponential ElGamal encryption. Specifically, building on the work by Huber et al., we describe how the Groth16 ZK-SNARK can be instantiated to show ballot validity for arbitrary election types and ballot formats encrypted via Exponential ElGamal. As our main contribution, we implement, benchmark, and compare several such instances for a wide range of voting methods and ballot formats. Our benchmarks not only establish a basis for protocol designers to make an educated choice for or against such a GPZKP, but also show that GPZKPs are actually viable for showing ballot validity in voting systems using Exponential ElGamal.
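For readers unfamiliar with the encryption scheme in question: Exponential ElGamal places the vote in the exponent so that ciphertexts add homomorphically, and a ballot-validity proof shows that the hidden exponent encodes a well-formed ballot. A minimal sketch over a toy Schnorr group follows; the parameters are illustrative assumptions, not those used in the paper.

    import random

    # Toy Schnorr group: q divides p - 1, g generates the order-q subgroup.
    q = 101
    p = 607                          # 607 = 6 * 101 + 1
    g = pow(3, (p - 1) // q, p)      # assumed generator of the order-q subgroup

    sk = random.randrange(1, q)
    pk = pow(g, sk, p)

    def enc_exp_elgamal(m: int, r: int):
        """Encrypt the vote m 'in the exponent': (g^r, g^m * pk^r)."""
        return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p

    # Two ballots with choices 1 and 0; the componentwise product of the
    # ciphertexts encrypts the tally 1 + 0 in the exponent.
    c1 = enc_exp_elgamal(1, random.randrange(1, q))
    c2 = enc_exp_elgamal(0, random.randrange(1, q))
    tally = ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)
    g_m = (tally[1] * pow(tally[0], -sk, p)) % p   # decrypt to g^(sum of votes)
    assert g_m == pow(g, 1, p)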
Jelle Vos, Jorrit van Assen, Tjitske Koster, Evangelia Anna Markatou, Zekeriya Erkin
ePrint Report
Private set intersections are cryptographic protocols that compute the intersection of multiple parties' private sets without revealing elements that are not in the intersection. These protocols become less efficient when the number of parties grows, or the size of the sets increases. For this reason, many protocols are based on Bloom filters, which speed up the protocol by approximating the intersections, introducing false positives with a small but non-negligible probability. These false positives are caused by hash collisions in the hash functions that parties use to encode their sets as Bloom filters.

In this work, we show that these false positives are more than an inaccuracy: an adversary in the augmented semi-honest model can use them to learn information about elements that are not in the intersection. First, we show that existing security proofs for Bloom filter-based private set intersections are flawed. Second, we show that even in the most optimistic setting, Bloom filter-based private set intersections cannot securely realize an approximate private set intersection unless the parameters are so large that false positives only occur with negligible probability. Third, we propose a practical attack that allows a party to learn if an element is contained in a victim's private set, showing that the problem with Bloom filters is not just theoretical.

We conclude that the efficiency gain of using Bloom filters as an approximation in existing protocols vanishes when accounting for this security problem. We propose three mitigations besides choosing larger parameters: one can use oblivious pseudo-random functions instead of hash functions to reduce the success rate of our attack significantly, or replace them with password-based key derivation functions to significantly slow down attackers. A third option is to let a third party authorize the input sets before proceeding with the protocol.
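The root cause is easy to reproduce: encode a set as a Bloom filter, and elements outside the set can still map onto bit positions that are already set. A minimal sketch (plain Python, unkeyed hashes, deliberately small illustrative parameters):

    import hashlib

    M, K = 32, 2   # deliberately small so collisions are easy to observe

    def positions(item: bytes):
        for i in range(K):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % M

    bits = [False] * M
    private_set = [b"item-%d" % i for i in range(8)]
    for x in private_set:
        for pos in positions(x):
            bits[pos] = True

    def query(item: bytes) -> bool:
        return all(bits[pos] for pos in positions(item))

    # Scan elements that were never inserted; some still pass the membership test.
    false_positives = [i for i in range(1000) if query(b"outsider-%d" % i)]
    print(f"{len(false_positives)} of 1000 non-members look like members")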
Fukang Liu, Katharina Koschatko, Lorenzo Grassi, Hailun Yan, Shiyao Chen, Subhadeep Banik, Willi Meier
ePrint Report
A new design strategy for ZK-friendly hash functions has emerged since the proposal of $\mathsf{Reinforced Concrete}$ at CCS 2022, which is based on the hybrid use of two types of nonlinear transforms: the composition of some small-scale lookup tables (e.g., 7-bit or 8-bit permutations) and simple power maps over $\mathbb{F}_p$. Following such a design strategy, some new ZK-friendly hash functions have been recently proposed, e.g., $\mathsf{Tip5}$, $\mathsf{Tip4}$, $\mathsf{Tip4}'$ and the $\mathsf{Monolith}$ family. All these hash functions have a small number of rounds, i.e., $5$ rounds for $\mathsf{Tip5}$, $\mathsf{Tip4}$, and $\mathsf{Tip4}'$, and $6$ rounds for $\mathsf{Monolith}$ (recently published at ToSC 2024/3). Using the composition of some small-scale lookup tables to build a large-scale permutation over $\mathbb{F}_p$ - which we call S-box - is a main feature in such designs, which can somehow enhance the resistance against the Gröbner basis attack because this large-scale permutation will correspond to a complex and high-degree polynomial representation over $\mathbb{F}_p$.

As the first technical contribution, we propose a novel and efficient algorithm to study the differential property of this S-box and to find a conforming input pair for a randomly given input and output difference. For comparison, a trivial method based on the use of the differential distribution table (DDT) for solving this problem will require time complexity $\mathcal{O}(p^2)$.

For the second contribution, we also propose new frameworks to devise efficient collision attacks on such hash functions. Based on the differential properties of these S-boxes and the new attack frameworks, we propose the first collision attacks on $3$-round $\mathsf{Tip5}$, $\mathsf{Tip4}$, and $\mathsf{Tip4}'$, as well as $2$-round $\mathsf{Monolith}$-$31$ and $\mathsf{Monolith}$-$64$, where the $2$-round attacks on $\mathsf{Monolith}$ are practical. In the semi-free-start (SFS) collision attack setting, we achieve practical SFS collision attacks on $3$-round $\mathsf{Tip5}$, $\mathsf{Tip4}$, and $\mathsf{Tip4}'$. Moreover, the SFS collision attacks can reach up to $4$-round $\mathsf{Tip4}$ and $3$-round $\mathsf{Monolith}$-$64$. As far as we know, this is the first third-party cryptanalysis of these hash functions, which improves the initial analysis given by the designers.
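For a sense of scale, the trivial DDT-based method referenced above tabulates, for every pair of input and output differences, how many inputs conform, which costs $\mathcal{O}(p^2)$ over $\mathbb{F}_p$. A minimal sketch for a toy power-map S-box over a small prime field (illustrative only; not the lookup-table S-boxes of $\mathsf{Tip5}$ or $\mathsf{Monolith}$):

    P = 17                                    # toy field size; real S-boxes use much larger F_p
    sbox = [pow(x, 3, P) for x in range(P)]   # assumed toy S-box: the cube permutation (gcd(3, 16) = 1)

    # Differential distribution table: DDT[a][b] = #{x : S(x + a) - S(x) = b},
    # built by brute force over all p^2 (input, input-difference) pairs.
    ddt = [[0] * P for _ in range(P)]
    for a in range(P):
        for x in range(P):
            b = (sbox[(x + a) % P] - sbox[x]) % P
            ddt[a][b] += 1

    # A conforming pair for a given (a, b) is any x counted in ddt[a][b];
    # finding one this way is exactly the O(p^2)-time baseline.
    a = 5
    b = next(bb for bb in range(1, P) if ddt[a][bb] > 0)
    x = next(x for x in range(P) if (sbox[(x + a) % P] - sbox[x]) % P == b)
    assert (sbox[(x + a) % P] - sbox[x]) % P == b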
Joonas Ahola, Iván Blanco-Chacón, Wilmar Bolaños, Antti Haavikko, Camilla Hollanti, Rodrigo M. Sánchez-Ledesma
ePrint Report
We prove the equivalence between the Ring Learning With Errors (RLWE) and the Polynomial Learning With Errors (PLWE) problems for the maximal totally real subfield of the $2^r 3^s$-th cyclotomic field for $r \geq 3$ and $s \geq 1$. Moreover, we describe a fast algorithm for computing the product of two elements in the ring of integers of these subfields. This multiplication algorithm has quasilinear complexity in the dimension of the field, as it makes use of the fast Discrete Cosine Transform (DCT). Our approach assumes that the two input polynomials are given in a basis of Chebyshev-like polynomials, in contrast to the customary power basis. To validate this assumption, we prove that the change of basis from the power basis to the Chebyshev-like basis can be computed with $\mathcal{O}(n \log n)$ arithmetic operations, where $n$ is the problem dimension. Finally, we provide a heuristic and theoretical comparison of the vulnerability to some attacks for the $p$-th cyclotomic field versus the maximal totally real subextension of the $4p$-th cyclotomic field for a reasonable set of parameters of cryptographic size.
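Multiplication in a Chebyshev-like basis is what makes the DCT applicable. The identity $T_i \cdot T_j = (T_{i+j} + T_{|i-j|})/2$ already gives a simple quadratic-time reference product, which the DCT-based algorithm accelerates to quasilinear time; a minimal sketch of the reference product is below (illustrative only, and it ignores the reduction in the ring of integers).

    def cheb_mul(f, g):
        """Multiply polynomials given by Chebyshev-basis coefficients
        (f = sum f[i]*T_i), using T_i*T_j = (T_{i+j} + T_{|i-j|}) / 2."""
        out = [0.0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                out[i + j] += fi * gj / 2
                out[abs(i - j)] += fi * gj / 2
        return out

    # (T_0 + T_1) * T_1 = T_1 + (T_2 + T_0)/2
    print(cheb_mul([1.0, 1.0], [0.0, 1.0]))   # -> [0.5, 1.0, 0.5]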
Binwu Xiang, Jiang Zhang, Kaixing Wang, Yi Deng, Dengguo Feng
ePrint Report
Recent attacks on NTRU lattices given by Ducas and van Woerden (ASIACRYPT 2021) showed that for moduli $q$ larger than the so-called fatigue point $n^{2.484+o(1)}$, the security of NTRU is noticeably less than that of (ring)-LWE. Unlike NTRU-based PKE with $q$ typically lying in the secure regime of NTRU lattices (i.e., $q
In this paper, we first propose a (matrix) NTRU-based MK-FHE scheme for a super-constant number $k$ of keys without using overstretched NTRU parameters. Our scheme is essentially a combination of two components following the two-layer framework of TFHE/FHEW:
- a simple first-layer matrix NTRU-based encryption that naturally supports multi-key NAND operations with moduli $q=O(k\cdot n^{1.5})$ only linear in the number $k$ of keys;
- and a crucial second-layer NTRU-based encryption that supports an efficient hybrid product between a single-key ciphertext and a multi-key ciphertext for gate bootstrapping.

Then, by replacing the first layer with a more efficient LWE-based multi-key encryption, we obtain an improved MK-FHE scheme with better performance. We also employ a light key-switching technique to reduce the key-switching key size from the previous $O(n^2)$ bits to $O(n)$ bits. A proof-of-concept implementation shows that our two MK-FHE schemes outperform the state-of-the-art TFHE-like MK-FHE schemes in both computation efficiency and bootstrapping key size. Concretely, for $k=8$ at the same 100-bit security level, our improved MK-FHE scheme can bootstrap a ciphertext in 0.54 s on a laptop and only has a bootstrapping key of size 13.89 MB, which are respectively 2.2 times faster and 7.4 times smaller than the MK-FHE scheme (which relies on a second-layer encryption from the ring-LWE assumption) due to Chen, Chillotti and Song (ASIACRYPT 2019).
Eliana Carozza, Geoffroy Couteau
ePrint Report
We investigate the feasibility of constructing threshold signature schemes from the MPC-in-the-head paradigm. Our work addresses the significant challenge posed by recent impossibility results (Doerner et al., CRYPTO'24), which establish inherent barriers to efficient thresholdization of such schemes without compromising their security or significantly increasing the signature size.
- We introduce a general methodology to adapt any MPC-in-the-head signature into a threshold-friendly scheme, ensuring that the dependency on the number of users $n$ grows as $\lambda^2 n + O(1)$. This represents a substantial improvement over the naive concatenation of independent signatures.
- We present a threshold signature scheme on top of the scheme of Carozza, Couteau, and Joux (EUROCRYPT'23). Our security analysis introduces the notion of Corruptible Existential Unforgeability under Chosen Message Attacks (CEUF-CMA), which formalizes resilience against adversarial control over parts of the randomness.
Our results provide a new perspective on the trade-offs between efficiency and security in threshold settings, opening pathways for future improvements in post-quantum threshold cryptography.

22 November 2024

Yu Tao, Lu Zhou, Lei Xie, Dongming Zhang, Xinyu Lei, Fei Xu, Zhe Liu
ePrint Report
Sharding emerges as a promising solution to enhance blockchain scalability. However, it faces two critical limitations during shard reconfiguration: (1) the TPS-Degradation issue, arising from ledger synchronization conflicts during transaction processing, and (2) the Zero-TPS issue, caused by disruptions in transaction processing due to key negotiation. To this end, we propose Shardora, a blockchain sharding system for scaling blockchain by unleashing parallelism. In Shardora, we implement two essential mechanisms: (1) A parallelized dual committee framework with a reputation mechanism to mitigate the TPS-Degradation issue while ensuring system security. (2) A parallelized key pre-negotiation mechanism with a secret-reuse strategy to avoid the Zero-TPS issue while maintaining a continuously high TPS. We prove that Shardora offers theory-guaranteed security. We implement a prototype of Shardora and deploy it on Alibaba Cloud. Experimental results demonstrate that Shardora addresses the limitations by significantly reducing the overhead of both ledger synchronization and key negotiation, which outperforms state-of-the-art sharding schemes by at least 90%. In addition, Shardora shows its superior performance in terms of throughput and latency, achieving a peak throughput of 8300 TPS on a single shard with 600 nodes under LAN conditions. The code of Shardora is publicly available on GitHub.
Beatrice Biasioli, Elena Kirshanova, Chiara Marcolla, Sergi Rovira
ePrint Report
The field of fully homomorphic encryption (FHE) has seen many theoretical and computational advances in recent years, bringing the technology closer to practicality than ever before. For this reason, practitioners in related fields, such as machine learning, are increasingly interested in using FHE to provide privacy to their applications.

Despite this progress, selecting secure and efficient parameters for FHE remains a complex and challenging task due to the intricate interdependencies between parameters. In this work, we address this issue by providing a rigorous theoretical foundation for parameter selection for any LWE-based scheme, with a specific focus on FHE. Our approach starts with an in-depth analysis of lattice attacks on the LWE problem, deriving precise expressions for the most effective ones. Building on this, we introduce closed-form formulas that establish the relationships among the LWE parameters.

In addition, we introduce a numerical method to enable the accurate selection of any configurable parameter to meet a desired security level. Finally, we use our results to build a practical and efficient tool for researchers and practitioners deploying FHE in real-world applications, ensuring that our approach is both rigorous and accessible.
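The numerical selection step can be pictured as a search over a monotone security estimate. The sketch below is purely schematic: estimated_security is a hypothetical placeholder standing in for the closed-form expressions derived in the paper, and the loop finds the smallest dimension n that reaches a target security level for a fixed modulus size and noise width.

    import math

    def estimated_security(n: int, log_q: int, sigma: float) -> float:
        """Hypothetical placeholder for a closed-form security estimate in bits.
        Any formula monotone in n suffices to illustrate the selection loop;
        this is NOT a real cost model."""
        return 0.3 * n * math.log2(sigma) / log_q

    def smallest_secure_dimension(target_bits: float, log_q: int, sigma: float,
                                  lo: int = 256, hi: int = 1 << 16) -> int:
        """Binary search for the least n with estimated_security(n, ...) >= target."""
        while lo < hi:
            mid = (lo + hi) // 2
            if estimated_security(mid, log_q, sigma) >= target_bits:
                hi = mid
            else:
                lo = mid + 1
        return lo

    print(smallest_secure_dimension(128.0, log_q=60, sigma=3.19))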