International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

24 October 2021

Yang Wang, Yanmin Zhao, Mingqiang Wang
ePrint Report
Proxy re-encryption (PRE) schemes, which nicely solve the problem of delegating decryption rights, enable a semi-trusted proxy to transform a ciphertext encrypted under one key into a ciphertext of the same message under another arbitrary key. For a long time, the semantic security of PREs was defined in close analogy to that of public key encryption (PKE) schemes. In PKC 2019, Cohen first pointed out the insufficiency of security under chosen-plaintext attacks (CPA) for PREs and proposed a {\it{strictly stronger}} security notion, named security under honest re-encryption attacks (HRA). Surprisingly, only a few PREs satisfy the stronger HRA security to date, and almost all of them are pairing-based. To the best of our knowledge, we present the first detailed construction of HRA-secure PREs based on standard LWE problems with {\it{comparably small and polynomially-bounded}} parameters in this paper. Combining known reductions, the HRA security of our PREs can also be based on worst-case basic lattice problems (e.g., SIVP$_{\gamma}$). Our single-hop PRE schemes can be easily extended to the multi-hop case (as long as the maximum number of hops is $L=O(1)$). Meanwhile, our single-hop PRE schemes are also key-private, which means that the implicit identities behind a re-encryption key are not revealed even when the proxy colludes with some corrupted users. We also discuss the key-privacy of multi-hop PREs, showing that several existing multi-hop constructions do not satisfy their own key-privacy definitions.
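As a point of reference for the PRE syntax (KeyGen, ReKeyGen, ReEnc, Dec), the following toy Python sketch implements the classic BBS-style ElGamal proxy re-encryption over a small prime-order group. It is only CPA-secure, single-hop, and not key-private, and it is unrelated to the paper's lattice-based construction; the group parameters and names are purely illustrative.

```python
import random

p, q, g = 23, 11, 4   # toy group: p = 2q + 1, g generates the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def enc(pk, m):                                     # m must lie in <g>
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))    # (m*g^r, g^{a*r})

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q              # rk = b/a mod q

def reenc(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                     # g^{a*r*(b/a)} = g^{b*r}

def dec(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)                 # (g^{sk*r})^{1/sk} = g^r
    return c1 * pow(gr, p - 2, p) % p               # m = c1 / g^r

a, pk_a = keygen(); b, pk_b = keygen()
m = pow(g, 7, p)                                    # demo message in <g>
ct = enc(pk_a, m)
assert dec(b, reenc(rekey(a, b), ct)) == m          # delegation works
```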
Matteo Campanelli, Bernardo David, Hamidreza Khoshakhlagh, Anders Konring, Jesper Buus Nielsen
ePrint Report
A number of recent works have constructed cryptographic protocols with flavors of adaptive security by having a randomly-chosen anonymous committee run at each round. Since most of these protocols are stateful, transferring secret states from past committees to future, but still unknown, committees is a crucial challenge. Previous works have tackled this problem with approaches tailor-made for their specific setting, which mostly rely on using a blockchain to orchestrate auxiliary committees that aid in the state hand-over process. In this work, we look at this challenge as an important problem on its own and initiate the study of Encryption to the Future (EtF) as a cryptographic primitive. First, we define a notion of a non-interactive EtF scheme where time is determined with respect to an underlying blockchain and a lottery selects parties to receive a secret message at some point in the future. While this notion seems overly restrictive, we establish two important facts: 1. if used to encrypt towards parties selected in the ``far future'', EtF implies witness encryption for NP over a blockchain; 2. if used to encrypt only towards parties selected in the ``near future'', EtF is not only sufficient for transferring state among committees as required by previous works but also captures previous tailor-made solutions. Inspired by these results, we provide a novel construction of EtF based on witness encryption over commitments (cWE), which we instantiate from a number of standard assumptions via a construction based on generic cryptographic primitives. Finally, we show how to lift ``near future'' EtF to ``far future'' EtF with a protocol based on an auxiliary committee whose communication complexity is independent of the length of plaintext messages being sent to the future.
Jan-Pieter D'Anvers, Daniel Heinz, Peter Pessl, Michiel van Beirendonck, Ingrid Verbauwhede
ePrint Report
Checking the equality of two arrays is a crucial building block of the Fujisaki-Okamoto transformation, and as such it is used in several post-quantum key encapsulation mechanisms, including Kyber and Saber. While this comparison operation is easy to perform in a black-box setting, it is hard to protect efficiently against side-channel attacks. For instance, the hash-based method by Oder et al. is limited to first-order masking, a higher-order method by Bache et al. was shown to be flawed, and a very recent higher-order technique by Bos et al. suffers from high runtime. In this paper, we first demonstrate that the hash-based approach, and likely many similar first-order techniques, succumb to a relatively simple side-channel collision attack: we successfully recover a Kyber512 key using just 6000 traces. While this does not break the security claims, it does show the need for efficient higher-order methods. We then present a new higher-order masked comparison algorithm based on the (insecure) higher-order method of Bache et al. Our new method is 4.2x (resp. 7.5x) faster than the method of Bos et al. for 2nd-order (resp. 3rd-order) masking on the ARM Cortex-M4, and unlike the method of Bache et al., the new technique takes ciphertext compression into account. We prove correctness, security, and masking security in detail and provide performance numbers for 2nd- and 3rd-order implementations. Finally, we verify the side-channel security of our implementation using the test vector leakage assessment (TVLA) methodology.
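To make the attacked building block concrete, here is a minimal Python sketch of the first-order hash-based comparison idea: equality of two Boolean-masked arrays is checked by comparing hashes of the XOR-combined shares, which, in a black-box sense, leaks only the single equality bit. This is the kind of first-order technique the paper shows succumbs to a side-channel collision attack; the paper's higher-order algorithm is different and not reproduced here.

```python
import hashlib, secrets

def share(x: bytes):
    """First-order Boolean masking: x = s0 XOR s1."""
    s0 = secrets.token_bytes(len(x))
    s1 = bytes(a ^ b for a, b in zip(x, s0))
    return s0, s1

def masked_equal(a, b):
    (a0, a1), (b0, b1) = a, b
    d0 = bytes(x ^ y for x, y in zip(a0, b0))
    d1 = bytes(x ^ y for x, y in zip(a1, b1))
    # d0 XOR d1 == a XOR b, so a == b iff d0 == d1; comparing hashes of the
    # shares (never d0 XOR d1 itself) reveals only this one bit.
    return hashlib.sha3_256(d0).digest() == hashlib.sha3_256(d1).digest()

ct = secrets.token_bytes(32)                 # e.g., a re-encapsulated ciphertext
assert masked_equal(share(ct), share(ct))
assert not masked_equal(share(ct), share(secrets.token_bytes(32)))
```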
Aleksei Udovenko, Giuseppe Vitto
ePrint Report
We report a break of the $IKEp182 challenge using a meet-in-the-middle attack strategy improved with multiple SIKE-specific optimizations. The attack was executed on the HPC cluster of the University of Luxembourg and required less than 10 core-years and 256TiB of high-performance network storage (GPFS). Different trade-offs allow execution of the attack with similar time complexity and reduced storage requirements of only about 70TiB.
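For readers unfamiliar with the generic strategy, the sketch below shows the textbook meet-in-the-middle time/memory trade-off on a toy discrete logarithm (baby-step giant-step): a table of roughly $\sqrt{n}$ entries trades memory for time. The actual attack on the $IKEp182 challenge is far more involved and relies on SIKE-specific optimizations not reflected here.

```python
from math import isqrt

def bsgs(g, h, p, order):
    """Find x with g^x = h (mod p) in ~sqrt(order) time AND memory."""
    m = isqrt(order) + 1
    table = {pow(g, j, p): j for j in range(m)}   # baby steps: the memory side
    step = pow(g, -m, p)                          # g^{-m}
    gamma = h
    for i in range(m):                            # giant steps: the time side
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % p
    return None

p, g = 101, 2                                     # toy group of order 100
x = 77
assert bsgs(g, pow(g, x, p), p, p - 1) == x
```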
Fabian Hertel, Nicolas Huber, Jonas Kittelberger, Ralf Kuesters, Julian Liedtke, Daniel Rausch
ePrint Report
Modern electronic voting systems (e-voting systems) are designed to achieve a variety of security properties, such as verifiability, accountability, and vote privacy. Some of these systems aim at so-called tally-hiding: they compute the election result, according to some result function, like the winner of the election, without revealing any other information to any party. In particular, if desired, they neither reveal the full tally consisting of all (aggregated or even individual) votes nor parts of it, except for the election result, according to the result function. Tally-hiding systems offer many attractive features, such as strong privacy guarantees both for voters and for candidates, and protection against Italian attacks. The Ordinos system is a recent provably secure framework for accountable tally-hiding e-voting that extends Helios and can be instantiated for various election methods and election result functions. So far, practical instantiations and implementations of Ordinos have been developed only for rather simple result functions (e.g., computing the $k$ best candidates) and single/multi-vote elections.

In this paper, we propose and implement several new Ordinos instantiations in order to support Borda voting, the Hare-Niemeyer method for proportional representation, multiple Condorcet methods, and Instant-Runoff Voting. Our instantiations, which are based on suitable secure multi-party computation (MPC) components, offer the first tally-hiding implementations for these voting methods. To evaluate the practicality of our MPC components and the resulting e-voting systems, we provide extensive benchmarks for all our implementations.
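To illustrate one of the newly supported result functions, here is the plain (non-MPC) Borda result function in Python. In Ordinos this computation would run inside MPC on aggregated encrypted votes, so the per-candidate scores below would never be revealed; the sketch only fixes what is being computed.

```python
def borda(ballots, candidates):
    """Borda count: rank r (0-indexed) among n candidates earns n-1-r points."""
    n = len(candidates)
    score = {c: 0 for c in candidates}
    for ranking in ballots:
        for pos, c in enumerate(ranking):
            score[c] += n - 1 - pos
    winner = max(score, key=score.get)
    return winner, score

ballots = [("A", "B", "C"), ("B", "A", "C"), ("A", "C", "B")]
print(borda(ballots, ["A", "B", "C"]))   # A wins with 5 points
```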
Lucjan Hanzlik, Daniel Slamanig
ePrint Report
Anonymous credentials (ACs) are a powerful cryptographic tool for the secure use of digital services that simultaneously aims for strong privacy guarantees for users and strong authentication guarantees for service providers. They allow users to selectively prove possession of attributes encoded in a credential without revealing any other meaningful information about themselves. While there is a significant body of research on AC systems, modern use-cases of ACs such as mobile applications come with various requirements not sufficiently considered so far. These include preventing the sharing of credentials and coping with resource constraints of the platforms (e.g., smart cards such as SIM cards in smartphones). Such aspects are typically out of scope of AC constructions, and thus AC systems that can be considered entirely practical have remained elusive.

In this paper we address this problem by introducing and formalizing the notion of core/helper anonymous credentials (CHAC). The model considers a constrained core device (e.g., a SIM card) and a powerful helper device (e.g., a smartphone). The key idea is that the core device performs operations that do not depend on the size of the credential or the number of attributes, but at the same time the helper device is unable to use the credential without its help. We present a provably secure generic construction of CHACs using a combination of signatures with flexible public keys (SFPK) and the novel notion of aggregatable attribute-based equivalence class signatures (AAEQ) along with a concrete instantiation. The key characteristics of our scheme are that the size of showing tokens is independent of the number of attributes in the credential(s) and that the core device only needs to compute a single elliptic curve scalar multiplication, regardless of the number of attributes. We confirm the practical efficiency of our CHACs with an implementation of our scheme on a Multos smart card as the core and an Android smartphone as the helper device. A credential showing requires less than 500 ms on the smart card and around 200 ms on the smartphone (even for a credential with 1000 attributes).
Qi Lei, Zijia Yang, Qin Wang, Yaoling Ding, Zhe Ma, An Wang
ePrint Report
Deep learning (DL)-based profiled attacks have proved to be a powerful tool in side-channel analysis. A variety of multi-layer perceptron (MLP) networks and convolutional neural networks (CNN) have thereby been applied to cryptographic algorithm implementations to recover correct keys with fewer traces and in a shorter time. However, these attacks have mostly focused on small datasets whose points of interest are already well-trimmed. Countermeasures applied in embedded systems often result in high-dimensional side-channel traces, i.e., each input trace contains a large number of time samples. Time jittering and random delay techniques introduce desynchronization but increase SCA complexity as well. Such traces inevitably require complicated neural network designs and large numbers of trainable parameters to recover the correct keys. Therefore, performing profiled attacks (directly) on high-dimensional datasets is difficult.

To bridge this gap, we propose a dimension reduction tool for high-dimensional traces by combining signal-to-noise ratio (SNR) analysis and an autoencoder. With the designed asymmetric undercomplete autoencoder (UAE) architecture, we extract a small group of critical features from numerous time samples. The compression rate of our UAE method reaches 40x on synchronized datasets and 30x on desynchronized datasets. This preprocessing step facilitates the profiled attacks by extracting potential leakage features. To demonstrate its effectiveness, we evaluate our proposed method on the raw ASCAD dataset with 100,000 samples in each trace. We also derive desynchronized datasets from the raw ASCAD dataset and validate our method under the random delay effect. As current MLP and CNN structures cannot exploit the S-box leakage from these traces, whether raw or autoencoder-preprocessed, we further propose a $2^n$-structure MLP network as the attack model. By applying the UAE and the $2^n$-structure MLP network to these traces, experimental results show that all correct subkeys on synchronized datasets (16 S-boxes) and desynchronized datasets are successfully revealed within hundreds of seconds. This shows that our autoencoder can significantly facilitate DL-based profiled attacks on high-dimensional datasets.
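As a rough illustration of the preprocessing step, the sketch below builds an asymmetric undercomplete autoencoder in Keras: a deeper encoder compresses a trace to a small bottleneck (a 40x ratio here) while a deliberately shallow decoder reconstructs it, and after training only the encoder is kept to produce features for the profiled attack. All layer sizes, activations, and hyperparameters are illustrative guesses, not the paper's architecture.

```python
import tensorflow as tf

D, LATENT = 4_000, 100   # illustrative; the raw ASCAD traces have 100,000 samples

inp = tf.keras.Input(shape=(D,))
h = tf.keras.layers.Dense(1_000, activation="selu")(inp)        # deeper encoder
z = tf.keras.layers.Dense(LATENT, activation="selu", name="bottleneck")(h)
out = tf.keras.layers.Dense(D, activation="linear")(z)          # shallow decoder

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(traces, traces, epochs=100, batch_size=128)

encoder = tf.keras.Model(inp, autoencoder.get_layer("bottleneck").output)
# features = encoder.predict(traces)   # 40x fewer samples, fed to the attack
```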
Koji Nuida
ePrint Report
We consider a setting of two-party computation between a server and a client where every message received by the server is encrypted by a fully homomorphic encryption (FHE) scheme and its decryption key is held only by the client. Akavia and Vald (IACR ePrint Archive, 2021) revisited the privacy problem in such protocols against malicious servers and revealed (contrary to a naive expectation) that under certain conditions, a malicious server can recover the client's input even if the underlying FHE scheme is IND-CPA secure. They also gave some sufficient conditions for the FHE scheme to ensure privacy against malicious servers. However, their argument did not consider the possibility that a query from a malicious server to a client involves an invalid ciphertext. In this paper, we show, by giving a concrete example, that if such an invalid query is simply rejected by the client, then the sufficient conditions of Akavia and Vald's result do not in general ensure privacy against malicious servers. We also propose another option for handling an invalid query, in which the client returns a random ciphertext (without explicitly rejecting the query), and show that such a protocol is private against malicious servers if the underlying FHE scheme is IND-CPA secure, sanitizable (in the sense of Ducas and Stehl\'{e}, EUROCRYPT 2016), and circular secure.
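The proposed handling of invalid queries is essentially a few lines of control flow. The Python sketch below is not an FHE scheme: the `ToyFHE` stand-in is an insecure one-time-pad placeholder invented purely for illustration. It only shows the point that the client answers an invalid ciphertext with a fresh random ciphertext instead of an explicit rejection.

```python
import secrets
from dataclasses import dataclass

MOD = 2**16

@dataclass
class ToyFHE:
    """Insecure stand-in for an FHE scheme; only the control flow matters."""
    pad: int
    def encrypt(self, m): return (m + self.pad) % MOD
    def decrypt(self, c): return (c - self.pad) % MOD
    def is_valid(self, c): return isinstance(c, int) and 0 <= c < MOD

def answer_query(fhe, ct, f):
    if not fhe.is_valid(ct):
        # Return a random ciphertext instead of rejecting: an explicit
        # rejection is exactly what the paper shows can leak the input.
        return secrets.randbelow(MOD)
    return fhe.encrypt(f(fhe.decrypt(ct)))

fhe = ToyFHE(pad=secrets.randbelow(MOD))
print(answer_query(fhe, fhe.encrypt(7), lambda x: x * 3))   # valid query
print(answer_query(fhe, "garbage", lambda x: x * 3))        # invalid query
```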
Ben Marshall, Dan Page
ePrint Report
Supporting masking countermeasures for non-invasive side-channel security in instruction set architectures is a hard problem. Masked operations often have a large number of inputs and outputs, and enabling portable higher-order masking has remained difficult. However, there are clear benefits to enabling this in terms of performance, code density, and security guarantees. We present SME, an instruction set extension for enabling secure and efficient software masking of cryptographic code at higher security orders. Our design improves on past work by enabling the same software to run at higher masking orders, depending on the level of security the CPU/SoC implementer has deemed appropriate for their product or device at design time. Our approach relies on similarities between implementations of higher-order masking schemes and traditional vector programming. It greatly simplifies the task of writing masked software and restores the basic promise of ISAs: that the same software will run correctly and securely on any correctly implemented CPU with the necessary security guarantees. We describe our concept as a custom extension to the RISC-V ISA and its soon-to-be-ratified scalar cryptography extension. An example implementation is also described, with performance and area trade-offs detailed for several masking security orders. To our knowledge, ours is the first example of enabling flexible side-channel secure implementations of the official RISC-V lightweight cryptography instructions.
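To see why the same masked software can run at any order, consider the classic ISW AND gadget, modeled in Python below over bit-sliced 64-bit words: the code is generic in the number of shares, which is the property SME exposes at the ISA level by letting the implementer fix the masking order at design time. This is a software model of a standard gadget, not SME's actual instruction semantics.

```python
import secrets

def share(x: int, n: int):
    """Split a 64-bit word x into n Boolean shares."""
    s = [secrets.randbits(64) for _ in range(n - 1)]
    last = x
    for v in s:
        last ^= v
    return s + [last]

def unshare(shares):
    out = 0
    for v in shares:
        out ^= v
    return out

def isw_and(a, b):
    """ISW AND gadget on Boolean shares. Order-generic: the masking order is
    len(a) - 1, and the identical code runs at any order."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(64)
            c[i] ^= r
            c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return c

x, y, order = 0xDEADBEEF, 0x12345678, 3           # 3rd order -> 4 shares
cx, cy = share(x, order + 1), share(y, order + 1)
assert unshare(isw_and(cx, cy)) == x & y
```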
Aayush Jain, Alexis Korb, Paul Lou, Amit Sahai
ePrint Report
We initiate the study of a problem called the Polynomial Independence Distinguishing Problem (PIDP). The problem is parameterized by a set of polynomials $\mathcal{Q}=(q_1,\ldots, q_m)$ where each $q_i:\mathbb{R}^n\rightarrow \mathbb{R}$ and an input distribution $\mathcal{D}$ over the reals. The goal of the problem is to distinguish a tuple of the form $\{ q_i,q_i(\mathbf{x})\}_{i\in [m]} $ from $\{ q_i,q_i(\mathbf{x}_i)\}_{i\in [m]} $ where $\mathbf{x}, \mathbf{x}_1,\ldots , \mathbf{x}_m$ are each sampled independently from the distribution $\mathcal{D}^n$. Refutation and search versions of this problem are conjectured to be hard in general for polynomial time algorithms (Feige, STOC 02) and are also subject to known theoretical lower bounds for various hierarchies (such as Sum-of-Squares and Sherali-Adams). Nevertheless, we show polynomial time distinguishers for the problem in several scenarios, including settings where such lower bounds apply to the search or refutation versions of the problem. Our results apply to the setting when each polynomial is a constant degree multilinear polynomial. We show that this natural problem admits polynomial time distinguishing algorithms for the following scenarios:

Non-trivial Distinguishers: We define a non-trivial distinguisher to be an algorithm that runs in time $n^{O(1)}$ and distinguishes between the two distributions with probability at least $n^{-O(1)}$. We show that such non-trivial distinguishers exist for large classes of worst-case families of polynomials, and essentially any non-trivial input distribution that is symmetric around zero, and isn't equivalent to a distribution over Boolean values.

In particular, we show that when $m\geq n$ and the sets of indices corresponding to the variables present in each monomial exhibit a weak expansion property with expansion factor greater than $1/2$ for unions of at most $4$ sets, then a non-trivial distinguisher exists.

Overwhelming Distinguishers: Next we consider the problem of amplifying the success probability of the distinguisher, to guarantee that it succeeds with probability $1-n^{-\omega(1)}$. We obtain such an overwhelming distinguisher for natural random classes of homogeneous multilinear constant degree $d$ polynomials, denoted by $\mathcal{Q}_{n,d,p}$, and natural input distributions $\mathcal{D}$ such as discrete Gaussians or uniform distributions over bounded intervals. The polynomials are chosen by independently sampling each coefficient to be $0$ with probability $p$ and uniformly from $\mathcal{D}$ otherwise. For these polynomials, we show a surprisingly simple distinguisher that requires $p> n\log n/\binom{n}{d}$ and $m\geq \tilde{O}(n^{2})$ samples, independent of the degree $d$. This is in contrast with the setting for refutation, where we have sum-of-squares lower bounds against constant degree sum-of-squares algorithms (Grigoriev, TCS 01; Schoenebeck, FOCS 08) for this parameter regime for degree $d>6$.
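The abstract does not spell out the distinguisher, but the degree-1 case already conveys the flavor. In the NumPy toy below, for linear $q_i(\mathbf{x}) = \langle c_i, \mathbf{x}\rangle$ with $\mathbf{x}\sim\mathcal{N}(0,\sigma^2 I)$, the product $q_i(\mathbf{x})\,q_j(\mathbf{x})$ has expectation $\sigma^2\langle c_i, c_j\rangle$ under a shared input and $0$ under independent inputs, so a simple sign-weighted average separates the two distributions. This linear toy is our own illustration; the paper's distinguishers handle constant-degree multilinear polynomials.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma = 50, 20_000, 1.0
C = rng.normal(size=(m, n))                   # known coefficient vectors c_i

def evaluations(shared: bool):
    if shared:
        x = rng.normal(0, sigma, size=n)      # one x for every polynomial
        return C @ x
    X = rng.normal(0, sigma, size=(m, n))     # fresh x_i per polynomial
    return np.einsum("ij,ij->i", C, X)

def statistic(y):
    i = np.arange(0, m, 2); j = i + 1         # disjoint pairs (q_i, q_j)
    signs = np.sign(np.einsum("ij,ij->i", C[i], C[j]))   # sign of <c_i, c_j>
    return np.mean(signs * y[i] * y[j])       # ~ sigma^2*E|<c_i,c_j>| vs ~ 0

print(statistic(evaluations(True)), statistic(evaluations(False)))
```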
Guilherme Perin, Lichao Wu, Stjepan Picek
ePrint Report
One of the main promoted advantages of deep learning in profiling side-channel analysis is the possibility of skipping the feature engineering process. Despite that, most recent publications rely on feature selection, as the attacked interval of the side-channel measurements is pre-selected. This is similar to the worst-case security assumptions in security evaluations when the random secret shares (e.g., mask shares) are known during the profiling phase: an evaluator can identify the locations of points of interest and efficiently trim the trace interval. To broadly understand how feature selection impacts the performance of deep learning-based profiling attacks, this paper investigates four different feature selection scenarios that could realistically be used in practical security evaluations. The scenarios range from the minimum possible number of features to the whole available trace samples.

Our results emphasize that deep neural networks as profiling models achieve successful key recovery independently of the explored feature selection scenario against first-order masked software implementations of AES-128. Concerning the number of features, we make three main observations: 1) scenarios with less carefully selected points of interest and larger attacked trace intervals yield better attack performance in terms of the number of traces required during the attack phase; 2) optimizing and reducing the number of features does not necessarily improve the chances of finding good models in the hyperparameter search; and 3) in all explored feature selection scenarios, the random hyperparameter search always indicates a successful model with a single hidden layer for MLPs and two hidden layers for CNNs, which calls into question the need for complex models on the considered datasets. Our results demonstrate key recovery with a single attack trace on all datasets for at least one of the feature selection scenarios.
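For context on finding 3), a single-hidden-layer MLP in Keras is as small as the sketch below; the paper's point is that random hyperparameter search kept landing on models of roughly this shape for the considered datasets. The layer sizes here are illustrative, not the searched values.

```python
import tensorflow as tf

def make_mlp(n_features: int, n_classes: int = 256, hidden: int = 500):
    # One hidden layer sufficed for MLPs in every feature selection scenario.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(hidden, activation="relu",
                              input_shape=(n_features,)),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = make_mlp(n_features=700)   # e.g., a pre-selected point-of-interest window
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```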
Caspar Schwarz-Schilling, Joachim Neu, Barnabé Monnot, Aditya Asgaonkar, Ertem Nusret Tas, David Tse
ePrint Report
Recently, two attacks were presented against Proof-of-Stake (PoS) Ethereum: one where short-range reorganizations of the underlying consensus chain are used to increase individual validators' profits and delay consensus decisions, and one where adversarial network delay is leveraged to stall consensus decisions indefinitely. We provide refined variants of these attacks, considerably relaxing the requirements on adversarial stake and network timing, and thus rendering the attacks more severe. Combining techniques from both refined attacks, we obtain a third attack which allows an adversary with vanishingly small fraction of stake and no control over network message propagation (assuming instead probabilistic message propagation) to cause even long-range consensus chain reorganizations. Honest-but-rational or ideologically motivated validators could use this attack to increase their profits or stall the protocol, threatening incentive alignment and security of PoS Ethereum. The attack can also lead to destabilization of consensus from congestion in vote processing.
Hyesun Kwak, Dongwon Lee, Yongsoo Song, Sameer Wagh
ePrint Report
Homomorphic Encryption (HE), first demonstrated in 2009, is a class of encryption schemes that enables computation over encrypted data. Recent advances in the design of better protocols have led to the development of two different lines of HE schemes -- Multi-Party Homomorphic Encryption (MPHE) and Multi-Key Homomorphic Encryption (MKHE). These primitives cater to different applications as each approach has its own pros and cons. At a high level, MPHE schemes tend to be much more efficient but require the set of computing parties to be fixed throughout the entire operation, frequently a limiting assumption. On the other hand, MKHE schemes tend to have poor scaling (quadratic) with the number of parties but allow us to add new parties to the joint computation anytime since they support computation between ciphertexts under different keys.

In this work, we formalize a new variant of HE called Multi-Group Homomorphic Encryption (MGHE). Stated informally, an MGHE scheme provides a seamless integration between MPHE and MKHE, thereby enjoying the best of both worlds. In this framework, a group of parties generates a public key jointly which results in the compactness of ciphertexts and the efficiency of homomorphic operations similar to MPHE. However, unlike MPHE, it also supports computations on encrypted data under different keys similar to MKHE.

We provide the first construction of such an MGHE scheme from BFV and demonstrate experimental results. More importantly, the joint public key generation procedure of our scheme is fully non-interactive so that the set of computing parties does not have to be determined and no information about other parties is needed in advance of individual key generation. At the heart of our construction is a novel re-factoring of the relinearization key.
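The flavor of non-interactive joint key generation can be seen in the standard BFV-style multi-party baseline, sketched below with NumPy over a toy ring: under a common random polynomial $a$, each party independently publishes $b_i = -a\cdot s_i + e_i$, and the joint public key is just the sum of the $b_i$. This well-known baseline is only meant to illustrate why no interaction is needed; the paper's novel contribution, the re-factored relinearization key, is omitted here, and all parameters are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q = 16, 2**20                     # toy ring Z_Q[x]/(x^N + 1)

def polymul(a, b):
    """Negacyclic convolution: multiplication mod x^N + 1, coefficients mod Q."""
    c = np.zeros(2 * N, dtype=np.int64)
    for i in range(N):
        c[i:i + N] += a[i] * b
    return (c[:N] - c[N:]) % Q

a = rng.integers(0, Q, N)            # common random polynomial (from a CRS)

def keygen():
    s = rng.integers(-1, 2, N)       # small ternary secret
    e = rng.integers(-2, 3, N)       # small error
    return s, (-(polymul(a, s)) + e) % Q        # publish b_i = -a*s_i + e_i

parties = [keygen() for _ in range(4)]
s_joint = sum(s for s, _ in parties)
b_joint = sum(b for _, b in parties) % Q        # joint key: just add the b_i

# b_joint + a*s_joint equals the sum of the small errors, i.e. a valid key.
err = (b_joint + polymul(a, s_joint)) % Q
err = np.where(err > Q // 2, err - Q, err)
print(err)                                      # small entries only
```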
Long Meng, Liqun Chen
ePrint Report
Time-stamping services produce time-stamp tokens as evidence to prove that digital data existed at given points in time. Time-stamp tokens contain verifiable cryptographic bindings between data and time, which are produced using cryptographic algorithms. In the ANSI, ISO/IEC and IETF standards for time-stamping services, cryptographic algorithms are addressed in two aspects: (i) client-side hash functions used to hash data into digests for non-disclosure, and (ii) server-side algorithms used to bind the time and digests of data. These algorithms have limited lifespans due to their operational life cycles and the increasing computational power of attackers. After an algorithm is compromised, time-stamp tokens using it are no longer trusted. The ANSI and ISO/IEC standards provide renewal mechanisms for time-stamp tokens. However, the renewal mechanisms for client-side hash functions are specified ambiguously, which may lead to flawed implementations. Moreover, the security analyses of long-term time-stamping schemes in existing papers only cover server-side renewal; client-side renewal is missing. In this paper, we analyse the necessity of client-side renewal and propose a comprehensive long-term time-stamping scheme that addresses both client-side and server-side renewal mechanisms. After that, we formally analyse and evaluate the client-side security of our proposed scheme.
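A simplified cartoon of client-side renewal, with hypothetical helper names, is sketched below: before the old client-side hash function is broken, the data is re-hashed under a new hash function and re-timestamped together with the old token, so trust in the old token is carried forward. Real tokens (e.g., RFC 3161 style) carry TSA signatures and precise encodings that this toy omits.

```python
import hashlib, json, time

def client_digest(data: bytes, algo: str) -> bytes:
    return hashlib.new(algo, data).digest()

def tsa_sign(digest: bytes) -> dict:
    # Stand-in for a TSA: a real token carries a signature, not a bare hash.
    return {"digest": digest.hex(), "time": time.time()}

def renew_client_side(data: bytes, old_token: dict, new_algo: str) -> dict:
    # Before the old client-side hash is broken: bind the data under the new
    # hash function together with the old token, then re-timestamp both.
    d_new = client_digest(data, new_algo)
    old_enc = json.dumps(old_token, sort_keys=True).encode()
    linked = d_new + hashlib.new(new_algo, old_enc).digest()
    return tsa_sign(hashlib.new(new_algo, linked).digest())

data = b"contract.pdf bytes"
t0 = tsa_sign(client_digest(data, "sha1"))       # legacy token
t1 = renew_client_side(data, t0, "sha3_256")     # renewed before SHA-1 fell
print(t1)
```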
Bhaskar Roberts, Mark Zhandry
ePrint Report
The construction of public key quantum money based on standard cryptographic assumptions is a longstanding open question. Here we introduce franchised quantum money, an alternative form of quantum money that is easier to construct. Franchised quantum money retains the features of a useful quantum money scheme, namely unforgeability and local verification: anyone can verify banknotes without communicating with the bank. In franchised quantum money, every user gets a unique secret verification key, and the scheme is secure against counterfeiting and sabotage, a new security notion that appears in the franchised model. Finally, we construct franchised quantum money and prove security assuming one-way functions.
Ashrujit Ghoshal, Riddhi Ghosal, Joseph Jaeger, Stefano Tessaro
ePrint Report
This paper continues the study of memory-tight reductions (Auerbach et al., CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known, and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight.

This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary.

We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions or probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions we avoid generic impossibility results of Auerbach et al. (CRYPTO '17) and Ghoshal et al. (CRYPTO '20).

As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
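The core trick of hiding state in random strings can be caricatured in a few lines. In the sketch below (illustrative only, with a toy stream cipher built from SHA-256), the reduction encrypts the state it would otherwise have to store into the "random" string it hands the adversary, and recovers it when the string comes back, so the reduction itself keeps only a short key. Real reductions must additionally argue that these strings are distributed like honest ones.

```python
import hashlib, secrets

KEY = secrets.token_bytes(16)   # the reduction's only persistent state: O(1)

def _pad(nonce: bytes, n: int) -> bytes:
    # Toy keystream (illustration only, state must fit in 32 bytes here).
    return hashlib.sha256(KEY + nonce).digest()[:n]

def issue_random(state: bytes) -> bytes:
    """Instead of storing `state` in a table, hide it inside the 'random'
    string handed to the adversary."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(state, _pad(nonce, len(state))))
    return nonce + ct

def recover(rand: bytes) -> bytes:
    nonce, ct = rand[:16], rand[16:]
    return bytes(a ^ b for a, b in zip(ct, _pad(nonce, len(ct))))

r = issue_random(b"coins-for-query-42")
assert recover(r) == b"coins-for-query-42"   # the adversary stored r, not us
```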
Maikel Kerkhof, Lichao Wu, Guilherme Perin, Stjepan Picek
ePrint Report
Deep learning-based side-channel analysis represents one of the most powerful side-channel attack approaches. Thanks to its capability to deal with raw features and countermeasures, it has become the de facto standard evaluation method for evaluation labs/certification schemes. To reach this performance level, recent works significantly improved deep learning-based attacks from various perspectives, like hyperparameter tuning, design guidelines, or custom neural network architecture elements. Still, limited attention has been given to the core of the learning process: the loss function.

This paper analyzes the limitations of existing loss functions and then proposes a novel side-channel analysis-optimized loss function, Focal Loss Ratio (FLR), to cope with the identified drawbacks observed in other loss functions. To validate our design, we 1) conduct a thorough experimental study considering various scenarios (datasets, leakage models, neural network architectures) and 2) compare with other loss functions commonly used in deep learning-based side-channel analysis (both ``traditional'' ones and those designed for side-channel analysis). Our results show that FLR loss outperforms other loss functions in various conditions without incurring computational overhead compared to common loss functions like categorical cross-entropy.
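The abstract does not give the FLR formula, so as a stand-in the sketch below implements the standard focal loss as a custom Keras loss; FLR is, per its name, a ratio-based variant of this family, and its exact definition is in the paper. The sketch mainly shows how an SCA-tailored loss plugs into training.

```python
import tensorflow as tf

def focal_loss(gamma: float = 2.0):
    """Standard focal loss for one-hot labels: -(1 - p_t)^gamma * log(p_t).
    Down-weights easy examples so training focuses on hard ones."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)
        p_t = tf.reduce_sum(y_true * y_pred, axis=-1)   # prob of true class
        return -tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)
    return loss

# model.compile(optimizer="adam", loss=focal_loss(gamma=2.0))
```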
Keitaro Hashimoto, Shuichi Katsumata, Eamonn Postlethwaite, Thomas Prest, Bas Westerbaan
ePrint Report
Continuous group key agreements (CGKAs) are a class of protocols that can provide strong security guarantees to secure group messaging protocols such as Signal and MLS. Protection against device compromise is provided by commit messages: at a regular rate, each group member may refresh their key material by uploading a commit message, which is then downloaded and processed by all the other members. In practice, propagating commit messages dominates the bandwidth consumption of existing CGKAs.

We propose Chained CmPKE, a CGKA with an asymmetric bandwidth cost: in a group of $N$ members, a commit message costs $O(N)$ to upload and $O(1)$ to download, for a total bandwidth cost of $O(N)$. In contrast, TreeKEM [19, 24, 76] costs $\Omega(\log N)$ in both directions, for a total cost $\Omega(N\log N)$. Our protocol relies on generic primitives, and is therefore readily post-quantum.

We go one step further and propose post-quantum primitives that are tailored to Chained CmPKE, which allows us to cut the growth rate of uploaded commit messages by two or three orders of magnitude compared to naive instantiations. Finally, we realize a software implementation of Chained CmPKE. Our experiments show that even for groups with a size as large as $N = 2^{10}$, commit messages can be computed and processed in less than 100 ms.
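The claimed asymmetry is easy to sanity-check with back-of-the-envelope arithmetic, counting ciphertext-sized messages and ignoring constants: per commit, one member uploads and the other $N-1$ members each download.

```python
from math import ceil, log2

def per_commit_bandwidth(N, upload, download):
    # One uploader, N-1 downloaders per commit (constants ignored).
    return upload + (N - 1) * download

N = 2**10
treekem = per_commit_bandwidth(N, ceil(log2(N)), ceil(log2(N)))  # ~N log N
chained = per_commit_bandwidth(N, N, 1)                          # ~N
print(treekem, chained)   # 10240 vs 2047 unit messages
```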
Veronika Kuchta, Joseph K. Liu
ePrint Report
In this paper, we formally prove the non-slanderability property of the first linkable ring signature scheme, presented at ACISP 2004 (where the notion was called linkable spontaneous anonymous group signature (LSAG)). This rigorous security analysis gives confidence to any future construction of a Ring Confidential Transaction (RingCT) protocol for blockchain systems that uses this signature scheme as its basis.
Tianyu Zheng, Shang Gao, Bin Xiao, Yubo Song
ePrint Report
In this paper, we propose any-out-of-many proofs, a logarithmic zero-knowledge scheme for proving knowledge of arbitrarily many secrets out of a public list. Unlike existing $k$-out-of-$N$ proofs [S\&P'21, CRYPTO'21], our approach also hides the exact number of secrets $k$, which can be used to achieve a higher anonymity level. Furthermore, we enhance the efficiency of our scheme through a transformation that adopts the improved inner-product argument from Bulletproofs [S\&P'18]: only $2 \cdot \lceil \log_2(N) \rceil + 13$ elements need to be sent in a non-interactive proof.
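Plugging a few list sizes into the stated proof size $2 \cdot \lceil \log_2(N) \rceil + 13$ shows the logarithmic growth concretely (counting transmitted elements only, as in the abstract):

```python
from math import ceil, log2

def proof_elements(N: int) -> int:
    return 2 * ceil(log2(N)) + 13

for N in (2**6, 2**10, 2**12):
    print(f"N = {N:5d}: {proof_elements(N)} elements")   # 25, 33, 37
```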

We further use our proof scheme to implement both multiple ring signature schemes and RingCT protocols. For multiple ring signatures, we need to add a boundary constraint on the number $k$ to avoid a proof of an empty secret set. Thus, an improved version called the bounded any-out-of-many proof is presented, which preserves all the nice features of the original protocol, such as high anonymity and logarithmic size. As for RingCT, both the original and bounded proofs can be used safely. The results of the performance evaluation indicate that our RingCT protocol is more efficient and secure than existing ones. We also believe our techniques are applicable in other privacy-preserving scenarios.