IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
01 September 2018
Martin Ekerå
In this paper, we generalize and bridge Shor's groundbreaking works on computing orders and general discrete logarithms, our earlier works on computing short discrete logarithms with tradeoffs and Seifert's work on computing orders with tradeoffs.
In particular, we demonstrate how the idea of enabling tradeoffs may be extended to the case of computing general discrete logarithms.
This yields a reduction by up to a factor of two in the number of group operations that need to be computed in each run of the quantum algorithm at the expense of having to run the algorithm multiple times.
Combined with our earlier works this implies that the number of group operations that need to be computed in each run of the quantum algorithm equals the number of bits in the logarithm times a small constant factor that depends on the tradeoff factor.
We comprehensively analyze the probability distribution induced by our quantum algorithm, describe how the algorithm may be simulated, and estimate the number of runs required to compute logarithms with a given minimum success probability for different tradeoff factors.
Our algorithm does not require the group order to be known. If the order is unknown, it may be computed at no additional quantum cost from the same set of outputs as is used to compute the logarithm.
Lilya Budaghyan, Marco Calderini, Irene Villa
In the present paper we introduce some sufficient conditions and a procedure for checking whether, for a given function, CCZ-equivalence is more general than EA-equivalence together with taking inverses of permutations. It is known from Budaghyan, Carlet and Pott (2006) and Budaghyan, Carlet and Leander (2009) that for quadratic APN functions (in both the monomial and polynomial cases) CCZ-equivalence is more general. We prove here that for non-quadratic APN functions CCZ-equivalence can be more general (by studying the only known APN function which is CCZ-inequivalent to both power functions and quadratics). On the contrary, we prove that for power non-Gold APN functions, CCZ-equivalence coincides with EA-equivalence together with the inverse transformation for $n\le 8$. We conjecture that this is true for any $n$.
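For concreteness, the APN property discussed here can be checked directly for small fields: a function F over GF(2^n) is APN exactly when every nonzero input difference sends each output difference to at most two inputs. The following minimal Python sketch (not from the paper; the field GF(2^8) with the AES reduction polynomial and the Gold function x^3 are choices made here purely for illustration) verifies this for x^3:

# Minimal sketch: checking the APN property of F(x) = x^3 over GF(2^8).
# F is APN iff every nonzero input difference a yields each output
# difference at most twice.  The reduction polynomial 0x11B is the AES
# one, chosen only for concreteness.
N = 8
MOD = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^8) with reduction mod 0x11B."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def F(x: int) -> int:
    return gf_mul(x, gf_mul(x, x))  # x^3, a Gold APN function

def differential_uniformity() -> int:
    worst = 0
    for a in range(1, 1 << N):
        counts = {}
        for x in range(1 << N):
            d = F(x ^ a) ^ F(x)
            counts[d] = counts.get(d, 0) + 1
        worst = max(worst, max(counts.values()))
    return worst

print(differential_uniformity())  # prints 2, i.e. F is APN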
Fangguo Zhang, Shengli Liu
We provide a new approach to the elliptic curve discrete logarithm problem (ECDLP). First, we construct Elliptic Codes (EC codes) from the ECDLP. Then we propose an algorithm for finding the minimum weight codewords of algebraic geometry codes, especially elliptic codes, via list decoding. Finally, with the minimum weight codewords, we show how to solve the ECDLP. This work may provide a potential approach to speeding up the computation of the ECDLP.
Louis Goubin, Francisco Vial-Prado
Can Bob give Alice his decryption secret and be convinced that she will not give it to someone else? This is achieved by a proxy re-encryption scheme where Alice does not have Bob's secret but instead
she can transform ciphertexts in order to decrypt them with her own key. In this article, we answer this question from a different perspective, relying on a property that can be found in the well-known modified NTRU encryption scheme. We show how parties can collaborate to one-way-glue their secret keys together, giving Alice's secret key the additional ability to decrypt Bob's ciphertexts. The main advantage is that the protocols we propose can be plugged directly into the modified NTRU scheme with no post-key-generation space or time costs, nor any modification of ciphertexts. In addition, this property translates to the NTRU-based multikey homomorphic scheme, allowing a hierarchic chain of users to be equipped with automatic re-encryption of messages while supporting homomorphic operations on ciphertexts. To achieve this, we propose two-party computation protocols in cyclotomic polynomial rings. We base security in the presence of various types of adversaries on the RLWE and DSPR assumptions, and on two new problems in the modified NTRU ring.
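The protocols above work in cyclotomic polynomial rings; as a point of reference, here is a minimal Python sketch of arithmetic in R_q = Z_q[x]/(x^n + 1), the power-of-two cyclotomic ring commonly used in modified-NTRU-style schemes (the toy parameters n = 8, q = 97 are assumptions for illustration, not the paper's):

# Minimal sketch of arithmetic in R_q = Z_q[x]/(x^n + 1).
# Toy ring degree and modulus, chosen for illustration only.
import random

n, q = 8, 97

def ring_add(f, g):
    return [(a + b) % q for a, b in zip(f, g)]

def ring_mul(f, g):
    """Negacyclic convolution: x^n = -1 in Z_q[x]/(x^n + 1)."""
    r = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                r[k] = (r[k] + fi * gj) % q
            else:
                r[k - n] = (r[k - n] - fi * gj) % q  # wrap with sign flip
    return r

f = [random.randrange(q) for _ in range(n)]
g = [random.randrange(q) for _ in range(n)]
print(ring_mul(f, g))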
Tetsu Iwata, Virginie Lallemand, Gregor Leander, Yu Sasaki
This article presents universal forgery and multiple forgeries against MergeMAC, which has recently been proposed to fit scenarios where bandwidth is limited and where strict time constraints apply. MergeMAC divides an input message into two parts, $m\|\tilde{m}$, and its tag is computed by $\mathcal{F}( \mathcal{P}_1(m) \oplus \mathcal{P}_2(\tilde{m}) )$, where $\mathcal{P}_1$ and $\mathcal{P}_2$ are PRFs and $\mathcal{F}$ is a public function. The tag size is 64 bits. The designers claim $64$-bit security and imply that queries beyond the birthday bound may be tolerated.
This paper first shows that the number of queries must be limited to the birthday bound, because a generic universal forgery against CBC-like MACs can be adapted to MergeMAC.
Afterwards another attack is presented that works with very few queries: using 3 queries and $2^{58.6}$ computations of $\mathcal{F}$, a preimage attack against a weak $\mathcal{F}$ breaks the claimed security.
The analysis is then generalized to a MergeMAC variant where $\mathcal{F}$ is replaced with a one-way function $\mathcal{H}$.
Finally, multiple forgeries are discussed in which the attacker's goal is to improve the ratio of the number of queries to the number of forged tags. It is shown that the attacker obtains tags of $q^2$ messages only by making $2q-1$ queries in the sense of existential forgery, and this is tight when $q^2$ messages have a particular structure. For universal forgery, tags for $3q$ arbitrarily chosen messages can be obtained by making $5q$ queries.
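To make the attacked structure concrete, the following toy Python sketch instantiates Tag = F(P1(m) XOR P2(m~)) with a 64-bit tag, using HMAC-SHA-256 as a stand-in for the PRFs and a truncated SHA-256 as the public function F. Both instantiations are assumptions for illustration and are not the actual MergeMAC components:

# Toy model of the MergeMAC structure Tag = F(P1(m) XOR P2(m~)) with a
# 64-bit tag.  HMAC-SHA-256 stands in for the keyed PRFs P1, P2 and a
# truncated SHA-256 for the public function F (illustrative assumptions).
import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()[:16]

def F(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:8]  # public, unkeyed, 64-bit output

def merge_mac(k1: bytes, k2: bytes, m: bytes, m_tilde: bytes) -> bytes:
    inner = bytes(a ^ b for a, b in zip(prf(k1, m), prf(k2, m_tilde)))
    return F(inner)

tag = merge_mac(b"key-one", b"key-two", b"left-part", b"right-part")
print(tag.hex())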
Joppe W. Bos, Simon J. Friedberger
We show how to implement the Montgomery reduction algorithm for isogeny based cryptography such that it can utilize the "unsigned multiply accumulate accumulate long" instruction present on modern ARM architectures. This results in a practical speed-up of a factor 1.34 compared to the approach used by SIKE: the supersingular isogeny based submission to the ongoing post-quantum standardization effort.
Moreover, motivated by the recent work of Costello and Hisil (ASIACRYPT 2017), which shows that there is only a moderate degradation in performance when evaluating large odd degree isogenies, we search for more general supersingular isogeny friendly moduli. Using graphics processing units to accelerate this search we find many such moduli which allow for faster implementations on embedded devices. By combining these two approaches we manage to make the modular reduction 1.5 times as fast on a 32-bit ARM platform.
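For reference, the kernel being accelerated is word-serial Montgomery reduction; a minimal Python sketch with 32-bit words is below (toy modulus, illustration only). The per-word step combines a 32x32-bit multiplication with two 32-bit additions, which fits exactly in 64 bits and is the pattern the UMAAL instruction computes in one step:

# Minimal sketch of word-serial Montgomery reduction with 32-bit words.
# The per-word multiply-accumulate (a 32x32 multiply plus two 32-bit
# additions fits exactly in 64 bits) is the pattern ARM's UMAAL computes
# in one instruction.  Toy parameters, illustration only.
W = 32
BASE = 1 << W

def mont_reduce(T: int, N: int, words: int) -> int:
    """Return T * R^{-1} mod N, where R = 2^(32*words) and T < N*R."""
    n_inv = (-pow(N, -1, BASE)) % BASE  # N' = -N^{-1} mod 2^32
    for _ in range(words):
        m = ((T % BASE) * n_inv) % BASE  # make the low word vanish
        T = (T + m * N) >> W             # divide by the word base
    return T - N if T >= N else T

# Check against ordinary modular arithmetic.
N = (1 << 127) - 1          # a 4-word odd modulus (toy choice)
words = 4
R = 1 << (W * words)
a, b = 0x1234567890ABCDEF, 0xFEDCBA0987654321
assert mont_reduce(a * b, N, words) == (a * b * pow(R, -1, N)) % N
print("ok")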
Guilhem Castagnos, Fabien Laguillaumie, Ida Tucker
Functional encryption is a modern public-key cryptographic primitive allowing an encryptor to finely control the information revealed to recipients from a given ciphertext.
Abdalla, Bourse, De Caro, and Pointcheval (PKC 2015) were the first to consider functional encryption restricted to the class of linear functions, i.e. inner products.
Though their schemes are only secure in the selective model, Agrawal, Libert, and Stehlé (CRYPTO 2016) soon provided adaptively secure schemes for the same functionality. These constructions, which rely on standard assumptions such as the Decision Diffie-Hellman (DDH), Learning-with-Errors (LWE), and Paillier's Decision Composite Residuosity (DCR) problems, do however suffer from various practical drawbacks. Namely, the DCR based scheme only computes inner products modulo an RSA integer, which is oversized for many practical applications, while the computation of inner products modulo a prime $p$ either requires, for their DDH based scheme, that the inner product be contained in a sufficiently small interval for decryption to be efficient, or, as in the LWE based scheme, suffers from poor efficiency due to impractical parameters.
In this paper, we provide adaptively secure functional encryption schemes for the inner product functionality which are both efficient and allow for the evaluation of unbounded inner products modulo a prime $p$. Our constructions rely on new natural cryptographic assumptions in a cyclic group containing a subgroup where the discrete logarithm (DL) problem is easy, which extend Castagnos and Laguillaumie's assumption (CT-RSA 2015) of a DDH group with an easy DL subgroup. Instantiating our generic construction using class groups of imaginary quadratic fields gives rise to the most efficient functional encryption for inner products modulo an arbitrarily large prime $p$. One of our schemes outperforms the DCR variant of Agrawal et al.'s protocols in terms of the size of keys and ciphertexts by factors varying between 2 and 20 for 112-bit security.
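For context on the small-interval limitation mentioned above, here is a toy Python sketch in the spirit of the DDH-based scheme of Abdalla et al.: decryption recovers g^{<x,y>} and must finish with a brute-force discrete logarithm, so it is only efficient when the inner product lies in a small range. The tiny group parameters are assumptions for illustration and offer no security:

# Toy sketch of DDH-based inner-product FE in the style of Abdalla et al.
# (PKC 2015).  Decryption recovers g^{<x,y>} and brute-forces a discrete
# log, so the inner product must lie in a small interval.  Tiny insecure
# group, illustration only.
import random

q = 1019          # prime group order
p = 2 * q + 1     # p = 2039 is prime; we work in the order-q subgroup
g = 4             # a square mod p, hence of order q
L = 4             # vector length

def setup():
    s = [random.randrange(q) for _ in range(L)]          # msk
    h = [pow(g, si, p) for si in s]                      # mpk
    return s, h

def encrypt(h, x):
    r = random.randrange(q)
    ct0 = pow(g, r, p)
    cts = [pow(g, xi, p) * pow(hi, r, p) % p for xi, hi in zip(x, h)]
    return ct0, cts

def keygen(s, y):
    return sum(si * yi for si, yi in zip(s, y)) % q      # sk_y = <s, y>

def decrypt(ct0, cts, y, sk_y, bound):
    num = 1
    for ci, yi in zip(cts, y):
        num = num * pow(ci, yi, p) % p
    gx = num * pow(ct0, q - sk_y, p) % p                 # g^{<x,y>}
    for v in range(bound):                               # brute-force DL
        if pow(g, v, p) == gx:
            return v
    raise ValueError("inner product outside the small interval")

s, h = setup()
x, y = [1, 2, 3, 4], [2, 0, 1, 1]
ct0, cts = encrypt(h, x)
print(decrypt(ct0, cts, y, keygen(s, y), bound=100))     # prints 9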
31 August 2018
David Derler, Sebastian Ramacher, Daniel Slamanig
Double-authentication preventing signatures (DAPS) are a variant of digital signatures which have received considerable attention recently (Derler et al. EuroS&P 2018, Poettering AfricaCrypt 2018). They are unforgeable signatures in the usual sense and sign messages that are composed of an address and a payload. Their distinguishing feature is that signing two different payloads with respect to the same address allows the secret signing key to be publicly extracted from the two signatures.
DAPS are known in the factoring, discrete logarithm, and lattice settings. The majority of the constructions are ad-hoc. Only recently, Derler et al. (EuroS&P 2018) presented the first generic construction that allows extending any secure discrete-logarithm-based signature scheme to DAPS. However, their scheme has the drawback that the number of potential addresses (the address space) used for signing is polynomially bounded (and in fact small), as the sizes of the secret and public keys of the resulting DAPS are linear in the address space. In this paper we overcome this limitation and present a generic construction of DAPS with constant-size keys and signatures. Our techniques are not tailored to a specific algebraic setting and in particular allow us to construct the first DAPS without structured hardness assumptions, i.e., from symmetric-key primitives, yielding a candidate for post-quantum secure DAPS.
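The extraction property of DAPS can be illustrated with a classic discrete-logarithm gadget (this is not the paper's generic construction): if the signing nonce is derived deterministically from the address alone, two Schnorr-style signatures on the same address but different payloads share a nonce, and anyone can solve the resulting pair of linear equations for the secret key. A toy Python sketch:

# Toy illustration of the DAPS extraction property in the discrete-log
# setting (a classic nonce-reuse gadget): the nonce depends only on the
# address, so signing two payloads under one address leaks the key.
import hashlib, random

q = 1019
p = 2 * q + 1     # 2039, prime; toy group, illustration only
g = 4

def H(*parts) -> int:
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big") % q

x = random.randrange(1, q)        # secret key
seed = b"nonce-seed"              # part of the secret key material

def sign(address: bytes, payload: bytes):
    k = H(seed, address)          # nonce depends only on the address
    R = pow(g, k, p)
    c = H(str(R).encode(), address, payload)
    return R, (k + c * x) % q

def extract(address, pay1, sig1, pay2, sig2):
    (R1, s1), (R2, s2) = sig1, sig2
    c1 = H(str(R1).encode(), address, pay1)
    c2 = H(str(R2).encode(), address, pay2)
    return (s1 - s2) * pow(c1 - c2, -1, q) % q   # solve for x

addr = b"addr-42"
sig_a, sig_b = sign(addr, b"payload-A"), sign(addr, b"payload-B")
assert extract(addr, b"payload-A", sig_a, b"payload-B", sig_b) == x
print("secret key extracted")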
Vladimir Kolesnikov
Two-party Secure Function Evaluation (SFE) allows two parties to evaluate a function known to both parties on their private inputs. In some settings, the input of one of the parties is the definition of the computed function, and requires protection as well. The standard solution for SFE of private functions (PF-SFE) is to rely on Universal Circuits (UC), which can be programmed to implement any circuit of size $s$. Recent UC optimizations report that the cost of UC for $s$-gate Boolean circuits is $\approx 5s \log s$.
Instead, we consider garbling that allows evaluating one of a given set S of circuits. We show how to evaluate one of the circuits in S at a communication cost comparable to that of evaluating the largest circuit in S. In other words, we show how to omit generating and sending inactive GC branches. Our main insight is that a garbled circuit is just a collection of garbled tables, and as such can be reused to emulate the throw-away computation of an inactive execution branch without revealing to the Evaluator whether it evaluates the active or an inactive branch.
This cannot be proven within the standard BHR garbled circuits framework because the function description is inseparable from the garbling by definition. We carefully extend BHR in a general way, introducing topology-decoupling circuit garbling. We preserve all existing constructions and proofs of the BHR framework, while allowing this and other future constructions which may treat garbled tables separately from the function description.
Our construction is presented in the semi-honest model.
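The insight that a garbled circuit is just a collection of garbled tables can be seen in a single textbook Yao-style garbled AND gate, sketched below in Python. The hash-based encryption, toy label sizes, and the shortcut of recognizing the correct row by comparing against the known output labels are all simplifications for illustration; this is not the paper's topology-decoupling construction:

# Textbook sketch of one Yao garbled AND gate: a garbled circuit is
# literally a collection of such tables.  Hash-based "encryption" and the
# output-label check are simplifications (real garbling uses
# point-and-permute or redundancy to recognize the correct row).
import hashlib, os, random

def enc(ka: bytes, kb: bytes, data: bytes) -> bytes:
    pad = hashlib.sha256(ka + kb).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(pad, data))   # XOR pad, self-inverse

def garble_and():
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}  # 0/1 labels
    table = [enc(labels["a"][va], labels["b"][vb], labels["c"][va & vb])
             for va in (0, 1) for vb in (0, 1)]
    random.shuffle(table)        # hide which row is which
    return labels, table

def evaluate(table, ka: bytes, kb: bytes, out_labels) -> int:
    for row in table:
        cand = enc(ka, kb, row)  # decrypt by re-applying the XOR pad
        if cand in out_labels:
            return out_labels.index(cand)
    raise ValueError("no row decrypted")

labels, table = garble_and()
ka, kb = labels["a"][1], labels["b"][0]       # inputs a=1, b=0
print(evaluate(table, ka, kb, labels["c"]))   # prints 0 (= 1 AND 0)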
Marcos A. Simplicio Jr., Eduardo Lopes Cominetti, Harsh Kupwade Patil, Jefferson E. Ricardini, Leonardo T. D. Ferraz, Marcos Vinicius M. Silva
Vehicular communication (V2X) technologies are expected to be common in the future, providing better transportation safety and efficiency. However, their large-scale deployment requires addressing some challenges. In particular, to prevent abuse by drivers and by the system itself, V2X architectures must: (1) ensure the authenticity of messages, which is usually accomplished by means of digital certification; and (2) preserve the privacy of honest users, so owners of non-revoked certificates cannot be easily identified or tracked by eavesdroppers. A promising solution for managing V2X-oriented certificates in an efficient manner is the Security Credential Management System (SCMS), which is among the main candidates for standardization in the United States. In this paper, aiming to enhance and address issues in the SCMS architecture, we provide three main contributions. First, we describe and fix two birthday attacks against SCMS's certificate revocation process, thus preventing the system's security degradation with the number of issued and revoked certificates. In addition, we describe a mechanism for improving the flexibility of revocation, allowing certificates and their owner's privacy to be temporarily revoked in an efficient manner; this functionality is useful, for example, in case of vehicle theft or kidnapping. Finally, we propose a method that simplifies SCMS's system architecture, removing the need for the so-called Linkage Authorities (LAs); this not only results in cost reductions for SCMS's implementation, but also improves its security and privacy due to the removal of one potential point of failure/collusion.
Hao Chen, Zhicong Huang, Kim Laine, Peter Rindal
Private Set Intersection (PSI) allows two parties, the sender and the receiver, to compute the intersection of their private sets without revealing extra information to each other. We are interested in the {\it unbalanced} PSI setting, where (1) the receiver's set is significantly smaller than the sender's, and (2) the receiver (with the smaller set) has a low-power device. Also, in a {\it Labeled PSI} setting, the sender holds a label for each item in its set, and the receiver obtains the labels for the items in the intersection. We build upon the unbalanced PSI protocol of Chen, Laine, and Rindal (CCS~2017) in several ways: we add efficient support for arbitrary length items, we construct and implement an unbalanced Labeled PSI protocol with small communication complexity, and we strengthen the security model using an Oblivious Pseudo-Random Function (OPRF) in a pre-processing phase. Our protocols outperform previous ones: for an intersection of sets of sizes $2^{20}$ and $512$ with arbitrary length items, our protocol has a total online running time of just $1$~second (single thread) and a total communication cost of $4$~MB. For a larger example, an intersection of sets of sizes $2^{28}$ and $1024$ with arbitrary length items has an online running time of $12$ seconds (multi-threaded), with less than $18$~MB of total communication.
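The role of the OPRF can be seen in a minimal Python sketch of the underlying idea: both sides end up holding PRF values under the sender's key, and only matching items collide. Here the OPRF is simulated by evaluating HMAC directly; in the real protocol the receiver obtains its PRF values obliviously, without learning the key:

# Minimal sketch of the OPRF-based PSI idea.  The OPRF is simulated by
# direct HMAC evaluation; in the real protocol the receiver gets its PRF
# values without learning sender_key.  Toy data, illustration only.
import hmac, hashlib

def prf(key: bytes, item: bytes) -> bytes:
    return hmac.new(key, item, hashlib.sha256).digest()

sender_key = b"sender-oprf-key"
sender_set = [b"alice", b"bob", b"carol", b"dave"]
receiver_set = [b"bob", b"erin", b"dave"]

# Sender publishes PRF values of its (large) set.
sender_prfs = {prf(sender_key, x) for x in sender_set}

# Receiver gets PRF values of its (small) set via the (simulated) OPRF,
# then intersects locally.
intersection = [x for x in receiver_set if prf(sender_key, x) in sender_prfs]
print(intersection)  # [b'bob', b'dave']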
Zhongxiang Zheng, Guangwu Xu, Chunhuan Zhao
In this paper, we start with a discussion of discrete Gaussian measures on lattices.
Several results of Banaszczyk are analyzed, and different approaches are suggested.
In the second part of the paper we prove two new bounds for the smoothing parameter of lattices.
Under the natural assumption that $\varepsilon$ is suitably small, we obtain the following two estimates of the smoothing parameter:
1. \[ \eta_{\varepsilon}(\mathbb{Z}) \le \sqrt{\frac{\ln \big(\frac{\varepsilon}{44}+\frac{2}{\varepsilon}\big)}{\pi}}. \]
2. For a lattice ${\cal L}\subset \mathbb{R}^n$ of dimension $n$, \[ \eta_{\varepsilon}({\cal L}) \le \sqrt{\frac{\ln \big(n-1+\frac{2n}{\varepsilon}\big)}{\pi}}\tilde{bl}({\cal L}). \]
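For a feel of the first bound, it can be evaluated numerically for a few values of $\varepsilon$ (a quick Python check, using the formula exactly as stated above):

# Numerical evaluation of bound 1 above for a few small epsilon values.
from math import log, pi, sqrt

for k in (20, 40, 80):
    eps = 2.0 ** -k
    bound = sqrt(log(eps / 44 + 2 / eps) / pi)
    print(f"eps = 2^-{k}:  eta_eps(Z) <= {bound:.4f}")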
Carl Bootland, Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
We introduce a new homomorphic encryption scheme that is natively capable of computing with complex numbers. This is done by generalizing recent work of Chen, Laine, Player and Xia, who modified the Fan-Vercauteren scheme by replacing the integral plaintext modulus $t$ by a linear polynomial $X - b$. Our generalization studies plaintext moduli of the form $X^m + b$. Our construction significantly reduces the noise growth in comparison to the original FV scheme, so that much deeper arithmetic circuits can be homomorphically executed.
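To see why a plaintext modulus of the form X^m + b captures complex numbers: in Z[X]/(X^m + b) the variable X behaves like a complex root zeta of X^m + b, so ring arithmetic mirrors complex arithmetic at zeta. A small Python check with toy parameters m = 4, b = 2 (chosen here for illustration):

# In Z[X]/(X^m + b), X acts as a complex root zeta of X^m + b, so
# multiplying polynomials modulo X^m + b agrees with multiplying their
# evaluations at zeta.  Toy parameters m = 4, b = 2.
import cmath

m, b = 4, 2
zeta = b ** (1 / m) * cmath.exp(1j * cmath.pi / m)   # zeta^m = -b

def poly_mul_mod(f, g):
    """Multiply in Z[X]/(X^m + b), where X^m = -b."""
    r = [0] * m
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < m:
                r[k] += fi * gj
            else:
                r[k - m] -= b * fi * gj   # reduce using X^m = -b
    return r

def ev(f):
    return sum(c * zeta ** i for i, c in enumerate(f))

f, g = [1, 2, 0, 1], [3, 0, 1, 0]
h = poly_mul_mod(f, g)
print(abs(ev(h) - ev(f) * ev(g)))   # ~0: ring product matches complex product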
ByeongHak Lee, Jooyoung Lee
We propose a new construction of tweakable block ciphers from standard block ciphers. Our construction, dubbed XHX2, is the cascade of two independent XHX block ciphers, so it makes two calls to the underlying block cipher using tweak-dependent keys. We prove the security of XHX2 up to min{2^{2(n+m)/3}, 2^{n+m/2}} queries (ignoring logarithmic factors) in the ideal cipher model, when the block cipher operates on n-bit blocks using m-bit keys. The XHX2 tweakable block cipher is the first construction that achieves beyond-birthday-bound security with respect to the input size of the underlying block cipher in the ideal cipher model.
Yu Long Chen, Bart Mennink, Mridul Nandi
Length doublers are cryptographic functions that transform an n-bit cryptographic primitive into an efficient and secure cipher that length-preservingly encrypts strings of length in [n,2n-1]. All currently known constructions are only proven secure up to the birthday bound, and for all but one construction this bound is known to be tight. We consider the remaining candidate, LDT by Chen et al. (ToSC 2017(3)), and prove that it achieves security beyond the birthday bound for the domain [n,3n/2). We generalize the construction to multiple rounds and demonstrate that by adding one more encryption layer to LDT, beyond-birthday-bound security can be achieved for all strings of length in [n,2n-1]: security up to around 2^{2n/3} for the encryption of strings of length close to n and security up to around 2^{n} for strings of length close to 2n. The security analysis of both schemes is performed in a modular manner through the introduction and analysis of a new concept called "harmonic permutation primitives."
Michael Meyer, Steffen Reith
Recently Castryck, Lange, Martindale, Panny, and Renes published CSIDH, a new key exchange scheme using supersingular elliptic curve isogenies. Due to its small key sizes, and the possibility of a non-interactive and a static-static key exchange, CSIDH seems very interesting for practical applications. However, the performance is rather slow. Therefore, we employ some techniques to speed up the algorithms, mainly by restructuring the elliptic curve point multiplications and by using twisted Edwards curves in the isogeny image curve computations, yielding a speed-up factor of 1.33 in comparison to the implementation of Castryck et al. Furthermore, we suggest techniques for constant-time implementations.
Yu Chen, Yuyu Wang, Hong-sheng Zhou
In this work, we develop a framework for building leakage-resilient cryptosystems in the bounded leakage model from puncturable primitives and indistinguishability obfuscation ($\iO$). Our main results are as follows:
We build leakage-resilient weak pseudorandom functions (PRFs) from weak puncturable PRFs and $\iO$, which readily imply leakage-resilient secret-key encryption.
We build leakage-resilient publicly evaluable pseudorandom functions (PEPRFs) from puncturable PEPRFs and $\iO$, which readily imply a leakage-resilient key encapsulation mechanism and thus public-key encryption. As a building block of independent interest, we realize puncturable PEPRFs either from new puncturable objects, including puncturable trapdoor functions and puncturable extractable hash proof systems, or from existing puncturable PRFs with $\iO$.
We construct the first leakage-resilient public-coin signature from selective puncturable PRFs, leakage-resilient one-way functions and $\iO$. This settles the open problem posed by Boyle, Segev and Wichs (Eurocrypt 2011).
By further assuming the existence of lossy functions, all the above constructions achieve an optimal leakage rate of $1-o(1)$. Such a leakage rate was not previously known to be achievable for weak PRFs, PEPRFs, or public-coin signatures.
Rajani Singh, Ashutosh Dhar Dwivedi, Gautam Srivastava
Bitcoin is a decentralized cryptocurrency payment system, working without a single administrator or a third-party bank. Bitcoins are created by miners using a complex mathematical "proof of work" procedure based on computing hashes. For each successful attempt, miners get rewards in the form of bitcoin and transaction fees, and they participate in mining to earn this reward as income. Mining cryptocurrency such as bitcoin has become a common interest among miners as the bitcoin market value is very high. Since bitcoin is a non-renewable resource whose mining reward decreases over time, obvious questions arise: what will be the incentive for miners in bitcoin mining over time, and how will balance be maintained in the bitcoin mining market as time goes on? Given that at any time only one miner is rewarded (the one who wins the mining game by first creating and updating the blocks, while the remaining miners' effort is wasted), it is better for miners to mine strategically. Such a strategy could be a plan of action designed to achieve a long-term goal, either Cooperative, where miners can benefit by cooperating and making binding agreements, or Non-Cooperative, where miners do not make binding agreements and compete against each other.
In this paper we create a game-theoretic model in which we consider bitcoin mining as a continuous-time dynamic game played over an infinite horizon. We propose two different types of game theory solutions: the social optimum (Cooperative), where the miners altogether maximize their total profit, and the Nash equilibrium (Non-Cooperative), where each miner behaves selfishly and individually wants to maximize his/her total profit. Note that in our game theory model, a player represents a single "miner" or a single "mining pool" who is responsible for creating a block in the blockchain. We find that the bitcoin supply is not sustainable and is depleted very quickly under the Nash equilibrium, even though it is sustainable under the social optimum. This result accords with the common belief that mining in cooperation gives each miner a higher payoff or profit than mining individually. Finally, to keep the bitcoin market at equilibrium, we also propose a linear tax system of Pigovian type in order to enforce social optimality in our bitcoin dynamic game model.
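The qualitative finding can be rendered in a toy discrete-time simulation (all rates and the pool size below are illustrative assumptions, not the paper's continuous-time model): selfish extraction is more aggressive than the jointly agreed rate, so the common reward pool is exhausted sooner:

# Toy discrete-time illustration: selfish (Nash) mining extracts the
# common reward pool faster than cooperative (social-optimum) mining.
# All rates and the pool size are illustrative assumptions only.
def depletion_time(stock: float, rate: float) -> int:
    t = 0
    while stock > 1e-6:
        stock -= rate * stock     # extract a fixed fraction per period
        t += 1
    return t

pool = 100.0
coop_rate = 0.05                  # restrained, jointly agreed extraction
nash_rate = 0.20                  # each of several miners over-extracts

print("social optimum lasts", depletion_time(pool, coop_rate), "periods")
print("Nash equilibrium lasts", depletion_time(pool, nash_rate), "periods")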
Rafael del Pino, Vadim Lyubashevsky, Gregor Seiler
We present a group signature scheme, based on the hardness of lattice problems, whose outputs are more than an order of magnitude smaller than the currently most efficient schemes in the literature. Since lattice-based schemes are also usually non-trivial to efficiently implement, we additionally provide the first experimental implementation of lattice-based group signatures demonstrating that our construction is indeed practical -- all operations take less than half a second on a standard laptop.
A key component of our construction is a new zero-knowledge proof system for proving that a committed value belongs to a particular set of small size. The sets for which our proofs are applicable are exactly those that contain elements that remain stable under Galois automorphisms of the underlying cyclotomic number field of our lattice-based protocol. We believe that these proofs will find applications in other settings as well.
The motivation of the new zero-knowledge proof in our construction is to allow the efficient use of the selectively-secure signature scheme (i.e. a signature scheme in which the adversary declares the forgery message before seeing the public key) of Agrawal et al. (Eurocrypt 2010) in constructions of lattice-based group signatures and other privacy protocols. For selectively-secure schemes to be meaningfully converted to standard signature schemes, it is crucial that the size of the message space is not too large. Using our zero-knowledge proofs, we can strategically pick small sets for which we can provide efficient zero-knowledge proofs of membership.
Katerina Sotiraki, Manolis Zampetakis, Giorgos Zirdelis
Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with profound connections to the complexity of the fundamental cryptographic primitives: collision-resistant hash functions and one-way permutations. In contrast to most of the other subclasses of TFNP, no complete problem is known for PPP. Our work identifies the first PPP-complete problem without any circuit or Turing Machine given explicitly in the input, and thus we answer a longstanding open question from [Papadimitriou1994]. Specifically, we show that constrained-SIS (cSIS), a generalized version of the well-known Short Integer Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
In order to give intuition behind our reduction for constrained-SIS, we identify another PPP-complete problem with a circuit in the input but closely related to lattice problems. We call this problem BLICHFELDT and it is the computational problem associated with Blichfeldt's fundamental theorem in the theory of lattices.
Building on the inherent connection of PPP with collision-resistant hash functions, we use our completeness result to construct the first natural hash function family that captures the hardness of all collision-resistant hash functions in a worst-case sense, i.e., it is natural and universal in the worst case. The close resemblance of our hash function family to SIS leads us to the first candidate collision-resistant hash function that is both natural and universal in an average-case sense.
Finally, our results enrich our understanding of the connections between PPP, lattice problems and other concrete cryptographic assumptions, such as the discrete logarithm problem over general groups.
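For readers less familiar with the underlying problem: SIS, which cSIS generalizes, asks, given A in Z_q^{n x m}, for a short nonzero z with Az = 0 (mod q). A brute-force toy instance in Python (tiny parameters, purely illustrative; by pigeonhole, 2^m > q^n guarantees a {-1,0,1} solution exists here):

# Toy brute-force of the Short Integer Solution (SIS) problem: given A in
# Z_q^{n x m}, find a short nonzero z with A z = 0 (mod q).  Tiny
# parameters; real instances are exponentially out of reach of this search.
import itertools, random

n, m, q = 2, 8, 13
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

for z in itertools.product((-1, 0, 1), repeat=m):   # short coefficients
    if any(z) and all(sum(a * c for a, c in zip(row, z)) % q == 0
                      for row in A):
        print("short solution z =", z)
        break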