## IACR News

Here you can see all recent updates to the IACR webpage.


#### 23 August 2019

###### Christian Mouchet, Juan Troncoso-Pastoriza, Jean-Pierre Hubaux

ePrint Report
In this work, we advance the conceptual and technical aspects of Secure Multiparty Computation (SMC). We approach SMC as a computational problem and propose a novel formulation of this problem in terms of trust boundaries. From this formulation, we derive a general framework that enables a more comprehensive characterization of both the SMC problem and its solutions. Existing SMC solutions are commonly seen as diametrically different and incompatible, but we show how they can be mapped to particular instances of our framework, hence enabling their analysis under a common and unified basis. In this framework, the core component of an SMC solution is a distributed homomorphic cryptosystem. We show that the features this cryptosystem provides determine the need for interaction and overall efficiency of the corresponding SMC solutions. Based on this analysis, we introduce a practical instantiation of our framework by proposing a distributed version of the Brakerski-Fan-Vercauteren (BFV) lattice-based homomorphic cryptosystem. We analyze the security, noise overhead, and computational costs of this scheme. Due to its conceptual simplicity and efficiency, our solution has great potential for addressing highly relevant scenarios, such as secure data-sharing and machine-learning. Hence, this work constitutes a step forward in secure computation, by enabling computation across trust boundaries.

###### Subhabrata Samajder, Palash Sarkar

ePrint Report
In the context of linear cryptanalysis of block ciphers, let $p_0$ (resp. $p_1$) be the probability that a particular linear approximation holds for the right (resp. a wrong) key choice. The standard right key randomisation hypothesis states that $p_0$ is a constant $p\neq 1/2$ and the standard wrong key randomisation hypothesis states that $p_1=1/2$. Using these hypotheses, the success probability $P_S$ of the attack can be expressed in terms of the data complexity $N$. The resulting expression for $P_S$ is a monotone increasing function of $N$.

Building on earlier work by Daemen and Rijmen (2007), Bogdanov and Tischhauser (2014) argued that $p_1$ should be considered to be a random variable. They postulated the adjusted wrong key randomisation hypothesis which states that $p_1$ follows a normal distribution. A non-intuitive consequence was that the resulting expression for $P_S$ is no longer a monotone increasing function of $N$. A later work by Blondeau and Nyberg (2017) argued that $p_0$ should also be considered to be a random variable and they postulated the adjusted right key randomisation hypothesis which states that $p_0$ follows a normal distribution.

In this work, we revisit the key randomisation hypotheses. While the argument that $p_0$ and $p_1$ should be considered to be random variables is indeed valid, we consider the modelling of their distributions by normal to be inappropriate. Being probabilities, the support of the distributions of $p_0$ and $p_1$ should be subsets of $[0,1]$ which does not hold for normal distributions. We show that if $p_0$ and $p_1$ follow any distributions with supports which are subsets of $[0,1]$, and $E[p_0]=p$ and $E[p_1]=1/2$, then the expression for $P_S$ that is obtained is exactly the same as the one obtained using the standard key randomisation hypotheses. Consequently, $P_S$ is a monotone increasing function of $N$ even when $p_0$ and $p_1$ are considered to be random variables.
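Under the standard hypotheses, the monotonicity of $P_S$ in $N$ is easy to check numerically. A sketch using the classical normal-approximation formula $P_S = \Phi(2\sqrt{N}\,|\epsilon| - \Phi^{-1}(1 - 2^{-a}))$ (a Selçuk-style estimate; the bias $\epsilon = p - 1/2$ and advantage $a$ below are toy values, not from the paper):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_phi(prob, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF via bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < prob:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def success_prob(N, eps, a):
    """P_S = Phi(2*sqrt(N)*|eps| - Phi^{-1}(1 - 2^{-a}))."""
    return phi(2.0 * sqrt(N) * abs(eps) - inv_phi(1.0 - 2.0 ** (-a)))

eps = 2.0 ** -10     # bias p - 1/2 of the linear approximation (toy value)
a = 8                # advantage in bits (toy value)
probs = [success_prob(N, eps, a) for N in (2**18, 2**20, 2**22, 2**24)]
print(probs)         # strictly increasing in N
```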


###### Seungkwang Lee, Myungchul Kim

ePrint Report
Differential Fault Analysis (DFA) intentionally injects a fault into the encryption process and recovers the secret key from the mathematical relationship between faulty and fault-free ciphertexts. Even white-box cryptographic implementations are still vulnerable to DFA. A common way to defend against DFA is to use some type of redundancy, such as time or hardware redundancy. However, previous software-based redundancy methods can be easily bypassed by white-box attackers, who can access and even modify all resources. In this paper, we propose a secure software redundancy technique, named table redundancy, that exploits the table diversity characteristic of white-box cryptography. We show how to apply this technique to a white-box AES implementation with a 128-bit key. To prevent significant performance degradation, the lookup tables that are not under DFA are shared, and table redundancy is applied only to the inner rounds under DFA. The outputs of the redundant computations are the SubBytes outputs multiplied by the MixColumns matrix in the 9-th round, encoded by different transformations. An XOR operation combines these redundant intermediate values, and the combined transformation is canceled out in the following shared part of the encryption. Our security analysis shows that the success probability of DFA against our table redundancy is negligible and that a brute-force attack becomes too costly. With three redundant computations, the total table size and the number of lookups are less than double those of an unprotected implementation.
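The cancellation idea can be seen in a toy sketch (hypothetical values, not the paper's actual tables): three redundant copies of an intermediate value are masked by different encodings chosen so that the encodings XOR to zero, so XOR-combining the copies removes them:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy intermediate value (stands in for an encoded 9-th round output).
value = secrets.token_bytes(16)

# Three different encodings with e1 ^ e2 ^ e3 = 0, so the combined
# transformation cancels out when the copies are XORed together.
e1 = secrets.token_bytes(16)
e2 = secrets.token_bytes(16)
e3 = xor_bytes(e1, e2)

copies = [xor_bytes(value, e) for e in (e1, e2, e3)]

# (v^e1) ^ (v^e2) ^ (v^e3) = v ^ (e1^e2^e3) = v
combined = copies[0]
for c in copies[1:]:
    combined = xor_bytes(combined, c)

assert combined == value  # a fault injected into any single copy breaks this
```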

###### Gabrielle Beck, Maximilian Zinkus, Matthew Green

ePrint Report
In this work we investigate the problem of automating the development of adaptive chosen ciphertext attacks on systems that contain vulnerable format oracles. Unlike previous attempts, which simply automate the execution of known attacks, we consider a more challenging problem: to programmatically derive a novel attack strategy, given only a machine-readable description of the plaintext verification function and the malleability characteristics of the encryption scheme. We present a new set of algorithms that use SAT and SMT solvers to reason deeply over the design of the system, producing an automated attack strategy that can entirely decrypt protected messages. Developing our algorithms required us to adapt techniques from a diverse range of research fields, as well as to explore and develop new ones. We implement our algorithms using existing theory solvers. The result is a practical tool called Delphinium that succeeds against real-world and contrived format oracles. To our knowledge, this is the first work to automatically derive such complex chosen ciphertext attacks.
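For intuition about what such a tool automates, here is a classic manual format-oracle attack (a hypothetical toy oracle, far simpler than the ones Delphinium reasons about): a stream cipher with an oracle that reveals only whether the last plaintext byte is zero:

```python
import secrets

# Toy setting: ct = pt XOR keystream, and a "format oracle" that only
# reveals whether the last plaintext byte equals 0x00.
keystream = secrets.token_bytes(16)
plaintext = secrets.token_bytes(16)
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

def format_oracle(ct):
    pt = bytes(c ^ k for c, k in zip(ct, keystream))
    return pt[-1] == 0x00

# Malleability: XORing delta into the last ciphertext byte XORs it into the
# last plaintext byte, so the oracle says "yes" exactly when delta == pt[-1].
recovered = None
for delta in range(256):
    mauled = ciphertext[:-1] + bytes([ciphertext[-1] ^ delta])
    if format_oracle(mauled):
        recovered = delta
        break

assert recovered == plaintext[-1]
```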

###### Nigel P. Smart, Titouan Tanguy

ePrint Report
We propose a mechanism for an m-party dishonest majority Multi-Party Computation (MPC) protocol to obtain the required pre-processing data (called Beaver triples) from a subset of a set of cloud service providers, providing a form of TaaS (Triples-as-a-Service). The service providers used by the MPC computing parties can be selected dynamically at the point the MPC computation is run, and the interaction between the MPC parties and the TaaS parties is via a single round of communication, logged on a public ledger. The TaaS is itself instantiated as an MPC protocol which produces the triples for a different access structure. Thus our protocol also acts as a translation mechanism between the secret sharing used by one MPC protocol and another.
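For background, a Beaver triple $(a, b, c = ab)$ lets parties multiply additively shared secrets with a single round of openings; a minimal two-party sketch over a toy prime field (illustrative only, not the paper's protocol):

```python
import secrets

P = 2**61 - 1  # toy prime modulus

def share(x):
    """Additively share x between two parties, mod P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

# Pre-processing (what a triples service would supply): a random triple c = a*b.
a, b = secrets.randbelow(P), secrets.randbelow(P)
c = (a * b) % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: multiply secrets x and y held in shared form.
x, y = 1234567, 7654321
x0, x1 = share(x); y0, y1 = share(y)

# Each party opens x - a and y - b (these reveal nothing about x or y).
d = (x0 - a0 + x1 - a1) % P
e = (y0 - b0 + y1 - b1) % P

# z = c + d*b + e*a + d*e, computed on shares (d*e added by party 0 only).
z0 = (c0 + d * b0 + e * a0 + d * e) % P
z1 = (c1 + d * b1 + e * a1) % P

assert (z0 + z1) % P == (x * y) % P  # reconstructed product is correct
```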

#### 22 August 2019

###### Brno, Czech Republic, 30 January - 3 April 2020

Event Calendar
Event date: 30 January to 3 April 2020

Submission deadline: 15 September 2019

Notification: 10 November 2019


#### 21 August 2019

###### Diego F. Aranha, Claudio Orlandi, Akira Takahashi, Greg Zaverucha

ePrint Report
Deterministic generation of per-signature randomness has been a widely accepted solution to mitigate the catastrophic risk of randomness failure in Fiat-Shamir type signature schemes. However, recent studies have practically demonstrated that such de-randomized schemes, including EdDSA, are vulnerable to differential fault attacks, which enable fault adversaries to recover the entire secret signing key, by artificially provoking randomness reuse or corrupting computation in other ways. In order to balance concerns of both randomness failures and the threat of fault injection, some signature designs are advocating a "hedged" derivation of the per-signature randomness, by hashing the secret key, message, and a nonce. Despite the growing popularity of the hedged paradigm in practical signature schemes, to the best of our knowledge, there has been no attempt to formally analyze the fault resilience of hedged signatures in the literature.

We perform a formal security analysis of the fault resilience of signature schemes constructed via the Fiat-Shamir transform. We propose a model to characterize bit-tampering fault attacks against hedged Fiat-Shamir type signatures, and investigate their impact across different steps of the signing operation. We prove that for some types of faults, attacks are mitigated by the hedged paradigm, while attacks remain possible for others. As a concrete case study, we then apply our results to Picnic2, a recent Fiat-Shamir type signature scheme using the hedged construction.
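The hedged derivation itself is simple to state: hash the secret key, the message, and a fresh nonce. A sketch (a hypothetical derivation for illustration, not any specific scheme's standardized one):

```python
import hashlib
import secrets

def hedged_nonce(secret_key: bytes, message: bytes) -> int:
    """Hedged per-signature randomness: hash(secret key, message, fresh nonce).

    If the fresh nonce repeats due to a randomness failure, this degrades to
    deterministic signing instead of reusing a nonce across distinct messages.
    (Sketch only; real schemes fix a domain-separated derivation.)
    """
    fresh = secrets.token_bytes(32)
    digest = hashlib.sha512(secret_key + message + fresh).digest()
    return int.from_bytes(digest, "big")

sk = secrets.token_bytes(32)
n1 = hedged_nonce(sk, b"message one")
n2 = hedged_nonce(sk, b"message one")
assert n1 != n2  # fresh randomness: same message yields different nonces
```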


###### Antonio Faonio, Dario Fiore, Javier Herranz, Carla Ràfols

ePrint Report
Re-randomizable RCCA-secure public key encryption (Rand-RCCA PKE) schemes reconcile the re-randomizability of ciphertexts with security against chosen-ciphertext attacks.

In this paper we give a new construction of a Rand-RCCA PKE scheme that is perfectly re-randomizable. Our construction is structure-preserving, can be instantiated over Type-3 pairing groups, and achieves better computation and communication efficiency than the state of the art perfectly re-randomizable schemes (e.g., Prabhakaran and Rosulek, CRYPTO'07).

Next, we revive the Rand-RCCA notion showing new applications where our Rand-RCCA PKE scheme plays a fundamental part: (1) We show how to turn our scheme into a publicly-verifiable Rand-RCCA scheme; (2) We construct a malleable NIZK with a (variant of) simulation soundness that allows for re-randomizability; (3) We propose a new UC-secure Verifiable Mix-Net protocol that is secure in the common reference string model. Thanks to the structure-preserving property, all these applications are efficient. Notably, our Mix-Net protocol is the most efficient universally verifiable Mix-Net (without random oracle) where the CRS is a uniformly random string of size independent of the number of senders. This property is essential when such protocols are used at large scale.
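The re-randomizability property itself can be illustrated with plain ElGamal over a toy group (a sketch only: plain ElGamal is malleable and not RCCA-secure; the paper's contribution is reconciling this property with RCCA security):

```python
import secrets

# Toy Schnorr group (illustration only): p = 2q + 1, and g = 4 generates
# the order-q subgroup. Real instantiations use pairing groups.
q = 1019
p = 2 * q + 1   # 2039
g = 4

sk = 1 + secrets.randbelow(q - 1)
h = pow(g, sk, p)                          # public key

def encrypt(m):
    r = 1 + secrets.randbelow(q - 1)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(ct):
    """Multiply in a fresh encryption of 1: same plaintext, fresh randomness."""
    c1, c2 = ct
    s = 1 + secrets.randbelow(q - 1)
    return (c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p)

def decrypt(ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), p - 2, p) % p

m = pow(g, 5, p)                            # message encoded as a group element
ct = encrypt(m)
ct2 = rerandomize(ct)
assert ct2 != ct and decrypt(ct2) == m      # new-looking ciphertext, same plaintext
```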


###### Mohsen Jahanbani, Zeinolabedin Norouzi, Nasour Bagheri

ePrint Report
Authenticated encryption schemes provide both confidentiality and integrity services simultaneously. Like all physical implementations of cryptographic systems, authenticated ciphers are susceptible to correlation power analysis (CPA). In this paper, for the first time, a three-step CPA attack against COLM, one of the winners of CAESAR, is presented to demonstrate its vulnerability. For this purpose, this authenticated encryption scheme is implemented on the FPGA of the SAKURA-G board and, by measuring and collecting 1,800 power traces, a successful CPA attack with a zero-value power model is mounted on it. In addition, a protected hardware architecture for COLM is proposed to make the design secure against first-order CPA attacks. To this end, a domain-oriented masking (DOM) scheme with two input/output shares is used to protect COLM. To verify the security of these countermeasures, we mount first- and second-order CPA attacks and a non-specific t-test on the protected COLM.
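As a generic illustration of how CPA works (a standard Hamming-weight model over simulated traces, not the paper's zero-value model or its FPGA measurement setup), the key guess whose predicted leakage correlates best with the traces wins:

```python
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)       # stand-in S-box; any fixed nonlinear table works here

HW = [bin(v).count("1") for v in range(256)]
KEY = 0x3C

# Simulated traces: leakage = HW(SBOX[pt ^ KEY]) + Gaussian noise.
pts = [random.randrange(256) for _ in range(1800)]
traces = [HW[SBOX[pt ^ KEY]] + random.gauss(0, 0.5) for pt in pts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

# CPA: rank key guesses by correlation between predicted and measured leakage.
best = max(range(256),
           key=lambda k: abs(pearson([HW[SBOX[pt ^ k]] for pt in pts], traces)))
assert best == KEY
```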

###### Ariel Gabizon, Zachary J. Williamson, Oana Ciobotaru

ePrint Report
zk-SNARK constructions that utilize an updatable universal structured reference string remove one of the main obstacles in deploying zk-SNARKs [GKMMM, Crypto 2018]. The important work of Maller et al. [MBKM, CCS 2019] presented $\mathsf{Sonic}$, the first potentially practical zk-SNARK with fully succinct verification for general arithmetic circuits with such an SRS. However, the version of $\mathsf{Sonic}$ enabling fully succinct verification still requires relatively high proof construction overheads. We present a universal SNARK construction with fully succinct verification and significantly lower prover running time (roughly 7.5x-20x fewer group exponentiations than [MBKM] in the fully succinct verifier mode, depending on circuit structure).

Similarly to [MBKM] and [Bootle et al., Eurocrypt 2016], we rely on a permutation argument based on Bayer and Groth [Eurocrypt 2012]. However, we focus on evaluations on a subgroup rather than coefficients of monomials, which enables simplifying both the permutation argument and the arithmetization step.


###### Vincenzo Iovino, Ivan Visconti

ePrint Report
The Fiat-Shamir (FS) transform is a well known and widely used technique to convert any constant-round public-coin honest-verifier zero-knowledge (HVZK) proof or argument system $CIPC=(Prov,Ver)$ into a non-interactive zero-knowledge (NIZK) argument system $NIZK=(NIZK.Prove, NIZK.Verify)$. The FS transform is secure in the random oracle (RO) model and is extremely efficient: it adds an evaluation of the RO for every message played by $Ver$.

While a major effort has been made to attack the soundness of the transform when the RO is instantiated with a ``secure'' hash function, here we focus on a different limitation of the FS transform that exists even when there is a secure instantiation of the random oracle: the soundness of $NIZK$ holds against polynomial-time adversarial provers only. Therefore, even when $CIPC$ is a proof system, $NIZK$ is only an argument system.

In this paper we propose a new transform from 3-round public-coin HVZK proof systems for several practical relations to NIZK proof systems in the RO model. Our transform outperforms the FS transform by protecting the honest verifier from unbounded adversarial provers with no restriction on the number of RO queries. The protocols our transform can be applied to are the ones for proving membership in the range of a one-way group homomorphism as defined by [Maurer - Design, Codes and Cryptography 2015], except that we additionally require the function to be endowed with a trapdoor and other natural properties. For instance, we obtain new efficient instantiations of NIZK proofs for relations related to quadratic residuosity and the RSA function.

As a byproduct, with our transform we obtain essentially for free the first efficient non-interactive zap (i.e., 1-round non-interactive witness indistinguishable proof system) for several practical languages in the non-programmable RO model and in an ideal-PUF model.

Our approach to NIZK proofs can be seen as an abstraction of the celebrated work of [Feige, Lapidot and Shamir - FOCS 1990].
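The FS transform itself is simple to state for Schnorr's protocol: the verifier's random challenge is replaced by a hash of the commitment and the message. A minimal sketch over a toy group (hypothetical small parameters; this illustrates the classic transform, not the paper's new one):

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with g = 4 of order q (illustration only).
q = 1019
p = 2 * q + 1
g = 4

sk = 1 + secrets.randbelow(q - 1)          # prover's witness
pk = pow(g, sk, p)                         # public statement

def fs_prove(msg):
    r = secrets.randbelow(q)
    a = pow(g, r, p)                       # first message (commitment)
    # Fiat-Shamir: the public-coin challenge is the RO applied to (a, msg).
    e = int.from_bytes(hashlib.sha256(str(a).encode() + msg).digest(), "big") % q
    z = (r + e * sk) % q                   # response
    return a, z

def fs_verify(msg, a, z):
    e = int.from_bytes(hashlib.sha256(str(a).encode() + msg).digest(), "big") % q
    return pow(g, z, p) == a * pow(pk, e, p) % p

a, z = fs_prove(b"hello")
assert fs_verify(b"hello", a, z)           # g^z == a * pk^e (mod p)
```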


###### Xavier Bonnetain

ePrint Report
MiMC and GMiMC are families of MPC-friendly block ciphers and hash functions. In this note, we show that the block ciphers MiMC-$2n/n$ (or Feistel-MiMC) and univariate GMiMC are vulnerable to an attack which allows a key recovery in $2^{n/2}$ operations. This attack, which is reminiscent of a slide attack, relies only on their weak key schedules and is independent of the round function ($x^3$ here) and the number of rounds.
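For reference, the structure in question (a toy sketch with tiny, hypothetical parameters): Feistel-MiMC adds the same key in every round of a cubing round function, and this weak key schedule is what the attack relies on:

```python
# Toy Feistel-MiMC sketch (tiny parameters, illustration only). The round
# function is x -> (x + k + c)^3 over a prime field, and the SAME key k is
# added in every round: the weak key schedule the attack exploits.
p = 2039                       # small prime field (real MiMC uses ~128-bit fields)
CONSTANTS = [17, 29, 101, 5]   # toy round constants (real MiMC uses many rounds)

def feistel_mimc_encrypt(left, right, k):
    for c in CONSTANTS:
        left, right = right, (left + pow((right + k + c) % p, 3, p)) % p
    return left, right

def feistel_mimc_decrypt(left, right, k):
    for c in reversed(CONSTANTS):
        left, right = (right - pow((left + k + c) % p, 3, p)) % p, left
    return left, right

ct = feistel_mimc_encrypt(1, 2, 7)
assert feistel_mimc_decrypt(*ct, 7) == (1, 2)   # round-trip sanity check
```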

###### Simon-Philipp Merz, Romy Minko, Christophe Petit

ePrint Report
The security proofs for isogeny-based undeniable signature schemes have been based primarily on two isogeny hardness assumptions: that the One-Sided Modified SSCDH problem and the One-More SSCDH problem are hard to solve. We challenge the validity of these assumptions, showing that both the decisional and computational variants of these problems can be solved in constant time. We further demonstrate an attack, applicable to two undeniable signature schemes, one of which was proposed at PQCrypto 2014, which allows an adversary to forge signatures in $2^{4\lambda/5}$ steps on a classical computer. This is an improvement over the expected classical security of $2^{\lambda}$, where $\lambda$ is the chosen security parameter.

###### Yongha Son

ePrint Report
We examine the current parameter choices of Round5 and rectify their treatment of the improved dual attack due to Albrecht [Albrecht-EC17]: one significant optimization of Albrecht's dual attack was not reflected in the Round5 parameter choices. Taking it into consideration, some parameter choices of Round5 cannot enjoy the claimed security level.

#### 20 August 2019

###### Prasanna Ravi, Sujoy Sinha Roy, Anupam Chattopadhyay, Shivam Bhasin

ePrint Report
In this article, we demonstrate practical side-channel assisted chosen-ciphertext attacks (CCA) on multiple CCA-secure lattice-based public-key encryption schemes (PKE) and key-encapsulation mechanisms (KEM). Most lattice-based PKE/KEMs suffer from decryption failures, and some of these schemes use forward error correction codes to reduce the failure probability. These error correcting codes, when used within public-key cryptographic schemes, involve computations with secret components and hence might leak sensitive side-channel information. In this work, we identify a side-channel vulnerability in constant-time error correcting codes which helps the attacker distinguish between faulty and valid codewords through EM/power side-channel information. We exploit the vulnerability to demonstrate a practical chosen-ciphertext attack on the CCA-secure Round5 algorithm, which uses a timing-attack-resistant error correcting code.

We further identify a generic side-channel vulnerability within the CCA transformation steps used in multiple CCA-secure lattice-based PKE/KEM schemes. Exploiting the vulnerability, we demonstrate a practical chosen-ciphertext attack which can be performed on multiple CCA-secure lattice-based PKE/KEM schemes.

We perform experimental validation of our attacks using electromagnetic measurements observed over optimized implementations of multiple NIST candidates taken from the pqm4 library, a benchmarking framework for post-quantum cryptographic implementations for the ARM Cortex-M4 microcontroller. We thus establish that (1) lattice-based schemes that use error correcting codes, whether constant-time or not, are vulnerable to power/EM side-channel attacks and (2) CCA-secure schemes are as insecure as their CPA-secure versions unless suitably masked against side-channel attacks.


###### Fabian Boemer, Anamaria Costache, Rosario Cammarota, Casimir Wierzynski

ePrint Report
In previous work, Boemer et al. introduced nGraph-HE, an extension to the Intel nGraph deep learning (DL) compiler, that enables data scientists to deploy models with popular frameworks such as TensorFlow and PyTorch with minimal code changes. However, the class of supported models was limited to relatively shallow networks with polynomial activations. Here, we introduce nGraph-HE2, which extends nGraph-HE to enable privacy-preserving inference on standard, pre-trained models using their native activation functions and number fields (typically real numbers). The proposed framework leverages the CKKS scheme, whose support for real numbers is friendly to data science, and a client-aided model to compute activation functions.

We first present CKKS-specific optimizations, enabling a 3x-88x runtime speedup for scalar encoding, and doubling the throughput through a novel use of CKKS plaintext packing into complex numbers. Second, we optimize ciphertext-plaintext addition and multiplication, yielding 2.6x- 4.2x runtime speedup. Third, we present two graph-level optimizations: lazy rescaling and depth-aware encoding.

Together, these optimizations enable state-of-the-art throughput of 1,998 images/s on the CryptoNets network. We also present homomorphic evaluation of (to our knowledge) the largest network to date, namely, pre-trained MobileNetV2 models on the ImageNet dataset, with 60.4%/82.7% top-1/top-5 accuracy and an amortized runtime of 381 ms/image.
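The complex-packing trick can be illustrated without any encryption: a CKKS slot holds a complex number, so two real payloads can share one slot as $a + bi$, doubling throughput for operations that act slot-wise (a plain-arithmetic sketch; ciphertext-ciphertext multiplication would mix the two payloads):

```python
# Pack two real vectors into one complex vector: slot i holds x[i] + 1j*y[i].
x = [1.5, -2.0, 3.25]
y = [0.5, 4.0, -1.0]
packed = [a + 1j * b for a, b in zip(x, y)]

# Slot-wise addition and real-scalar scaling act on both payloads at once.
scaled = [2.0 * v for v in packed]

# Unpack: the real part carries x's result, the imaginary part carries y's.
x_out = [v.real for v in scaled]
y_out = [v.imag for v in scaled]
assert x_out == [3.0, -4.0, 6.5]
assert y_out == [1.0, 8.0, -2.0]
```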


#### 19 August 2019

###### M Sazadur Rahman, Adib Nahiyan, Sarah Amir, Fahim Rahman, Farimah Farahmandi, Domenic Forte, Mark Tehranipoor

ePrint Report
Logic locking has emerged as a promising solution against IP piracy and modification by untrusted entities in the integrated circuit design process. However, its security is challenged by boolean satisfiability (SAT) based attacks. Criteria critical to SAT attack success on obfuscated circuits include scan architecture access for the attacker and/or that the circuit under attack is combinational. To address this issue, we propose a dynamically-obfuscated scan chain (DOSC) technique that resists SAT attacks on obfuscated sequential designs by restricting scan access to authorized users only.

###### Navid Ghaedi Bardeh

ePrint Report
In this paper, we study the recently proposed exchange attack in an adaptive setting. As expected, this leads to a better 6-round key-independent distinguisher in terms of data and computational complexities. More specifically, our 6-round adaptive distinguisher requires $2^{83}$ chosen plaintexts and $2^{83}$ adaptively chosen ciphertexts, and has a computational cost of $2^{83}$ encryptions.

###### Max Hoffmann, Michael Klooß, Andy Rupp

ePrint Report
Zero-knowledge arguments have become practical and widely used, especially in the world of blockchain, for example in Zcash.

This work revisits zero-knowledge proofs in the discrete logarithm setting. First, we identify and carve out basic techniques (partly used implicitly before) to optimize proofs in this setting. In particular, the linear combination of protocols is a useful tool to obtain zero-knowledge and/or reduce communication. With these techniques, we are able to devise zero-knowledge variants of the logarithmic communication arguments by Bootle et al. (EUROCRYPT '16) and Bünz et al. (S\&P '18), thereby introducing almost no overhead. We then construct a conceptually simple commit-and-prove argument for satisfiability of a set of quadratic equations. Unlike previous work, we are not restricted to rank 1 constraint systems (R1CS). This is, to the best of our knowledge, the first work demonstrating that general quadratic constraints, not just R1CS, are a natural relation in the dlog (or ideal linear commitment) setting. This enables new possibilities for optimisation, as, e.g., any degree-$n^2$ polynomial $f(X)$ can now be ``evaluated'' with at most $2n$ quadratic constraints.
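The degree-$n^2$ claim follows a baby-step/giant-step pattern: compute $x^2,\dots,x^n$, set $y=x^n$, compute $y^2,\dots,y^{n-1}$, and then every coefficient multiplies a product $x^i y^j$ of already-computed wires, so the final sum is a single general quadratic (non-R1CS) constraint. A counting sketch (my reconstruction of the count, not the paper's protocol):

```python
def eval_with_quadratic_constraints(coeffs, x, n):
    """Evaluate a degree < n*n polynomial with at most 2n quadratic constraints.

    Baby steps x^2..x^{n-1}, giant step y = x^n and y^2..y^{n-1}; each product
    below is quadratic in already-computed wires, so it counts as one
    constraint. The final sum of c_{i+nj} * x^i * y^j is a single general
    quadratic-form constraint (this is where going beyond R1CS helps).
    """
    constraints = 0
    xs = [1, x]
    for _ in range(n - 2):
        xs.append(xs[-1] * x); constraints += 1      # x^2 .. x^{n-1}
    y = xs[-1] * x; constraints += 1                 # y = x^n
    ys = [1, y]
    for _ in range(n - 2):
        ys.append(ys[-1] * y); constraints += 1      # y^2 .. y^{n-1}
    constraints += 1                                  # one quadratic-form constraint
    value = sum(coeffs[i + n * j] * xs[i] * ys[j]
                for j in range(n) for i in range(n) if i + n * j < len(coeffs))
    return value, constraints

n = 8
coeffs = list(range(n * n))                 # a degree n^2 - 1 polynomial
x = 3
value, constraints = eval_with_quadratic_constraints(coeffs, x, n)
assert value == sum(c * x**k for k, c in enumerate(coeffs))
assert constraints <= 2 * n                 # here: 2n - 2
```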

Our protocols are modular. We easily construct an efficient, logarithmic size shuffle proof, which can be used in electronic voting.

Additionally, we take a closer look at quantitative security measures, e.g. the efficiency of an extractor. We formalise short-circuit extraction, which allows us to give tighter bounds on extractor efficiency.


###### Ling Ren

ePrint Report
This paper gives a simple analysis of Nakamoto consensus.