Here you can see all recent updates to the IACR webpage.

23 September 2018

ePrint Report
Pre- and post-quantum Diffie-Hellman from groups, actions, and isogenies
Benjamin Smith

Diffie-Hellman key exchange is at the foundations of public-key cryptography, but conventional group-based Diffie-Hellman is vulnerable to Shor's quantum algorithm. A range of "post-quantum Diffie-Hellman" protocols have been proposed to mitigate this threat, including the Couveignes, Rostovtsev-Stolbunov, SIDH, and CSIDH schemes, all based on the combinatorial and number-theoretic structures formed by isogenies of elliptic curves. Pre- and post-quantum Diffie-Hellman schemes resemble each other at the highest level, but the further down we dive, the more differences emerge: differences that are critical when we use Diffie-Hellman as a basic component in more complicated constructions. In this survey we compare and contrast pre- and post-quantum Diffie-Hellman algorithms, highlighting some important subtleties.
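
The group-based exchange that the survey takes as its classical baseline can be sketched in a few lines. This is a toy illustration only: the Mersenne prime and generator below are assumptions chosen for readability, not secure parameters.

```python
import secrets

# Toy Diffie-Hellman over a prime field.  The Mersenne prime 2^127 - 1 is far
# too small for real-world security; it only keeps the example readable.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
A = pow(g, a, p)                   # Alice's public value, sent to Bob
B = pow(g, b, p)                   # Bob's public value, sent to Alice

k_alice = pow(B, a, p)             # Alice computes (g^b)^a
k_bob = pow(A, b, p)               # Bob computes (g^a)^b
assert k_alice == k_bob            # both parties now hold g^(ab) mod p
```

Shor's algorithm recovers a from A in quantum polynomial time, which is exactly why the isogeny-based replacements surveyed above substitute the group exponentiation with a group action on a set of curves.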

ePrint Report
Remote Inter-Chip Power Analysis Side-Channel Attacks at Board-Level
Falk Schellenberg, Dennis R.E. Gnad, Amir Moradi, Mehdi B. Tahoori

The current practice in board-level integration is to incorporate chips and components from numerous vendors. A fully trusted supply chain for all used components and chipsets is an important, yet extremely difficult to achieve, prerequisite to validate a complete board-level system for safe and secure operation. An increasing risk is that most chips nowadays run software or firmware, typically updated throughout the system lifetime, making it practically impossible to validate the full system at every given point in the manufacturing, integration, and operational life cycle. This risk is elevated in devices that run third-party firmware. In this paper, we show that an FPGA used as a common accelerator in various boards can be reprogrammed by software to introduce a sensor, suitable as a remote power-analysis side-channel attack vector at the board level. We show successful power analysis attacks from one FPGA on the board to another chip implementing RSA and AES cryptographic modules. Since the sensor is only mapped through firmware, this threat is very hard to detect, because data can be exfiltrated without requiring inter-chip communication between victim and attacker. Our results also demonstrate the potential vulnerability whereby any untrusted chip on the board can launch such attacks on the remaining system.

ePrint Report
Spread: a new layer for profiled deep-learning side-channel attacks
Christophe Pfeifer, Patrick Haddad

Recent publications, such as [10] and [13], exploit the advantages of deep-learning techniques in performing side-channel attacks. One example of the side-channel community's interest in such techniques is the release of the public ASCAD database, which provides power consumption traces of a masked 128-bit AES implementation and is meant to be a common benchmark for comparing the performance of deep-learning techniques. In this paper, we propose two ways of improving the effectiveness of such attacks. The first is a new kind of layer for neural networks, called the "Spread" layer, which is effective at tackling side-channel attack problems, since it reduces the number of layers required and speeds up the learning phase. Our second proposal is an efficient way to correct the neural network's predictions, based on its confusion matrix. We have validated both methods on the ASCAD database and conclude that they reduce the number of traces required for attacks to succeed. In this article, we show their effectiveness for first-order and second-order attacks.

Group signature is a useful cryptographic primitive that allows every group member to sign messages on behalf of the group they belong to. Namely, group signature allows a group member to sign any message anonymously, without revealing his/her specific identity. However, group signature may let signers abuse their signing rights if the scheme provides no measures to prevent such abuse. Therefore, the group manager must be able to trace (or reveal) the identity of a signer from a signature when the outcome of a signing needs to be arbitrated, and revoked group members must fully lose the ability to sign messages on behalf of the group they belong to. A practical model meeting these requirements is verifier-local revocation, which supports the revocation of group members. In this model, verifiers receive group member revocation messages from the trusted authority when the relevant signatures need to be verified.
Although many group signature schemes have currently been proposed, most of them are constructed from pairings. In this paper, we present an efficient group signature scheme without pairings under the verifier-local revocation model, based on the modified EDL signature (first proposed by D. Chaum et al. at Crypto '92). Compared with other group signature schemes, the proposed scheme does not employ pairing computations and has constant signing time and signature size; its security can be reduced to the computational Diffie-Hellman (CDH) assumption in the random oracle model.
Also, we give a formal security model for group signature and prove that the proposed scheme has the properties of traceability and anonymity.

We would like to compute RSA signatures with the help of a Hardware Security Module (HSM). But what can we do when we want to use a certain public exponent that the HSM does not allow or support? Surprisingly, this scenario comes up in real-world settings such as code-signing of Intel SGX enclaves. Intel SGX enclaves have to be signed in order to execute in release mode, using a 3072-bit RSA signature scheme with a particular public exponent. However, we encountered commercial hardware security modules that do not support storing RSA keys corresponding to this exponent.
We ask whether it is possible to overcome such a limitation of an HSM and answer it in the affirmative (under stated assumptions). We show how to convert RSA signatures corresponding to one public exponent, to valid RSA signatures corresponding to another exponent. We define security and show that it is not compromised by the additional public knowledge available to an adversary in this setting.
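
To make the constraint concrete, the sketch below shows textbook hash-then-RSA signing where the public exponent is a free parameter fixed at key-generation time; the verification equation is what ties a signature to one specific exponent. This is a minimal illustration of the setting with toy Mersenne primes and no padding, and it is not the paper's exponent-conversion technique.

```python
import hashlib

# Textbook "hash-then-RSA" with a caller-chosen public exponent e.
# Toy Mersenne primes and no padding: for illustrating the setting only.
p = 2**31 - 1                 # Mersenne prime (insecure size)
q = 2**61 - 1                 # Mersenne prime (insecure size)
n, phi = p * q, (p - 1) * (q - 1)
e = 65537                     # the desired public exponent
d = pow(e, -1, phi)           # private exponent; requires gcd(e, phi) == 1

def sign(msg: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h   # verification is bound to this exact e

s = sign(b"enclave measurement")
assert verify(b"enclave measurement", s)
```

An HSM that only stores keys for some other exponent e' produces values satisfying sig^e' = h, which fail this check; bridging that gap is precisely the conversion problem the paper solves.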

ePrint Report
On QA-NIZK in the BPK Model
Behzad Abdolmaleki, Helger Lipmaa, Janno Siim, Michał Zając

While the CRS model is widely accepted for construction of non-interactive zero knowledge (NIZK) proofs, from the practical viewpoint, a very important question is to minimize the trust needed from the creators of the CRS. Recently, Bellare et al. defined subversion-resistance (security in the case the CRS creator may be malicious) for NIZK. First, we observe that subversion zero knowledge (Sub-ZK) in the CRS model corresponds to no-auxiliary-string non-black-box NIZK (also known as nonuniform NIZK) in the Bare Public Key (BPK) model. Due to well-known impossibility results, this observation provides a simple proof that the use of non-black-box techniques is needed to obtain Sub-ZK. Second, we prove that the most efficient known QA-NIZK for linear subspaces by Kiltz and Wee is nonuniform zero knowledge in the BPK model under two alternative novel knowledge assumptions, both secure in the subversion generic bilinear group model. We prove that (for a different set of parameters) a slightly less efficient variant of Kiltz-Wee is nonuniform zero knowledge in the BPK model under a known knowledge assumption that is also secure in the subversion generic bilinear group model.

ePrint Report
Identity Confidentiality in 5G Mobile Telephony Systems
Haibat Khan, Benjamin Dowling, Keith M. Martin

The 3rd Generation Partnership Project (3GPP) recently proposed a standard for 5G telecommunications, containing an identity protection scheme meant to address the long-outstanding privacy problem of permanent subscriber-identity disclosure. The proposal essentially consists of two disjoint phases: an identification phase, followed by the establishment of a security context between mobile subscribers and their service providers via symmetric-key based authenticated key agreement. Currently, 3GPP proposes to protect the identification phase with a public-key based solution, and while the current proposal is secure against a classical adversary, the same is not true of a quantum adversary. 5G specifications target very long-term deployment scenarios (well beyond the year 2030); therefore, it is imperative that quantum-secure alternatives be part of the current specification. In this paper, we present such an alternative scheme for the problem of private identification protection. Our solution is compatible with the current 5G specifications, depending mostly on cryptographic primitives already specified in 5G, adding minimal performance overhead and requiring minor changes in existing message structures. Finally, we provide a detailed formal security analysis of our solution in a novel security framework.

Secure message transmission and Byzantine agreement have been studied extensively in incomplete networks. However, information theoretically secure multiparty computation (MPC) in incomplete networks is less well understood. In this paper, we characterize the conditions under which a pair of parties can compute oblivious transfer (OT) information theoretically securely against a general adversary structure in an incomplete network of reliable, private channels. We provide characterizations for both semi-honest and malicious models. A consequence of our results is a complete characterization of networks in which a given subset of parties can compute any functionality securely with respect to an adversary structure in the semi-honest case and a partial characterization in the malicious case.

ePrint Report
Enhanced Security of Attribute-Based Signatures
Johannes Blömer, Fabian Eidens, Jakob Juhnke

Despite the recent advances in attribute-based signatures (ABS), no schemes have yet been considered under a strong privacy definition. We enhance the security of ABS by presenting a strengthened simulation-based privacy definition and the first attribute-based signature functionality in the framework of universal composability (UC). Additionally, we show that the UC definition is equivalent to our strengthened experiment-based security definitions.

To achieve this we rely on a general unforgeability and a simulation-based privacy definition that is stronger than standard indistinguishability-based privacy. Further, we show that two extant concrete ABS constructions satisfy this simulation-based privacy definition and are therefore UC secure. The two concrete constructions are the schemes by Sakai et al. (PKC'16) and by Maji et al. (CT-RSA'11). Additionally, we identify the common feature that allows these schemes to meet our privacy definition, giving us further insights into the security requirements of ABS.


ePrint Report
TACHYON: Fast Signatures from Compact Knapsack
Rouzbeh Behnia, Muslum Ozgur Ozmen, Attila A. Yavuz, Mike Rosulek

We introduce a simple yet efficient digital signature scheme which offers the promise of post-quantum security. Our scheme, named $\texttt{TACHYON}$, is based on a novel approach for extending one-time hash-based signatures to (polynomially bounded) many-time signatures, using the additively homomorphic properties of generalized compact knapsack functions. Our design permits $\texttt{TACHYON}$ to achieve several key properties. First, its signing and verification algorithms are the fastest among its current counterparts with a higher level of security. This allows $\texttt{TACHYON}$ to achieve the lowest end-to-end delay among its counterparts, while also making it suitable for resource-limited signers. Second, its private keys can be as small as $\kappa$ bits, where $\kappa$ is the desired security level. Third, unlike most of its lattice-based counterparts, $\texttt{TACHYON}$ does not require any Gaussian sampling during signing, and is therefore free from side-channel attacks targeting this process. We also explore various speed and storage trade-offs for $\texttt{TACHYON}$, thanks to its highly tunable parameters. Some of these trade-offs can speed up $\texttt{TACHYON}$ signing in exchange for larger keys, thereby permitting $\texttt{TACHYON}$ to further improve its end-to-end delay.
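
The one-time hash-based signatures that such schemes extend are exemplified by the classic Lamport construction, sketched below. This is the textbook starting point only, not the TACHYON design itself.

```python
import hashlib
import secrets

# Lamport one-time signature: the classic hash-based OTS that many-time
# schemes generalize.  One key pair must sign at most one message.
H = lambda data: hashlib.sha256(data).digest()

def keygen(bits=256):
    # Two random 32-byte preimages per message bit; public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(x0), H(x1)) for x0, x1 in sk]
    return sk, pk

def bits_of(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(bits_of(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits_of(msg))))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
```

The obvious costs (large keys and signatures, one-time use) are exactly what the homomorphic knapsack approach above is designed to overcome.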

ePrint Report
New Techniques for Efficient Trapdoor Functions and Applications
Sanjam Garg, Romain Gay, Mohammad Hajiabadi

We develop techniques for constructing trapdoor functions (TDFs) with short image size and advanced security properties. Our approach builds on the recent framework of Garg and Hajiabadi [CRYPTO 2018]. As applications of our techniques, we obtain

-- The first construction of lossy TDFs based on the Decisional Diffie-Hellman (DDH) assumption with image size linear in input size, while retaining the lossiness rate of [Peikert-Waters STOC 2008].

-- The first construction of deterministic-encryption schemes for block-source inputs (both for the CPA and CCA cases) based on the Computational Diffie-Hellman (CDH) assumption. Moreover, by applying our efficiency-enhancing techniques, we obtain CDH-based schemes with ciphertext size linear in plaintext size.

Prior to our work, all DDH-based constructions of lossy TDFs had image size quadratic in input size. Moreover, all previous constructions of deterministic encryption based even on the stronger DDH assumption incurred a quadratic gap between the ciphertext and plaintext sizes. At a high level, we break the previous quadratic barriers by introducing novel techniques for encoding input bits via hardcore output bits with the use of erasure-resilient codes. All previous schemes used group elements for encoding input bits, resulting in quadratic blowup.


ePrint Report
Non-profiled Mask Recovery: the impact of Independent Component Analysis
Si Gao, Elisabeth Oswald, Hua Chen, Wei Xi

As one of the most prevalent SCA countermeasures, masking schemes are designed to defeat a broad range of side-channel attacks. An attack vector suitable for low-order masking schemes is to try to directly determine the mask(s) for each trace, exploiting the fact that an attacker often has access to several leakage points of the respectively used mask(s). Good examples of low-order masking implementations are those based on table re-computation, as well as the masking scheme in DPA Contest V4.2. We propose a novel approach based on Independent Component Analysis (ICA) to efficiently combine the information from several leakage points to reconstruct the respective masks for each trace, and show that it is a competitive attack vector in practice.

We present two simple backdoors that can be implemented into Maurer's unified zero-knowledge protocol. Thus, we show that a high level abstraction can replace individual backdoors embedded into protocols for proving knowledge of a discrete logarithm (e.g. the Schnorr and Girault protocols), protocols for proving knowledge of an $e^{th}$-root (e.g. the Fiat-Shamir and Guillou-Quisquater protocols), protocols for proving knowledge of a discrete logarithm representation (e.g. the Okamoto protocol) and protocols for proving knowledge of an $e^{th}$-root representation.
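
Among the protocols named above, the Schnorr protocol is the simplest instance of Maurer's unified framework. The honest (non-backdoored) version, made non-interactive via Fiat-Shamir, can be sketched as follows; the toy group parameters are assumptions for readability only.

```python
import hashlib
import secrets

# Honest Schnorr proof of knowledge of a discrete logarithm, the simplest
# instance of Maurer's unified zero-knowledge protocol.  Toy group: insecure.
p = 2**127 - 1        # Mersenne prime; subgroup structure ignored for brevity
g = 3

x = secrets.randbelow(p - 1)       # prover's secret
y = pow(g, x, p)                   # public statement: y = g^x

# Commit, derive the challenge via Fiat-Shamir, respond.
k = secrets.randbelow(p - 1)
t = pow(g, k, p)
c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % (p - 1)
s = (k + c * x) % (p - 1)

# Verifier's check: g^s == t * y^c  (holds since s = k + c*x in the exponent).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

A backdoor of the kind the paper studies would tamper with how k or c is generated so that the verifier (or a third party) can recover x from transcripts; the abstraction lets one backdoor template cover every protocol in the list above.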

ePrint Report
Higher-Order DCA against Standard Side-Channel Countermeasures
Andrey Bogdanov, Matthieu Rivain, Philip S. Vejre, Junwei Wang

At CHES 2016, Bos $\textit{et al.}$ introduced $\textit{differential computational analysis}$ (DCA) as an attack on white-box software implementations of block ciphers. This attack builds on the same principles as DPA in the classical side-channel context, but uses traces consisting of plain values computed by the implementation during execution. This attack was shown to be able to recover the key of many existing AES white-box implementations.

The $\textit{DCA adversary}$ is $\textit{passive}$, and so does not exploit the full power of the white-box setting, implying that many white-box schemes are insecure even in a weaker setting than the one they were designed for. An important problem is therefore how to develop implementations which are resistant to this attack. A natural approach is to apply standard side-channel countermeasures such as $\textit{masking}$ and $\textit{shuffling}$. In this paper, we study the security brought by this approach against the DCA adversary. We show that under some necessary conditions on the underlying randomness generation, these countermeasures provide resistance to standard (first-order) DCA. Furthermore, we introduce $\textit{higher-order DCA}$, and analyze the security of the countermeasures against this attack. This attack is enhanced by introducing a $\textit{multivariate}$ version based on the maximum likelihood approach. We derive analytic expressions for the complexity of the attacks which are backed up through extensive attack experiments. As a result, we can quantify the security level of a masked and shuffled implementation in the (higher-order) DCA setting. This enables a designer to choose appropriate implementation parameters in order to obtain the desired level of protection against passive DCA attacks.
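
The first-order attack that these countermeasures are meant to stop can be sketched on simulated traces: each "sample" is the Hamming weight of an S-box output plus noise, and the key is recovered by Pearson correlation. The 4-bit PRESENT S-box and noise model below are assumptions for compactness; this shows the DPA/DCA principle, not the paper's higher-order multivariate attack.

```python
import random

# First-order correlation attack on simulated computation traces.
S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box
HW = [bin(v).count("1") for v in range(16)]
key = 0xA
rng = random.Random(1)

pts = [rng.randrange(16) for _ in range(2000)]
# Each trace sample leaks HW(S[pt ^ key]) plus Gaussian noise.
traces = [HW[S[pt ^ key]] + rng.gauss(0, 0.5) for pt in pts]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Rank key guesses by correlation between predicted and observed leakage.
guess = max(range(16),
            key=lambda k: abs(corr([HW[S[pt ^ k]] for pt in pts], traces)))
assert guess == key
```

Masking destroys this first-order correlation, which is why the attack must be lifted to combinations of several samples, i.e. the higher-order DCA studied in the paper.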


22 September 2018

ePrint Report
S-Mbank: Secure Mobile Banking Authentication Scheme Using Signcryption, Pair Based Text Authentication, and Contactless Smartcard
Dea Saka Kurnia Putra, Mohamad Ali Sadikin, Susila Windarta

Nowadays, mobile banking has become a popular tool with which consumers can conduct financial transactions such as shopping, monitoring account balances, transferring funds, and making other payments. Consumers' dependency on mobile devices makes them take more interest in mobile banking. The use of a one-time password sent to the user's mobile phone by short message service (SMS) is a vulnerability that we aim to solve by proposing a new scheme called S-Mbank. We replace one-time-password authentication with a contactless smart card to prevent attackers from using the unencrypted message sent to the user's mobile phone. Moreover, this addresses the vulnerability of a spoofer sending an SMS while pretending to be the bank's server. The contactless smart card is proposed because of its flexibility and security: it is easier to carry in a wallet than common passcode generators. Replacing SMS-based authentication with a contactless smart card removes the vulnerability of unauthorized users acting as legitimate users to exploit a mobile banking account. Besides that, we use a public-private key pair and a PIN to provide two-factor authentication and mutual authentication, and a signcryption scheme to provide computational efficiency. Pair-based text authentication is also proposed for the login process as a solution to shoulder-surfing attacks. We use the Scyther tool to analyze the security of the authentication protocol in the S-Mbank scheme. With the proposed scheme, we are able to provide stronger security protection for mobile banking services.

ePrint Report
Poly-Logarithmic Side Channel Rank Estimation via Exponential Sampling
Liron David, Avishai Wool

Rank estimation is an important tool for side-channel evaluation laboratories. It allows one to estimate the remaining security after an attack has been performed, quantified as the time complexity and memory consumption required to brute-force the key given the leakages as probability distributions over $d$ subkeys (usually key bytes). These estimations are particularly useful where the key is not reachable by exhaustive search.
We propose ESrank, the first rank estimation algorithm that enjoys provable poly-logarithmic time- and space-complexity, which also achieves excellent practical performance. Our main idea is to use exponential sampling to drastically reduce the algorithm's complexity. Importantly, ESrank is simple to build from scratch, and requires no algorithmic tools beyond a sorting function. After rigorously bounding the accuracy, time and space complexities, we evaluated the performance of ESrank on a real SCA data corpus, and compared it to the currently-best histogram-based algorithm. We show that ESrank gives excellent rank estimation (with roughly a 1-bit margin between lower and upper bounds), with a performance that is on-par with the Histogram algorithm: a run-time of under 1 second on a standard laptop using 6.5 MB RAM.
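
What a rank estimator computes can be seen by exact enumeration on a toy instance: the rank of the true key is the number of full-key candidates the attack ranks above it. The probability lists below are made-up numbers for two 4-value subkeys; exact enumeration blows up for 16 key bytes, which is the gap ESrank's exponential sampling closes.

```python
import itertools

# Exact key rank for a toy attack result over 2 subkeys of 4 values each.
P = [[0.5, 0.3, 0.15, 0.05],     # attack's probabilities for subkey 0
     [0.4, 0.35, 0.2, 0.05]]     # attack's probabilities for subkey 1
true_key = (1, 0)                # known to the evaluator, not the attacker

p_true = P[0][true_key[0]] * P[1][true_key[1]]
# Rank = how many full keys the optimal brute-forcer tries before the true key.
rank = sum(1 for k in itertools.product(range(4), repeat=2)
           if P[0][k[0]] * P[1][k[1]] > p_true)
print(rank)   # -> 2
```

For realistic parameters ($d = 16$ subkeys of 256 values) the product space has $2^{128}$ elements, so practical algorithms must bound this count without enumerating it.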

ePrint Report
Output Compression, MPC, and iO for Turing Machines
Saikrishna Badrinarayanan, Rex Fernando, Venkata Koppula, Amit Sahai, Brent Waters

In this work, we study the fascinating notion of output-compressing randomized encodings for Turing Machines, in a shared randomness model. In this model, the encoder and decoder have access to a shared random string, and the efficiency requirement is that the size of the encoding must be independent of the running time and output length of the Turing Machine on the given input, while the length of the shared random string is allowed to grow with the length of the output. We show how to construct output-compressing randomized encodings for Turing machines in the shared randomness model, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}.

We then show interesting implications of the above result to basic feasibility questions in the areas of secure multiparty computation (MPC) and indistinguishability obfuscation (iO):

1. Compact MPC for Turing Machines in the Random Oracle Model: In the context of MPC, we consider the following basic feasibility question: does there exist a malicious-secure MPC protocol for Turing Machines whose communication complexity is independent of the running time and output length of the Turing Machine when executed on the combined inputs of all parties? We call such a protocol a compact MPC protocol. Hubacek and Wichs [HW15] showed, via an incompressibility argument, that even for the restricted setting of circuits it is impossible to construct a maliciously secure two-party computation protocol in the plain model where the communication complexity is independent of the output length. In this work, we show how to evade this impossibility by compiling any (non-compact) MPC protocol in the plain model to a compact MPC protocol for Turing Machines in the Random Oracle Model, assuming output-compressing randomized encodings in the shared randomness model.

2. Succinct iO for Turing Machines in the Shared Randomness Model: In all existing constructions of iO for Turing Machines, the size of the obfuscated program grows with a bound on the input length. In this work, we show how to construct an iO scheme for Turing Machines in the shared randomness model where the size of the obfuscated program is independent of a bound on the input length, assuming iO for circuits and any assumption in the set {LWE, DDH, Nth Residuosity}.


ePrint Report
Multiplicative Masking for AES in Hardware
Lauren De Meyer, Oscar Reparaz, Begül Bilgin

Hardware-masked AES designs usually rely on Boolean masking and perform the computation of the S-box using the tower-field decomposition. On the other hand, splitting sensitive variables in a multiplicative way is more amenable to the computation of the AES S-box, as noted by Akkar and Giraud. However, multiplicative masking needs to be implemented carefully so as not to be vulnerable to first-order DPA with a zero-value power model. Up to now, sound higher-order multiplicative masking schemes have been implemented only in software. In this work, we demonstrate the first hardware implementation of AES using multiplicative masks. The method is tailored to be secure even if the underlying gates are not ideal and glitches occur in the circuit. We detail the design process of first- and second-order secure AES-128 cores, which result in the smallest die area to date among previous state-of-the-art masked AES implementations with comparable randomness cost and latency. The first- and second-order masked implementations improve by 29% and 18%, respectively, over these designs. We deploy our construction on a Spartan-6 FPGA and perform a side-channel evaluation. No leakage is detected with up to 50 million traces for both our first- and second-order implementations. For the latter, this holds both for univariate and bivariate analysis.
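
The multiplicative-masking idea for the GF(2^8) inversion at the heart of the AES S-box can be sketched in software: a sensitive value x is shared as x·m for a random nonzero mask m, the inversion is performed on the masked value, and a correction by m^2 keeps the result masked. This illustrates the Akkar-Giraud principle only, not the paper's glitch-resistant hardware design.

```python
import secrets

def gmul(a, b):
    # Carry-less multiply in GF(2^8) modulo the AES polynomial 0x11B.
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return r

def ginv(a):
    # a^254 = a^(-1) in GF(2^8)'s multiplicative group (and 0 maps to 0).
    r = 1
    for _ in range(254):
        r = gmul(r, a)
    return r

x = 0x53                             # sensitive intermediate value
m = secrets.randbelow(255) + 1       # random nonzero multiplicative mask
xm = gmul(x, m)                      # masked value x * m

# Invert the masked value: (x*m)^(-1) = x^(-1) * m^(-1), then multiply by m^2
# so the result x^(-1) * m stays masked under the same mask m.
inv_masked = gmul(ginv(xm), gmul(m, m))
assert gmul(inv_masked, ginv(m)) == ginv(x)

# Caveat: if x == 0 then xm == 0 for every mask m -- the zero-value leak
# that makes naive multiplicative masking vulnerable to first-order DPA.
```

Handling that zero-value case soundly, in hardware and in the presence of glitches, is the core difficulty the paper addresses.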

Mixing networks are protocols that allow a set of senders to send messages anonymously. Such protocols are fundamental building blocks to achieve privacy in a variety of applications, such as anonymous e-mail, anonymous payments, and electronic voting.

Back in 2002, Golle et al. proposed a new concept of mixing network, called optimistic mixing, that allows for fast mixing when all the parties execute the protocol honestly. If, on the other hand, one or more mix-servers cheat, then the attack is recognized and one can back up to a different, slow mix-net.

Unfortunately, Abe and Imai (ACISP'03) and independently Wikström (SAC'03) showed several major flaws in the optimistic protocol of Golle et al. In this work, we give another look at optimistic mixing networks. Our contribution is mainly threefold. First, we give formal definitions for optimistic mixing in the UC model. Second, we propose a compiler for obtaining a UC-secure mixing network by combining an optimistic mixing with a traditional mixing protocol as backup mixing. Third, we propose an efficient UC-secure realization of optimistic mixing based on the DDH assumption in the non-programmable random oracle model. As a key ingredient of our construction, we give a new randomizable replayable-CCA secure public key encryption (PKE) that outperforms in efficiency all previous schemes. We believe this result is of independent interest.
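
A single re-encryption mix step can be sketched with plain ElGamal: each ciphertext is re-randomized and the batch is shuffled, unlinking outputs from inputs. The toy group parameters are assumptions for readability, and plain ElGamal lacks the replayable-CCA security the paper's construction requires; this only illustrates the mixing operation itself.

```python
import random
import secrets

# One mix-server step: re-randomize ElGamal ciphertexts, then shuffle.
p = 2**127 - 1                      # toy prime field; insecure parameters
g = 3
sk = secrets.randbelow(p - 1)
pk = pow(g, sk, p)

def enc(m):
    r = secrets.randbelow(p - 1)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerand(ct):
    # Multiply in a fresh encryption of 1: same plaintext, fresh randomness.
    c1, c2 = ct
    s = secrets.randbelow(p - 1)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def dec(ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), p - 2, p) % p

msgs = [11, 22, 33]
batch = [enc(m) for m in msgs]
mixed = [rerand(ct) for ct in batch]
random.shuffle(mixed)               # the mix server's secret permutation
assert sorted(dec(ct) for ct in mixed) == msgs
```

An optimistic mix additionally attaches cheap proofs that the multiset of plaintexts was preserved, falling back to a slow, fully proven mix only when those checks fail.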


ePrint Report
Helix: A Scalable and Fair Consensus Algorithm Resistant to Ordering Manipulation
Avi Asayag, Gad Cohen, Ido Grayevsky, Maya Leshkowitz, Ori Rottenstreich, Ronen Tamari, David Yakira

We present Helix, a Byzantine fault tolerant and scalable consensus algorithm for fair ordering of transactions among nodes in a distributed network. In Helix, one among the network nodes proposes a potential set of successive transactions (block). The known PBFT protocol is then run within a bounded-size committee in order to achieve agreement and commit the block to the blockchain indefinitely. In Helix, transactions are encrypted via a threshold encryption scheme in order to hide information from the ordering nodes, limiting censorship and front-running. The encryption is further used to realize a verifiable source of randomness, which in turn is used to elect the committees in an unpredictable way, as well as to introduce a correlated sampling scheme of transactions included in a proposed block. The correlated sampling scheme restricts nodes from promoting their own transactions over those of others. Nodes are elected to participate in committees in proportion to their relative reputation. Reputation, attributed to each node, serves as a measure of obedience to the protocol's instructions. Committees are thus chosen in a way that is beneficial to the protocol.
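
Reputation-proportional committee election from a shared random seed can be sketched as follows. The node names, weights, and seed derivation are illustrative assumptions, not Helix's exact selection rule; the point is that every honest node derives the same committee deterministically from public data.

```python
import hashlib
import random

# Committee election weighted by reputation, seeded by shared randomness.
reputation = {"node_a": 5, "node_b": 3, "node_c": 1, "node_d": 1}

def elect(seed: bytes, size: int):
    # All nodes derive the same PRNG state from the public seed.
    rng = random.Random(hashlib.sha256(seed).digest())
    chosen = []
    pool = dict(reputation)
    for _ in range(size):            # weighted sampling without replacement
        nodes, weights = zip(*pool.items())
        pick = rng.choices(nodes, weights=weights)[0]
        chosen.append(pick)
        del pool[pick]
    return chosen

committee = elect(b"round-42", 2)
assert committee == elect(b"round-42", 2)   # deterministic given the seed
```

Making the seed unpredictable until election time, via the threshold-encryption-based randomness described above, is what keeps committee membership unmanipulable.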

This paper studies the security of the Ring Oscillator Physically Unclonable Function (PUF) with Enhanced Challenge-Response Pairs as proposed by Delavar et al. We present an attack that can predict all PUF responses after querying the PUF with n+2 attacker-chosen queries. This result renders the proposed RO-PUF with Enhanced Challenge-Response Pairs unsuitable for most typical PUF use cases, including but not limited to all cases where an attacker has query access.

ePrint Report
Delegating Computations with (almost) Minimal Time and Space Overhead
Justin Holmgren, Ron D. Rothblum

The problem of verifiable delegation of computation considers a setting in which a client wishes to outsource an expensive computation to a powerful, but untrusted, server. Since the client does not trust the server, we would like the server to certify the correctness of the result. Delegation has emerged as a central problem in cryptography, with a flurry of recent activity in both theory and practice. In all of these works, the main bottleneck is the overhead incurred by the server, both in time and in space.

Assuming (sub-exponential) LWE, we construct a one-round argument-system for proving the correctness of any time $T$ and space $S$ RAM computation, in which both the verifier and prover are highly efficient. The verifier runs in time $n \cdot polylog(T)$ and space $polylog(T)$, where $n$ is the input length. Assuming $S \geq \max(n,polylog(T))$, the prover runs in time $\tilde{O}(T)$ and space $S + o(S)$, and in many natural cases even $S+polylog(T)$. Our solution uses somewhat homomorphic encryption but, surprisingly, only requires homomorphic evaluation of arithmetic circuits having multiplicative depth (which is a main bottleneck in homomorphic encryption) $\log_2\log(T)+O(1)$.
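To get a feel for how small this multiplicative depth is, the bound can be evaluated directly. Here the inner logarithm is taken base 2 and the unspecified $O(1)$ constant is dropped, both assumptions for illustration:

```python
import math

def depth_bound(T):
    # multiplicative depth log2(log(T)) from the abstract; the inner log is
    # taken base 2 here and the additive O(1) constant is left out
    return math.log2(math.log2(T))

# even for astronomically long computations, the depth stays tiny
assert round(depth_bound(2 ** 64)) == 6
assert round(depth_bound(2 ** 1024)) == 10
```

This doubly-logarithmic growth is why the scheme sidesteps the depth bottleneck of homomorphic encryption.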

Prior works based on standard assumptions had a $T^c$ time prover, where $c \geq 3$ (at the very least). As for the space usage, we are unaware of any work, even based on non-standard assumptions, that has space usage $S+polylog(T)$.

Along the way to constructing our delegation scheme, we introduce several technical tools that we believe may be useful for future work.

ePrint Report
Encrypted Databases for Differential Privacy
Archita Agarwal, Maurice Herlihy, Seny Kamara, Tarik Moataz

The problem of privatizing statistical databases is a well-studied topic that
has culminated in the notion of differential privacy. The complementary
problem of securing these databases, however, has---as far as
we know---not been considered in the past. While the security of private
databases is in theory orthogonal to the problem of private statistical
analysis (e.g., in the central model of differential privacy the curator is
trusted), the recent real-world deployments of differentially-private systems
suggest that it will become a problem of increasing importance.
In this work, we consider the problem of designing encrypted databases (EDB)
that support differentially-private statistical queries. More precisely, these
EDBs should support a set of encrypted operations with which a curator can securely
query and manage its data, and a set of private operations with which an
analyst can privately analyze the data. Using such an EDB, a curator can
securely outsource its database to an untrusted server (e.g., on-premise or in
the cloud) while still allowing an analyst to privately query it.

We show how to design an EDB that supports private histogram queries. As a building block, we introduce a differentially-private encrypted counter based on the binary mechanism of Chan et al. (ICALP, 2010). We then carefully combine multiple instances of this counter with a standard encrypted database scheme to support differentially-private histogram queries.
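The binary mechanism underlying the counter is simple to state: maintain noisy partial sums over dyadic intervals, so that any running count is a sum of at most logarithmically many noisy terms. Below is a plaintext sketch of that mechanism (not the paper's encrypted variant):

```python
import math
import random

class BinaryCounter:
    """Differentially private running counter via the binary mechanism of
    Chan et al. Plaintext sketch only; the paper combines this with an
    encrypted database scheme."""

    def __init__(self, epsilon, horizon):
        self.levels = max(1, math.ceil(math.log2(horizon + 1)))
        self.scale = self.levels / epsilon   # Laplace scale per partial sum
        self.t = 0
        self.alpha = [0.0] * (self.levels + 1)   # exact dyadic partial sums
        self.noisy = [0.0] * (self.levels + 1)   # their noised releases

    def _laplace(self):
        u = random.random() - 0.5
        return -self.scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    def update(self, x):
        """Ingest x_t in {0,1}; return the noisy count of the first t items."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1  # lowest set bit of t
        # fold the finished smaller intervals into the new dyadic interval
        self.alpha[i] = sum(self.alpha[j] for j in range(i)) + x
        for j in range(i):
            self.alpha[j] = 0.0
            self.noisy[j] = 0.0
        self.noisy[i] = self.alpha[i] + self._laplace()
        # the count at time t is the sum of the noisy sums selected by t's bits
        return sum(self.noisy[j] for j in range(self.levels + 1)
                   if (self.t >> j) & 1)
```

Each input item affects at most `levels` partial sums, so adding Laplace noise of scale `levels / epsilon` to each sum yields epsilon-differential privacy for the whole stream.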

ePrint Report
Cryptanalysis of Low-Data Instances of Full LowMCv2
Christian Rechberger, Hadi Soleimany, Tyge Tiessen

LowMC is a family of block ciphers designed for a low multiplicative complexity. The specification allows a large variety of instantiations, differing in block size, key size, number of S-boxes applied per round and allowed data complexity. The number of rounds deemed secure is determined by evaluating a number of attack vectors and taking the number of rounds still secure against the best of these.

In this paper, we demonstrate that the attacks considered by the designers of LowMC in version 2 of the round formula were not sufficient to fend off all possible attacks. For instantiations of LowMC with one of the most useful settings, namely few applied S-boxes per round and only low allowable data complexities, efficient attacks based on difference enumeration techniques can be constructed. We show that it is most effective to consider tuples of differences instead of simple differences, both to increase the range of the distinguishers and to enable key recovery attacks. All applications of LowMC we are aware of, including signature schemes like Picnic and more recent (ring/group) signature schemes, have used version 3 of the round formula for LowMC, which already takes our attack into account.

ePrint Report
Stronger Security for Sanitizable Signatures
Stephan Krenn, Kai Samelin, Dieter Sommer

Sanitizable signature schemes (SSS) enable a designated party (called the sanitizer) to alter admissible
blocks of a signed message. This primitive can be used to remove or alter sensitive data from already signed
messages without involvement of the original signer.
Current state-of-the-art security definitions of SSSs only define a "weak" form of security. Namely, the unforgeability,
accountability and transparency definitions are not strong enough to be meaningful in certain use-cases. We
identify some of these use-cases, close this gap by introducing stronger definitions, and show how to alter an
existing construction to meet our desired security level. Moreover, we clarify a small yet important detail in the
state-of-the-art privacy definition. Our work allows this primitive to be deployed in a broader range of scenarios.

20
September
2018

ePrint Report
Raptor: A Practical Lattice-Based (Linkable) Ring Signature
Xingye Lu, Man Ho Au, Zhenfei Zhang

We present (linkable) Raptor, the first practical lattice-based (linkable) ring signature. Our scheme is as fast as classical solutions, while the size of the signature is roughly 1.3 KB per user. Our design is based on a completely new generic construction that is provably secure in the random oracle model. Prior to our work, all existing lattice-based solutions were analogues of their discrete-log or pairing-based counterparts. We give instantiations in both the standard lattice setting, as a proof of concept, and over NTRU lattices, as an efficient instantiation. Our main building block is a so-called Chameleon Hash Plus (CH+) function, which may be of independent research interest.
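CH+ itself is new, but the flavor of trapdoor at play can be illustrated with the classical discrete-log chameleon hash: whoever knows the trapdoor can open any hash value to any message. Toy parameters below; the paper's construction is lattice-based, so this is only an analogy:

```python
# classical discrete-log chameleon hash H(m, r) = g^m * h^r (mod p),
# with h = g^x; the trapdoor x finds collisions for any target message.
# Toy parameters: p = 2q + 1, g generates the order-q subgroup.
P, Q = 23, 11
G = 2
X = 7                  # trapdoor
H = pow(G, X, P)       # public hash key

def ch_hash(m, r):
    return (pow(G, m % Q, P) * pow(H, r % Q, P)) % P

def collide(m, r, m_new):
    # choose r_new with m_new + x*r_new = m + x*r (mod q),
    # so ch_hash(m_new, r_new) == ch_hash(m, r)
    return (r + (m - m_new) * pow(X, -1, Q)) % Q
```

Without the trapdoor, finding collisions is as hard as computing discrete logs; with it, collisions are a single modular operation. This asymmetry is what chameleon-hash-based signature constructions exploit.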

ePrint Report
Measuring, simulating and exploiting the head concavity phenomenon in BKZ
Shi Bai, Damien Stehlé, Weiqiang Wen

The Blockwise-Korkine-Zolotarev (BKZ) lattice reduction algorithm is central in cryptanalysis, in particular for lattice-based cryptography. A precise understanding of its practical behavior in terms of run-time and output quality is necessary for parameter selection in cryptographic design. As the provable worst-case bounds poorly reflect the practical behavior, cryptanalysts rely instead on the heuristic BKZ simulator of Chen and Nguyen (Asiacrypt'11). It fits better with practical experiments, but not entirely. In particular, it over-estimates the norm of the first few vectors in the output basis. Put differently, BKZ performs better than its Chen-Nguyen simulation.

In this work, we first report experiments providing more insight on this shorter-than-expected phenomenon. We then propose a refined BKZ simulator by taking the distribution of short vectors in random lattices into consideration. We report experiments suggesting that this refined simulator more accurately predicts the concrete behavior of BKZ. Furthermore, we design a new BKZ variant that exploits the shorter-than-expected phenomenon. For the same cost assigned to the underlying SVP-solver, the new BKZ variant produces bases of better quality. We further illustrate its potential impact by testing it on the SVP-120 instance of the Darmstadt lattice challenge.
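The Gaussian heuristic that drives such simulators is easy to state in code. The sketch below is a deliberately simplified one-tour update in its spirit on a hypothetical basis profile; the actual Chen-Nguyen simulator additionally special-cases small blocks and the basis tail:

```python
import math

def log_gh(beta):
    # natural log of the Gaussian heuristic radius for a unit-volume
    # lattice of dimension beta
    return math.lgamma(beta / 2 + 1) / beta - 0.5 * math.log(math.pi)

def simulate_tour(log_norms, beta):
    # one simplified BKZ tour: set each head Gram-Schmidt log-norm to the
    # Gaussian heuristic prediction for its block, spreading the removed
    # mass over the rest of the block to keep the total log-volume fixed
    l = list(log_norms)
    n = len(l)
    for k in range(n - 1):
        b = min(beta, n - k)
        pred = log_gh(b) + sum(l[k:k + b]) / b
        if pred < l[k]:
            delta = l[k] - pred
            l[k] = pred
            for j in range(k + 1, k + b):
                l[j] += delta / (b - 1)
    return l
```

Running this on a steep profile shows the head shrinking while the log-volume stays constant; the paper's refinement corrects exactly the head predictions, where this kind of model over-estimates the norms.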

ePrint Report
On the Security of the PKCS#1 v1.5 Signature Scheme
Tibor Jager, Saqib A. Kakvi, Alexander May

The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and the fact that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends replacing it with the more complex and less efficient RSA-PSS scheme, as the latter is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques inapplicable.
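The deterministic padding in question is the EMSA-PKCS1-v1_5 encoding: a fixed two-byte prefix, a run of 0xFF bytes, and a DER-encoded hash identifier followed by the digest, with no randomness anywhere. A minimal sketch for SHA-256, using the standard DigestInfo prefix from RFC 8017:

```python
import hashlib

# DER-encoded DigestInfo prefix for SHA-256, as listed in RFC 8017
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def emsa_pkcs1_v15_encode(message: bytes, em_len: int) -> bytes:
    """Deterministic PKCS#1 v1.5 signature padding: 00 01 FF..FF 00 || T."""
    t = SHA256_PREFIX + hashlib.sha256(message).digest()
    if em_len < len(t) + 11:
        raise ValueError("intended encoded message length too short")
    padding = b"\xff" * (em_len - len(t) - 3)
    return b"\x00\x01" + padding + b"\x00" + t
```

Because the encoding is a fixed function of the message, a given message always maps to the same padded value, which is what blocks the usual randomized-padding proof techniques.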

We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.

In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.

ePrint Report
Multi-party Poisoning through Generalized $p$-Tampering
Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed

In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data $T$ with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, $T$ might be gathered gradually from $m$ data providers $P_1,\dots,P_m$ who generate and submit their shares of $T$ in an online way.

In this work, we initiate a formal study of $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the parties, and even for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). For $k=m$, this model becomes the traditional notion of poisoning, and for $p=1$ it coincides with the standard notion of corruption in multi-party computation.

We prove that if there is an initial constant error for the generated hypothesis $h$, there is always a $(k,p)$-poisoning attacker who can decrease the confidence of $h$ (to have a small error), or alternatively increase the error of $h$, by $\Omega(p \cdot k/m)$. Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy.
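The magnitude of the bound is easy to sanity-check with back-of-the-envelope arithmetic. This is a hypothetical toy accounting of the adversary's reach, not the paper's proof:

```python
def expected_shift(k, m, p, per_sample_shift=0.5):
    # the adversary touches a k/m fraction of providers, each of their samples
    # with marginal probability p; flipping a fair {0,1} sample to a fixed
    # value moves that sample's expectation by 1/2
    return (k / m) * p * per_sample_shift

# matches the Omega(p * k / m) bound up to the constant factor
assert expected_shift(k=2, m=10, p=0.5) == 0.05
assert expected_shift(k=10, m=10, p=1.0) == 0.5
```

The two extremes recover the familiar special cases: full corruption with full tampering gives the maximal shift, while small `k` or `p` scales the adversary's influence down linearly.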

At a technical level, we prove a general lemma about biasing bounded functions $f(x_1,\dots,x_n)\in[0,1]$ through an attack model in which each block $x_i$ might be controlled by an adversary with marginal probability $p$ in an online way. When the probabilities are independent, this coincides with the model of $p$-tampering attacks, thus we call our model generalized $p$-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the $p$-tampering model and generalize the results in both of these areas.

ePrint Report
Towards a Smart Contract-based, Decentralized, Public-Key Infrastructure
Christos Patsonakis, Katerina Samari, Mema Roussopoulos, Aggelos Kiayias

Public-key infrastructures (PKIs) are an integral part of the security foundations of digital communications. Their widespread deployment has allowed the growth of important applications such as internet banking and e-commerce. Centralized PKIs (CPKIs) rely on a hierarchy of trusted Certification Authorities (CAs) for issuing, distributing and managing the status of digital certificates, i.e., unforgeable data structures that attest to the authenticity of an entity's public key. Unfortunately, CPKIs have many downsides in terms of security and fault tolerance, and there have been numerous security incidents throughout the years. Decentralized PKIs (DPKIs) were proposed to deal with these issues, as they rely on multiple, independent nodes. Nevertheless, decentralization raises other concerns, such as what incentives the participating nodes have to ensure the service's availability.

In our work, we leverage the scalability, as well as the built-in incentive mechanism, of blockchain systems and propose a smart contract-based DPKI. The main barrier to realizing a smart contract-based DPKI is the size of the contract's state which, being its most expensive resource to access, should be minimized for a construction to be viable. We resolve this problem by proposing and using in our DPKI a public-state cryptographic accumulator of constant size, a cryptographic tool which may be of independent interest in the context of blockchain protocols. We are also the first to formalize the DPKI design problem in the Universal Composability (UC) framework and formally prove the security of our construction under the strong RSA assumption in the Random Oracle model and the existence of an ideal smart contract functionality.
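The accumulator idea can be sketched with a classical RSA accumulator: a single group element commits to an arbitrarily large set, and membership is proven with a constant-size witness. Toy modulus of known factorization below, so illustration only; the paper's public-state construction adds more machinery:

```python
# toy RSA accumulator: the accumulator value is G^(product of members) mod N.
# Real deployments need a modulus of unknown factorization and require set
# elements to be primes; 3233 = 61 * 53 is a toy modulus.
N = 3233
G = 2

def accumulate(members):
    acc = G
    for p in members:
        acc = pow(acc, p, N)
    return acc

def witness(members, member):
    # witness = accumulation of every element except `member`
    return accumulate([p for p in members if p != member])

def verify(acc, member, wit):
    # raising the witness by the member must reproduce the accumulator
    return pow(wit, member, N) == acc
```

However many elements are accumulated, both the accumulator and each witness stay a single group element, which is what keeps the smart contract's state constant-size.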
