## IACR News

Updates on the COVID-19 situation are on the
Announcement channel.

Here you can see all recent updates to the IACR webpage.

#### 14 February 2020

###### Trondheim, Norway, 25 April - 30 April 2021

Eurocrypt
Event date: 25 April to 30 April 2021

###### Kohei Nakagawa, Hiroshi Onuki, Atsushi Takayasu, Tsuyoshi Takagi

ePrint Report
Isogeny-based cryptography is a kind of post-quantum cryptography whose security relies on the hardness of an isogeny problem over elliptic curves. In this paper, we study
CSIDH, an isogeny-based cryptosystem presented by Castryck et al. at Asiacrypt 2018. In CSIDH, the secret key is taken from an $L_\infty$-norm ball of integer vectors, and the public key is generated by calculating the action of the ideal class corresponding to the secret key. For faster key exchange, it is important to accelerate the algorithm calculating the action of the ideal class group, and many such approaches have been studied recently. Several papers have shown that CSIDH becomes more efficient when the secret key space is changed to a weighted $L_\infty$-norm ball.
In this paper, we revisit this approach and try to find an optimal secret key space that minimizes the computational cost of the group action. First, we obtain an optimal secret key space by analyzing the computational cost of CSIDH with respect to the number of operations on $\mathbb{F}_p$. Since the optimal key space is too complicated to sample a secret key from uniformly, we approximate it using an $L_1$-norm ball and propose algorithms for uniform sampling with a precomputed table. In experiments with CSIDH-512, we show that the computational cost for the $L_1$-norm ball is about 20\% lower than that for the $L_\infty$-norm ball, using a precomputed table of 160 Kbytes. This cost is only 1.08 times that of the optimal secret key space. Finally, we also discuss possible sampling algorithms using other norm balls and their efficiency.
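As background, here is a toy sketch of sampling an integer secret-key vector from an $L_\infty$-norm ball, and from an $L_1$-norm ball by rejection. This is only an illustration with made-up small parameters; the paper's table-based samplers avoid the rejection step entirely.

```python
import random

def sample_linf(n, m):
    """Uniformly sample an integer vector from the L-infinity ball:
    each coordinate independently uniform in [-m, m]."""
    return [random.randint(-m, m) for _ in range(n)]

def sample_l1(n, m, bound):
    """Uniformly sample from the integer L1 ball of radius `bound` inside
    the L-infinity ball, by rejection (toy method; the paper instead uses
    a precomputed table to sample without rejection)."""
    while True:
        v = sample_linf(n, m)
        if sum(abs(x) for x in v) <= bound:
            return v

# toy parameters; CSIDH-512 itself uses a 74-dimensional exponent vector
key = sample_l1(n=6, m=3, bound=10)
```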

###### Prabhanjan Ananth, Abhishek Jain, ZhengZhong Jin, Giulio Malavolta

ePrint Report
We construct a multikey fully-homomorphic encryption scheme (multikey FHE) with one-round threshold decryption in the plain model, i.e. without a trusted setup, assuming the intractability of standard problems over lattices. Prior to our work, such a scheme was only known to exist assuming sub-exponentially hard indistinguishability obfuscation.

###### Nathan Keller, Asaf Rosemarin

ePrint Report
The HADES design strategy combines the classical SPN construction with the Partial SPN (PSPN) construction, in which at every encryption round, the non-linear layer is applied to only a part of the state. In a HADES design, a middle layer that consists of PSPN rounds is surrounded by outer layers of SPN rounds. The security arguments of HADES with respect to statistical attacks use only the SPN rounds, disregarding the PSPN rounds. This allows the designers to impose no restrictions on the MDS matrix used as the linear mixing operation.

In this paper we show that the choice of the MDS matrix significantly affects the security level provided by HADES designs. If the MDS is chosen properly, then the security level of the scheme against differential and linear attacks is significantly higher than claimed by the designers. On the other hand, weaker choices of the MDS allow for extremely large invariant subspaces that pass the entire middle layer without activating any non-linear operation (a.k.a. S-box).

We showcase our results on the Starkad and Poseidon instantiations of HADES. For Poseidon, we significantly improve the lower bounds on the number of active S-boxes with respect to both differential and linear cryptanalysis provided by the designers -- for example, from 28 to 60 active S-boxes for the t=6 variant. For Starkad, we show that the t=24 variant proposed by the designers admits an invariant subspace of a huge size of $2^{1134}$ that passes any number of PSPN rounds without activating any S-box. Furthermore, we show that the problem can be fixed easily by replacing t with any value that is not divisible by four.
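As background, the difference between full (SPN) and partial (PSPN) rounds can be sketched over a toy field (round constants are omitted, and the mixing matrix is left as a parameter; the paper's point is precisely that its choice matters):

```python
P = 2**31 - 1  # toy prime field; real HADES instances use much larger primes

def sbox(x):
    return pow(x, 3, P)  # cube S-box, used in some HADES instantiations

def mix(state, matrix):
    """Linear layer: multiply the state vector by the mixing matrix mod P."""
    t = len(state)
    return [sum(matrix[i][j] * state[j] for j in range(t)) % P for i in range(t)]

def full_round(state, matrix):
    """SPN round: apply the S-box to every word, then the linear layer."""
    return mix([sbox(x) for x in state], matrix)

def partial_round(state, matrix):
    """PSPN round: apply the S-box to a single word only, then the linear layer."""
    return mix([sbox(state[0])] + list(state[1:]), matrix)
```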

###### Santosh Ghosh, Luis S Kida, Soham Jayesh Desai, Reshma Lal

ePrint Report
This paper proposes a method to protect DMA data transfers that can be used to offload computation to an accelerator. The proposal minimizes changes to the hardware platform and to the application and software stack. The paper describes the end-to-end scheme to protect communication between an application running inside an SGX enclave and an FPGA accelerator, optimized for bandwidth and latency, and details the implementation of AES-GCM hardware engines with high bandwidth and low latency.

###### Christian Badertscher, Ueli Maurer, Christopher Portmann, Guilherme Rito

ePrint Report
This paper takes a fresh approach to systematically characterizing,
comparing, and understanding CCA-type security definitions for
public-key encryption (PKE), a topic with a long history. The
justification for a concrete security definition $X$ is relative to a
benchmark application (e.g. confidential communication): Does the use
of a PKE scheme satisfying $X$ imply the security of the application?
Because unnecessarily strong definitions may lead to unnecessarily
inefficient schemes or unnecessarily strong computational assumptions,
security definitions should be as weak as possible, i.e. as close as
possible to (but above) the benchmark. Understanding the hierarchy of
security definitions, partially ordered by the implication (i.e. at
least as strong) relation, is hence important, as is placing the
relevant applications as benchmark levels within the hierarchy.

CCA-2 security is apparently the strongest notion, but because it is arguably too strong, Canetti, Krawczyk, and Nielsen (Crypto 2003) proposed the relaxed notions of Replayable CCA security (RCCA) as perhaps the weakest meaningful definition, and they investigated the space between CCA and RCCA security by proposing two versions of Detectable RCCA (d-RCCA) security which are meant to ensure that replays of ciphertexts are either publicly or secretly detectable (and hence preventable).

The contributions of this paper are three-fold. First, following the work of Coretti, Maurer, and Tackmann (Asiacrypt 2013), we formalize the three benchmark applications of PKE that serve as the natural motivation for security notions, namely the construction of certain types of (possibly replay-protected) confidential channels (from an insecure and an authenticated communication channel). Second, we prove that RCCA does not achieve the confidentiality benchmark and, contrary to previous belief, that the proposed d-RCCA notions are not even relaxations of CCA-2 security. Third, we propose the natural security notions corresponding to the three benchmarks: an appropriately strengthened version of RCCA to ensure confidentiality, as well as two notions for capturing public and secret replay detectability.

###### Eugene Frimpong, Alexandros Bakas, Hai-Van Dang, Antonis Michalas

ePrint Report
Symmetric Searchable Encryption (SSE) allows the outsourcing of encrypted data to possibly untrusted third-party services while simultaneously giving users the opportunity to search over the encrypted data in a secure and privacy-preserving way. Currently, the majority of SSE schemes have been designed to fit a typical cloud service scenario where users (clients) encrypt their data locally and upload them securely to a remote location. While this scenario fits the cloud paradigm squarely, it cannot apply to the emerging field of Internet of Things (IoT). This is due to the fact that the performance of most of the existing SSE schemes has been tested using powerful machines and not the constrained devices used in IoT services. The focus of this paper is to prove that SSE schemes can, under certain circumstances, work on constrained devices and eventually be adopted by IoT services. To this end, we designed and implemented a forward private dynamic SSE scheme that can run smoothly on resource-constrained devices. To do so, we adopted a fog node scenario where edge (constrained) devices sense data, encrypt them locally and use the capabilities of fog nodes to store sensed data in a remote location (the cloud). Consequently, end users can search for specific keywords over the stored ciphertexts without revealing anything about their content. Our scheme achieves efficient computational operations and supports the multi-client model. The performance of the scheme is evaluated by conducting extensive experiments. Finally, the security of the scheme is proven through a theoretical analysis that considers the existence of a malicious adversary.
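As background, here is a minimal dynamic searchable index in Python. This is a toy sketch, not the paper's scheme: update labels are derived from a per-keyword PRF key and a counter, so the server learns nothing about a keyword's postings until the client releases the search token. (It also lacks genuine forward privacy: once a keyword's key is revealed by a search, future updates for it become linkable, which real forward-private schemes prevent.)

```python
import hmac, hashlib, os

def prf(key, msg):
    """PRF instantiated as HMAC-SHA256."""
    return hmac.new(key, msg, hashlib.sha256).digest()

class Client:
    def __init__(self):
        self.master = os.urandom(32)
        self.counters = {}  # keyword -> number of updates so far

    def add_token(self, keyword, doc_id):
        kw_key = prf(self.master, keyword.encode())
        c = self.counters.get(keyword, 0)
        self.counters[keyword] = c + 1
        label = prf(kw_key, c.to_bytes(4, "big"))  # fresh, unlinkable label
        return label, doc_id  # doc_id would be encrypted in a real scheme

    def search_token(self, keyword):
        return prf(self.master, keyword.encode()), self.counters.get(keyword, 0)

class Server:
    def __init__(self):
        self.index = {}
    def add(self, label, value):
        self.index[label] = value
    def search(self, kw_key, count):
        # recompute the labels for counters 0..count-1 and collect matches
        labels = [prf(kw_key, c.to_bytes(4, "big")) for c in range(count)]
        return [self.index[l] for l in labels if l in self.index]
```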

###### Stefan Dziembowski, Grzegorz Fabiański, Sebastian Faust, Siavash Riahi

ePrint Report
Most blockchains do not scale well, i.e., they cannot quickly process large amounts of transactions. Moreover, using blockchains can be expensive in real life, since blockchain operations cost fees. One remedy for these problems is \emph{off-chain} (or: \emph{Layer-2}) protocols, where the massive bulk of transactions is kept outside of the main blockchain. In the optimistic case, off-chain protocols drastically improve scalability, since typically the users only need to communicate with the blockchain when they enter or exit the system. In the pessimistic case, when parties are malicious, a ``smart contract'' running on the underlying blockchain guarantees that no coins are stolen.

In this work we initiate the study of the inherent limitations of off-chain protocols. Concretely, we investigate the so-called \emph{Plasma} systems (also called ``commit chains''), and show that malicious parties can always launch an attack that forces the honest parties to communicate large amounts of data to the blockchain. More concretely, the adversary can always either (a) force the honest parties to communicate a lot with the blockchain, even though they did not intend to (this is traditionally called a \emph{mass exit}); or (b) force an honest party that wants to leave the system to quickly communicate large amounts of data to the blockchain. What makes these attacks particularly hard to handle in real life (and also making our result stronger) is that these attacks do not have so-called \emph{uniquely attributable faults}, i.e.~the smart contract cannot determine which party is malicious, and hence cannot force it to pay the fees for the blockchain interaction.

An important implication of our result is that the benefits of two of the most prominent Plasma types, called \emph{Plasma Cash} and \emph{Fungible Plasma}, cannot be achieved simultaneously. Our results apply to every Plasma system, and cannot be circumvented by introducing additional cryptographic assumptions.

###### Mohammad Zaheri, Adam O'Neill

ePrint Report
Classically, selective-opening attack (SOA) has been studied for randomized primitives, like randomized encryption schemes and commitments. The study of SOA for deterministic primitives, which presents some unique challenges, was initiated by Bellare et al. (PKC 2015), who showed negative results. Subsequently, Hoang et al. (ASIACRYPT 2016) showed positive results in the non-programmable random oracle model. Here we show the first positive results for SOA security of deterministic primitives in the standard (RO devoid) model. Our results are:
\begin{itemize}
\item Any $2t$-wise independent hash function is SOA secure for an unbounded number of ``$t$-correlated'' messages, meaning any group of up to $t$ messages are arbitrarily correlated.
\item An analogous result for deterministic encryption, from a close variant of an NPROM scheme proposed by Hoang et al.
\item We connect the one-more-RSA problem of Bellare et al. (J.~Cryptology 2003) to this context and demonstrate this problem is hard under the $\Phi$-Hiding Assumption with large enough encryption exponent.
\end{itemize}
Our results indicate that SOA for deterministic primitives in the standard model is more tractable than prior work would indicate.
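As background on the first item, the standard construction of a $k$-wise independent hash family is a uniformly random polynomial of degree $k-1$ over a prime field; a sketch (parameters here are illustrative, not taken from the paper):

```python
import random

P = 2**61 - 1  # prime modulus; evaluation is over the field GF(P)

def make_kwise_hash(k, seed=None):
    """A random degree-(k-1) polynomial over GF(P) is a k-wise independent
    hash family: the outputs of any k distinct inputs are jointly uniform."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(P) for _ in range(k)]
    def h(x):
        acc = 0
        for a in coeffs:          # Horner evaluation of the polynomial
            acc = (acc * x + a) % P
        return acc
    return h

h = make_kwise_hash(k=8)  # 8-wise independent family member
```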

###### Dimitris Karakostas, Aggelos Kiayias

ePrint Report
Distributed ledgers based on the Proof-of-Work (PoW) paradigm are typically most vulnerable when mining participation is low. During these periods an attacker can mount devastating attacks, such as double spending or censorship of transactions. Checkpointing has been proposed as a mechanism to mitigate such 51% attacks. The core idea is to employ an external set of parties that securely run an assisting service which guarantees the ledger's properties and can be relied upon at times when the invested hashing power is low. We realize the assisting service in two ways, via checkpointing and timestamping, and show that a ledger, which employs either, is secure with high probability, even in the presence of an adversarial mining majority. We put forth the first rigorous study of checkpointing as a mechanism to protect PoW ledgers from 51% attacks. Notably, our design is the first to offer both consistency and liveness guarantees, even under adversarial mining majorities. Our liveness analysis also identifies a previously undocumented attack, namely front-running, which enables Denial-of-Service against existing checkpointed ledger systems. We showcase the liveness guarantees of our mechanism by evaluating the checkpointed version of Ethereum Classic, a blockchain which recently suffered a 51% attack, and build a federated distributed checkpointing service, which provides high assurance with low performance requirements. Finally, we prove the security of our timestamping mechanism, build a fully decentralized timestamping solution, by utilizing a secure distributed ledger, and evaluate its performance on the existing Bitcoin and Ethereum systems.

###### Daan Leermakers, Boris Skoric

ePrint Report
We revisit Unclonable Encryption as introduced by Gottesman in 2003. We introduce two qubit-based prepare-and-measure Unclonable Encryption schemes with re-usable keys. In our first scheme there is no classical communication from Alice to Bob. Over a noisy channel its communication rate is lower than in Quantum Key Recycling schemes that lack unclonability. Our second scheme needs more rounds but has the advantage that it achieves the same rate as Quantum Key Distribution.

We provide security proofs for both our schemes, based on the diamond norm distance, taking noise into account.

###### Martine De Cock, Rafael Dowsley, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen, Ariel Todoki

ePrint Report
In this paper, we present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function. To the best of our knowledge, we present the fastest existing secure Multi-Party Computation implementation for training logistic regression models on high dimensional genome data distributed across a local area network.
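The paper's activation subprotocol is not detailed in this summary. As general background, MPC implementations of logistic regression commonly replace the sigmoid with an MPC-friendly surrogate; the sketch below shows the piecewise-linear surrogate popularized by SecureML (an assumption for illustration, not necessarily this paper's choice):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mpc_sigmoid(z):
    """Piecewise-linear stand-in for the sigmoid (SecureML-style):
    MPC-friendly because it needs only comparisons and additions, no
    exponentiation. Its maximum error vs. the true sigmoid is about 0.38."""
    if z <= -0.5:
        return 0.0
    if z >= 0.5:
        return 1.0
    return z + 0.5
```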

###### Saikrishna Badrinarayanan, James Bartusek, Sanjam Garg, Daniel Masny, Pratyay Mukherjee

ePrint Report
We present a reusable two-round multi-party computation (MPC) protocol from the Decisional Diffie-Hellman (DDH) assumption. In particular, we show how to upgrade any secure two-round MPC protocol to allow reusability of its first message across multiple computations, using Homomorphic Secret Sharing (HSS) and pseudorandom functions in NC1, each of which can be instantiated from DDH.

In our construction, if the underlying two-round MPC protocol is secure against semi-honest adversaries (in the plain model) then so is our reusable two-round MPC protocol. Similarly, if the underlying two-round MPC protocol is secure against malicious adversaries (in the common random/reference string model) then so is our reusable two-round MPC protocol. Previously, such reusable two-round MPC protocols were only known under assumptions on lattices.

At a technical level, we show how to upgrade any two-round MPC protocol to a first message succinct two-round MPC protocol, where the first message of the protocol is generated independently of the computed circuit (though it is not reusable). This step uses homomorphic secret sharing (HSS) and low-depth pseudorandom functions. Next, we show a generic transformation that upgrades any first message succinct two-round MPC to allow for reusability of its first message.

###### Prabhanjan Ananth, Abhishek Jain, Zhengzhong Jin

ePrint Report
The notion of threshold multi-key fully homomorphic encryption (TMK-FHE) [Lopez-Alt, Tromer, Vaikuntanathan, STOC'12] was proposed as a generalization of fully homomorphic encryption to the multiparty setting. In a TMK-FHE scheme for $n$ parties, each party can individually choose a key pair and use it to encrypt its own private input. Given $n$ ciphertexts computed in this manner, the parties can homomorphically evaluate a circuit $C$ over them to obtain a new ciphertext containing the output of $C$, which can then be decrypted via a threshold decryption protocol. The key efficiency property is that the size of the (evaluated) ciphertext is independent of the size of the circuit.

TMK-FHE with one-round threshold decryption, first constructed by Mukherjee and Wichs [Eurocrypt'16], has found several powerful applications in cryptography over the past few years. However, an important drawback of all such TMK-FHE schemes is that they require a common setup which results in applications in the common random string model.

To address this concern, we propose a notion of multiparty homomorphic encryption (MHE) that retains the communication efficiency property of TMK-FHE, but sacrifices on the efficiency of final decryption. Specifically, MHE is defined in a similar manner as TMK-FHE, except that the final output computation process performed locally by each party is ``non-compact'' in that we allow its computational complexity to depend on the size of the circuit. We observe that this relaxation does not have a significant bearing in many important applications of TMK-FHE.

Our main contribution is a construction of MHE from the learning with errors assumption in the plain model. Our scheme can be used to remove the setup in many applications of TMK-FHE. For example, it yields the first construction of low-communication reusable non-interactive MPC in the plain model. To obtain our result, we devise a recursive self-synthesis procedure to transform any ``delayed-function'' two-round MPC protocol into an MHE scheme.

###### Xavier Bonnetain, Rémi Bricout, André Schrottenloher, Yixin Shen

ePrint Report
We present new classical and quantum algorithms for solving random subset-sum instances. First, we improve over the Becker-Coron-Joux algorithm (EUROCRYPT 2011) from $\widetilde{\mathcal{O}} \left(2^{0.291 n}\right)$ down to $\widetilde{\mathcal{O}} \left(2^{0.283 n}\right)$, using more general representations with values in $\{-1,0,1,2\}$.
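As background, the classical meet-in-the-middle algorithm of Horowitz and Sahni solves subset-sum in $\widetilde{\mathcal{O}}(2^{n/2})$ time and is the baseline that representation-based algorithms such as BCJ (and this paper) improve on; a minimal sketch, not the paper's algorithm:

```python
from itertools import combinations

def subset_sum_mitm(weights, target):
    """Horowitz-Sahni meet-in-the-middle: enumerate all subset sums of each
    half of the input, then look for a matching pair. Returns the indices of
    a solution subset, or None."""
    half = len(weights) // 2
    left, right = weights[:half], weights[half:]

    def all_sums(items):
        table = {}
        for r in range(len(items) + 1):
            for combo in combinations(range(len(items)), r):
                table.setdefault(sum(items[i] for i in combo), combo)
        return table

    left_sums, right_sums = all_sums(left), all_sums(right)
    for s, combo in left_sums.items():
        rest = right_sums.get(target - s)
        if rest is not None:
            return list(combo) + [half + i for i in rest]
    return None
```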

Next, we improve the state of the art of quantum algorithms for this problem in several directions. By combining the Howgrave-Graham-Joux algorithm (EUROCRYPT 2010) and quantum search, we devise an algorithm with asymptotic cost $\widetilde{\mathcal{O}} \left(2^{0.236 n}\right)$, lower than the cost of the quantum walk based on the same classical algorithm proposed by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013). This algorithm has the advantage of using classical memory with quantum random access, while the previously known algorithms used the quantum walk framework, and required quantum memory with quantum random access.

We also propose new quantum walks for subset-sum, performing better than the previous best time complexity of $\widetilde{\mathcal{O}} \left(2^{0.226 n}\right)$ given by Helm and May (TQC 2018). We combine our new techniques to reach a time $\widetilde{\mathcal{O}} \left(2^{0.216 n}\right)$. This time is dependent on a heuristic on quantum walk updates, formalized by Helm and May, that is also required by the previous algorithms. We show how to partially overcome this heuristic, and we obtain an algorithm with quantum time $\widetilde{\mathcal{O}} \left(2^{0.218 n}\right)$ requiring only the standard classical subset-sum heuristics.

#### 13 February 2020

###### Genova, Italy, 19 June 2020

Event Calendar
Event date: 19 June 2020

Submission deadline: 16 March 2020

Notification: 12 April 2020

###### Jinhyun So, Basak Guler, A. Salman Avestimehr

ePrint Report
Federated learning is gaining significant interest as it enables model training over a large volume of data stored in a distributed manner across many users, while protecting the privacy of the individual users. However, a major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In fact, the overhead of state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. We propose a new scheme, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log{N})$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to $14\times$ speedup over the state-of-the-art schemes with up to $N=200$ users. We also experimentally evaluate the impact of several key network parameters (e.g., user dropout rate, bandwidth, and model size) on the performance of Turbo-Aggregate.
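As background, the additive-secret-sharing core that such aggregation protocols build on can be sketched in a few lines; this toy omits Turbo-Aggregate's multi-group circular structure, coding redundancy, and dropout handling:

```python
import random

Q = 2**31  # ring modulus; real systems share each model coordinate this way

def share(value, n):
    """Split `value` into n additive shares mod Q; any n-1 shares look random."""
    parts = [random.randrange(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

def aggregate(user_values):
    """Each user distributes shares of its value; summing the per-holder
    partial sums reveals only the total, not any individual input."""
    n = len(user_values)
    shares = [share(v, n) for v in user_values]  # shares[i][j]: user i's share held by user j
    partial = [sum(shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partial) % Q
```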

###### Stefan Dziembowski, Paweł Kędzior

ePrint Report
\emph{Off-chain channel networks} are one of the most promising technologies for dealing with blockchain scalability and delayed finality issues. Parties that are connected within such networks can send coins to each other without interacting with the blockchain. Moreover, these payments can be ``routed'' over the network. Thanks to this, even parties that do not have a channel in common can perform payments between each other with the help of intermediaries.

In this paper, we present a new technique (that we call \emph{Dynamic Internal Payment Splitting (DIPS)}) that allows the intermediaries in the network to split the payments into several sub-payments. This can be done recursively multiple times by subsequent intermediaries. Moreover, the resulting ``payment receipts'' can be aggregated by each intermediary into one short receipt that can be propagated back in the network. We present a protocol (that we call ``Ethna'') that uses this technique. We provide a formal security definition of our protocol and we prove that Ethna satisfies it. We also implement a simple variant of Ethna in Solidity and provide some benchmarks.

###### Aron Gohr, Sven Jacob, Werner Schindler

ePrint Report
This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge in the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint of our own and later as a contribution to a preprint write-up of the solutions by the three winning teams. This solution serves as a baseline for other solutions explored in the paper.

Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge. This solution significantly improves upon all previously published solutions of the AES challenge, including our baseline linear solution.

Third, we consider the question of leakage attribution for both the classifier we used in the challenge and for our deep neural network. Direct inspection of the weight vector of our machine learning model yields a lot of information on the implementation for our linear classifier. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model.

Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent work on useful features in adversarial examples in our application domain, we find that a linear classifier generalizing to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side channel model while also yielding insights on adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist.
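The occlusion strategy for leakage attribution mentioned above has a simple shape: blank out successive windows of a trace and record how much the model's score drops. A toy sketch with a generic scoring function (the model and trace here are placeholders, not the paper's):

```python
import numpy as np

def occlusion_attribution(model_score, trace, window=50):
    """Zero out successive windows of the trace and record how much the
    model's score drops; large drops mark regions the model relies on."""
    base = model_score(trace)
    drops = []
    for start in range(0, len(trace) - window + 1, window):
        occluded = trace.copy()
        occluded[start:start + window] = 0.0
        drops.append(base - model_score(occluded))
    return np.array(drops)
```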

###### Alex Bienstock, Allison Bishop, Eli Goldin, Garrison Grogan, Victor Lecomte

ePrint Report
In the fall of 2018, a professor became obsessed with conspiracy theories of deeper connections between discrete-log based cryptography and lattice based cryptography. That obsession metastasized and spread to some of the students in the professor's cryptography course through a cryptanalysis challenge that was set as a class competition. The students and the professor continued travelling further down the rabbit hole, refusing to stop when the semester was over. Refusing to stop even as some of the students graduated, and really refusing to stop even now, but pausing long enough to write up this chronicle of their exploits.