IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
03 June 2020
Zhen Hang Jiang, Yunsi Fei, Aidong Adam Ding, Thomas Wahl
Recent years have seen a variety of side-channel timing attacks demonstrated on both CPUs and GPUs, in diverse settings such as desktops, clouds, and mobile systems. These attacks infer, from timing information, events on shared resources in the memory hierarchy, and then deduce secret-dependent memory access patterns to retrieve the secret through statistical analysis. We generalize these attacks under the term memory-based side-channel attacks.
In this paper, we propose a novel software countermeasure, MemPoline, against memory-based side-channel attacks. MemPoline hides the secret-dependent memory access pattern by moving sensitive data around randomly within a memory space. Compared to prior oblivious RAM technology, MemPoline employs parameter-directed permutations to achieve randomness, which are significantly more efficient yet provide similar security. Our countermeasure only requires modifying the source code, and has the advantages of being general (algorithm-agnostic), portable (independent of the underlying architecture), and compatible (a user-space approach that works with any operating system or hypervisor). We run a thorough evaluation of our countermeasure, applying it to both AES, a symmetric cipher, and RSA, an asymmetric cipher. Both empirical results and theoretical analysis show that our countermeasure resists a series of existing memory-based side-channel attacks on CPUs and GPUs.
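To make the random-permutation idea concrete, here is a toy Python sketch (our illustration, not the authors' implementation) of hiding a table's access pattern by permuting it in memory and refreshing the permutation periodically. MemPoline's actual parameter-directed permutations are cheaper than resampling a full random permutation, and it migrates data incrementally:

```python
import random  # for brevity only; a real countermeasure needs a CSPRNG

class ShuffledTable:
    """Toy access-pattern hiding for a sensitive lookup table.

    Lookups go through a secret permutation, so the physical slot touched
    for logical index i changes whenever the permutation is refreshed.
    """

    def __init__(self, table, refresh_every=64):
        self.perm = list(range(len(table)))   # logical index -> physical slot
        self.data = list(table)
        self.refresh_every = refresh_every
        self.accesses = 0

    def _refresh(self):
        new_perm = list(range(len(self.data)))
        random.shuffle(new_perm)
        new_data = [None] * len(self.data)
        for logical, phys in enumerate(self.perm):
            new_data[new_perm[logical]] = self.data[phys]
        self.perm, self.data = new_perm, new_data

    def lookup(self, i):
        self.accesses += 1
        if self.accesses % self.refresh_every == 0:
            self._refresh()
        return self.data[self.perm[i]]

sbox = ShuffledTable(range(256))
assert all(sbox.lookup(i) == i for i in range(256))  # semantics unchanged
```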
Prastudy Fauzi, Helger Lipmaa, Zaira Pindado, Janno Siim
We define a new primitive that we call a somewhere statistically binding (SSB) commitment scheme, which is a generalization of dual-mode commitments but has similarities with SSB hash functions (Hubacek and Wichs, ITCS 2015) without local opening.
In (existing) SSB hash functions, one can compute a hash of a vector $v$ that is statistically binding in one coordinate of $v$.
Meanwhile, in SSB commitment schemes, a commitment of a vector $v$ is statistically binding in some coordinates of $v$ and is statistically hiding in the other coordinates.
The set of indices where binding holds is predetermined but known only to the commitment key generator.
We show that the primitive can be instantiated by generalizing the succinct Extended Multi-Pedersen commitment scheme (González et al., Asiacrypt 2015).
We further introduce the notion of functional SSB commitment schemes and, importantly, use it to get an efficient quasi-adaptive NIZK for arithmetic circuits and efficient oblivious database queries.
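For intuition, here is a toy, coordinate-wise Python sketch of the dual-mode idea underlying SSB commitments (our illustration via standard DDH-style key switching, not the succinct Extended Multi-Pedersen scheme): a DDH key pair makes a coordinate perfectly hiding, a non-DDH pair makes it perfectly binding, and under DDH the commitment key hides which coordinate is which.

```python
import secrets

# Toy group: squares modulo a safe prime (parameters far too small for real use).
p, q = 1019, 509          # p = 2q + 1, both prime
g = 4                     # generates the order-q subgroup of squares
h = pow(g, 123, p)        # second generator; a real key would avoid a known dlog

def keygen(n, binding_index):
    """One key pair (u_j, v_j) per coordinate.  A DDH pair (g^a, h^a) makes
    coordinate j perfectly hiding; a non-DDH pair (g^a, h^b), b != a, makes
    it perfectly binding.  DDH says the two key types look alike, so only
    the key generator knows where binding holds."""
    keys = []
    for j in range(n):
        a = secrets.randbelow(q)
        b = a if j != binding_index else (a + 1 + secrets.randbelow(q - 1)) % q
        keys.append((pow(g, a, p), pow(h, b, p)))
    return keys

def commit(keys, vector):
    """c_j = (g^r * u^m, h^r * v^m).  In a hiding coordinate both components
    expose only the single uniform exponent r + a*m; in a binding coordinate
    the pair of exponents r + a*m and r + b*m pins m down uniquely."""
    out = []
    for (u, v), m in zip(keys, vector):
        r = secrets.randbelow(q)
        out.append((pow(g, r, p) * pow(u, m, p) % p,
                    pow(h, r, p) * pow(v, m, p) % p))
    return out

keys = keygen(4, binding_index=2)   # binding at coordinate 2, hidden from users
print(commit(keys, [7, 1, 3, 0]))
```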
Zi-Yuan Liu, Yi-Fan Tseng, Raylin Tso
In this manuscript, we review the lattice-based public-key encryption with keyword search scheme proposed by Zhang \textit{et al.} in IEEE Transactions on Dependable and Secure Computing in 2019. We demonstrate that this scheme is insecure against inside keyword guessing attacks, despite the security proof given by Zhang \textit{et al.} In addition, we identify that the lattice-based proxy-oriented identity-based encryption with keyword search scheme proposed by Zhang \textit{et al.} in Information Sciences in 2019 suffers from the same issue.
Feng Hao, Shen Wang, Samiran Bag, Rob Procter, Siamak Shahandashti, Maryam Mehrnezhad, Ehsan Toreini, Roberto Metere, Lana Liu
On 2 May 2019, during the UK local elections, an e-voting trial was conducted in Gateshead, using a touch-screen end-to-end verifiable e-voting system. This was the first trial of verifiable e-voting for polling station voting in the UK, and it presented a case study to envisage the future of e-voting.
Fuyuki Kitagawa, Takahiro Matsuda, Takashi Yamakawa
We give a construction of a non-interactive zero-knowledge (NIZK) argument for all NP languages based on a succinct non-interactive argument (SNARG) for all NP languages and a one-way function. The succinctness requirement for the SNARG is rather mild: we only require that the proof size be $|\pi|=\mathsf{poly}(\lambda)(|x|+|w|)^c$ for some constant $c<1/2$, where $|x|$ is the statement length, $|w|$ is the witness length, and $\lambda$ is the security parameter. In particular, we require nothing about the efficiency of verification.
Based on this result, we also give a generic conversion from a SNARG to a zero-knowledge SNARG assuming the existence of CPA secure public-key encryption. For this conversion, we require a SNARG to have efficient verification, i.e., the computational complexity of the verification algorithm is $\mathsf{poly}(\lambda)(|x|+|w|)^{o(1)}$. Before this work, such a conversion was only known if we additionally assume the existence of a NIZK.
Along the way of obtaining our result, we give a generic compiler to upgrade a NIZK for all NP languages with non-adaptive zero-knowledge to one with adaptive zero-knowledge. Though this can be shown by carefully combining known results, to the best of our knowledge, no explicit proof of this generic conversion has been presented.
Yuncong Hu, Sam Kumar, Raluca Ada Popa
Data-sharing systems are often used to store sensitive data. Both academia and industry have proposed numerous solutions to protect user privacy and data integrity from a compromised server. Practical state-of-the-art solutions, however, use weak threat models based on centralized trust: they assume that part of the server will remain uncompromised, or that the adversary will not perform active attacks. We propose Ghostor, a data-sharing system that, using only decentralized trust, (1) hides user identities from the server, and (2) allows users to detect server-side integrity violations. To achieve (1), Ghostor avoids keeping any per-user state at the server, requiring us to redesign the system to avoid common paradigms like per-user authentication and user-specific mailboxes. To achieve (2), Ghostor develops a technique called verifiable anonymous history. Ghostor leverages a blockchain rarely, publishing only a single hash to the blockchain for the entire system once every epoch. We measured that Ghostor incurs a 4-5x throughput overhead compared to an insecure baseline. Although significant, Ghostor's overhead may be worth it for security- and privacy-sensitive applications.
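The "single hash per epoch" idea can be illustrated with a short sketch (ours, not Ghostor's actual verifiable anonymous history): the server folds the epoch's operation history into one Merkle root and publishes only that digest on-chain; clients later check inclusion proofs for their own operations.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    d = hashlib.sha256()
    for part in parts:
        d.update(part)
    return d.digest()

def merkle_root(leaves):
    layer = [H(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:                 # duplicate last node on odd layers
            layer.append(layer[-1])
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# One epoch of (anonymized) operations on the data-sharing server.  The
# only on-chain cost for the whole system is this single 32-byte root.
epoch_ops = [b"obj17:put:v3:token-a1", b"obj17:get:token-b7", b"obj42:put:v1:token-c2"]
on_chain_checkpoint = merkle_root(epoch_ops)
# A client who performed obj17:get later demands a Merkle inclusion proof
# against the checkpoint; a server that dropped or reordered the operation
# cannot produce one, which is how integrity violations surface.
```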
Saeid Esmaeilzade, Ziba Eslami, Nasrollah Pakniat
Oblivious transfer (OT) is a fundamental problem in cryptography where it is required that a sender transfers one of potentially many pieces of information to a receiver and at the same time remains oblivious as to which piece has been transferred. After its introduction back in 1981 by Rabin, some more useful variations of OT appeared in the literature such as $OT^1_2$, $OT^1_n$, and $OT^k_n$. In 2015, a very simple and efficient OT protocol was proposed by Chou and Orlandi. Later, Hauck and Loss proposed an improved protocol and proved it to be fully UC-secure under the CDH assumption. Our goal in this paper is to extend the results of Hauck and Loss and propose a simple generic construction to build $OT^1_2$ and in general $OT^1_n$. The machinery we employ is homomorphic encryption. We instantiate our construction with some well known homomorphic encryption schemes such as RSA, Paillier, and NTRU to obtain concrete OT protocols. We further provide the details of the proof of the UC-security of our generic construction.
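As a concrete reference point for the homomorphic-encryption machinery, here is a minimal semi-honest sketch of $OT^1_2$ from an additively homomorphic scheme (a folklore construction with toy Paillier parameters; the paper's protocols additionally achieve UC security, which this sketch does not attempt):

```python
import secrets
from math import gcd

# --- Toy Paillier (far too small for real use; real moduli are >= 2048 bits) ---
P, Q = 1000003, 1000033
n, n2 = P * Q, (P * Q) ** 2
lam = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)   # Carmichael lambda(n)
mu = pow(lam, -1, n)

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2   # (1+n)^m * r^n mod n^2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# --- OT^1_2, semi-honest: receiver sends Enc(b), sender answers obliviously ---
b = 1                       # receiver's choice bit
c_b = enc(b)                # first message: hides b from the sender

m0, m1 = 42, 99             # sender's two messages
# Enc(m0 + b*(m1 - m0)); the fresh enc(m0) factor also rerandomizes the reply.
reply = pow(c_b, (m1 - m0) % n, n2) * enc(m0) % n2

assert dec(reply) == (m1 if b else m0)   # receiver learns exactly m_b
```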
Ward Beullens, Shuichi Katsumata, Federico Pintore
We construct efficient ring signatures from isogeny and lattice assumptions. Our ring signatures are based on a logarithmic OR proof for group actions. We then instantiate this group action by either the CSIDH group action or an MLWE-based group action to obtain our isogeny-based or lattice-based ring signature scheme, respectively. Even though this OR proof has a binary challenge space and therefore needs to be repeated a linear number of times, the size of our ring signatures is small and scales better with the ring size $N$ than previously known post-quantum ring signatures. We also construct linkable ring signatures that are almost as efficient as the non-linkable variant.
The signature size of our isogeny-based construction is an order of magnitude smaller than all previously known logarithmic post-quantum ring signatures, but it is relatively slow (e.g. 5.5 KB signatures and 79 s signing time for rings with 8 members). In comparison, our lattice-based construction is much faster, but has larger signatures (e.g. 30 KB signatures and 90 ms signing time for the same ring size). For small ring sizes our lattice-based ring signatures are slightly larger than state-of-the-art schemes, but they are smaller for ring sizes larger than $N \approx 1024$.
Liliya Kraleva, Nikolai L. Manev, Vincent Rijmen
In this paper we study two-round key-alternating block ciphers with round function $f(x)=x^{(2^t+1)2^s}$, where $t,s$ are positive integers. We describe an algorithm to compute the distribution of the correlation with respect to input and output masks. In the case $t=1$, the correlation distributions are completely determined for arbitrary pairs of masks. We investigate the case $f(x)=x^3$ in more detail, fully deriving and classifying the distributions and proving that there are only 5 possible values for the correlation for any pair of masks.
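The flavour of such correlation computations is easy to reproduce for the underlying monomial in isolation (our toy experiment over $GF(2^5)$, not the paper's two-round analysis): the correlation of the masked approximation $\langle\alpha,x\rangle \oplus \langle\beta,x^3\rangle$ already takes very few distinct values.

```python
POLY, N = 0b100101, 5          # x^5 + x^2 + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiply in GF(2^5) with polynomial reduction."""
    r = 0
    for _ in range(N):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= POLY
    return r

def cube(x):
    return gf_mul(x, gf_mul(x, x))

def parity(x):
    return bin(x).count("1") & 1

def corr(alpha, beta):
    """Correlation of <alpha, x> XOR <beta, x^3> over all 32 field elements."""
    s = sum((-1) ** (parity(alpha & x) ^ parity(beta & cube(x))) for x in range(32))
    return s / 32

values = sorted({corr(a, b) for a in range(32) for b in range(32)})
print(values)   # [-0.25, 0.0, 0.25, 1.0]: the almost-bent spectrum of x -> x^3
```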
Ignacio Cascudo, Bernardo David
In this paper we present ALBATROSS, a family of multiparty randomness generation protocols with guaranteed output delivery and public verification that allows one to trade off corruption tolerance for a much improved amortized computational complexity. Our basic stand-alone protocol is based on publicly verifiable secret sharing (PVSS) and is secure in the random oracle model under the decisional Diffie-Hellman (DDH) hardness assumption.
We also address the important issue of constructing Universally Composable randomness beacons, showing two UC versions of ALBATROSS: one based on simple UC NIZKs and another based on novel efficient ``designated verifier'' homomorphic commitments. Interestingly, this latter version can be instantiated from a global random oracle under the weaker computational Diffie-Hellman (CDH) assumption.
An execution of ALBATROSS with $n$ parties, out of which up to $t=(1/2-\epsilon)\cdot n$ are corrupt for a constant $\epsilon>0$, generates $\Theta(n^2)$ uniformly random values, requiring in the worst case an amortized cost per party of $\Theta(\log n)$ exponentiations per random value. We significantly improve on the SCRAPE protocol (Cascudo and David, ACNS 17), which required $\Theta(n^2)$ exponentiations per party to generate one uniformly random value. This is achieved mainly via two techniques: first, the use of packed Shamir secret sharing for the PVSS; second, the use of linear $t$-resilient functions (computed via a Fast Fourier Transform-based algorithm) to improve the randomness extraction.
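The first of these techniques is easy to illustrate in isolation. Below is a minimal sketch of packed Shamir sharing (ours; ALBATROSS uses FFT-based evaluation over a large field rather than this naive Lagrange interpolation): one degree-$(t+k-1)$ polynomial hides $k$ secrets at once, so the same $n$ shares carry $k$ random values instead of one.

```python
import secrets as rnd

q = 2**61 - 1                       # Mersenne prime field for the demo

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial through the given points (mod q)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total

def packed_share(secrets_vec, n, t):
    """Secrets sit at points -1..-k; shares at 1..n.  The degree is t+k-1,
    so any t shares are independent of the secrets and t+k reconstruct."""
    k = len(secrets_vec)
    defining = [(q - (i + 1), s) for i, s in enumerate(secrets_vec)]   # x = -(i+1)
    defining += [(j, rnd.randbelow(q)) for j in range(1, t + 1)]       # t random shares
    return [(x, lagrange_eval(defining, x)) for x in range(1, n + 1)]

def reconstruct(shares, k):
    return [lagrange_eval(shares, q - (i + 1)) for i in range(k)]

shares = packed_share([11, 22, 33], n=10, t=3)       # k = 3 secrets, n = 10 parties
assert reconstruct(shares[:6], k=3) == [11, 22, 33]  # any t + k = 6 shares suffice
```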
Pascal Lafourcade, Marius Lombard-Platet
A blockchain is designed to be a self-sufficient decentralised ledger: a peer verifying the validity of past transactions only needs to download the blockchain (the ledger) and nothing else. However, it might be of interest to make two different blockchains interoperable, i.e., to allow one blockchain to transmit information to another blockchain. In this paper, we give a formalisation of this problem, and we prove that blockchain interoperability is impossible according to the classical definition of a blockchain. Under a weaker definition of blockchain, we demonstrate that making two blockchains interoperable is equivalent to creating a `2-in-1' blockchain containing both ledgers, thus limiting the theoretical interest of making interoperable blockchains in the first place. We also observe that all practical existing interoperable blockchain frameworks indeed work by exchanging already created tokens between two blockchains, and not by offering the possibility to transfer tokens from one blockchain to another, which would modify the balance of total created tokens on both blockchains. This confirms that interoperability is only possible by creating a `2-in-1' blockchain containing both ledgers.
Henri Aare, Peter Vitols
Distributed ledger technology has been widely hailed as a breakthrough technology. It has realised a great number of application scenarios, and improved the workflow of many domains. Nonetheless, there remain a few major concerns in adopting and deploying distributed ledger technology at scale. In this white paper, we tackle two of them, namely throughput scalability and confidentiality protection for transactions. We learn from the existing body of research and build a scale-out blockchain platform called RVChain that champions privacy. RVChain takes advantage of trusted execution environments to offer confidentiality protection for transactions, and scales the throughput of the network in proportion to the number of network participants by supporting parallel shadow chains.
Jeff Burdges, Alfonso Cevallos, Peter Czaban, Rob Habermeier, Syed Hosseini, Fabio Lama, Handan Kilinc Alper, Ximin Luo, Fatemeh Shirazi, Alistair Stewart, Gavin Wood
In this paper we describe the design components of the heterogeneous multi-chain protocol Polkadot and explain how these components help Polkadot address some of the existing shortcomings of blockchain technologies.
At present, a vast number of blockchain projects have been introduced and employed, with various features that are not necessarily designed to work with each other. This makes it difficult for users to utilise a large number of applications across different blockchain projects. Moreover, as the number of projects increases, the security that each one provides individually becomes weaker.
Polkadot aims to provide a scalable and interoperable framework for multiple chains with pooled security that is achieved by the collection of components described in this paper.
Kyungbae Jang, Seungjoo Choi, Hyeokdong Kwon, Hwajeong Seo
The Grover search algorithm reduces the security level of symmetric-key cryptography with an $n$-bit secret key to $O(2^{n/2})$.
In order to evaluate the cost of the Grover search algorithm, the target block cipher should be implemented as a quantum circuit.
Recently, many research works have evaluated the quantum resources required for AES block ciphers by optimizing the expensive substitution layer. However, only a few works have been devoted to ARX-based lightweight block ciphers, which are an active research area.
In this paper, we present optimized implementations of the SPECK 32/64 and SPECK 64/128 block ciphers for quantum computers. To the best of our knowledge, this is the first implementation of SPECK in quantum circuits.
The primitive operations for the SPECK block cipher, namely addition, rotation, and exclusive-or, are finely optimized to achieve an optimal quantum circuit in terms of qubits, Toffoli gates, CNOT gates, and X gates.
The proposed method can be applied to other ARX-based lightweight block ciphers, such as LEA, HIGHT, and CHAM.
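For reference, the ARX operations in question are easy to pin down classically (our Python sketch of SPECK 32/64 from the public specification, not the paper's quantum circuit): in the quantum setting, rotations are free qubit relabelings, XORs become CNOT gates, and the modular addition is the costly part, built from Toffoli gates in a reversible adder.

```python
MASK = 0xFFFF                      # 16-bit words for SPECK 32/64

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def round_fn(x, y, k):
    # ARX round: rotate, add mod 2^16, XOR the round key, rotate, XOR.
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(k3, k2, k1, k0, rounds=22):
    # Key schedule reuses the round function structure on the l-words.
    l, keys = [k1, k2, k3], [k0]
    for i in range(rounds - 1):
        l.append(((keys[i] + ror(l[i], 7)) & MASK) ^ i)
        keys.append(rol(keys[i], 2) ^ l[-1])
    return keys

def encrypt(x, y, key_words):
    for k in expand_key(*key_words):
        x, y = round_fn(x, y, k)
    return x, y

# Test vector from the SPECK specification (SPECK 32/64).
assert encrypt(0x6574, 0x694C, (0x1918, 0x1110, 0x0908, 0x0100)) == (0xA868, 0x42F2)
```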
Anne Broadbent, Raza Ali Kazmi
An indistinguishability obfuscator is a probabilistic polynomial-time algorithm that takes a circuit as input and outputs a new circuit with the same functionality, such that for any two circuits of the same size that compute the same function, the outputs of the obfuscator are computationally indistinguishable. Here, we study schemes for indistinguishability obfuscation for quantum circuits ($qi\mathcal{O}$): a procedure that takes as input a quantum circuit and outputs a quantum state together with a quantum circuit, which can jointly be used to evaluate the original quantum circuit on any quantum input. The security guarantee is that for two quantum circuits of the same size with the same input-output behaviour, the outputs of the obfuscator are computationally indistinguishable. Our main result provides a construction for $qi\mathcal{O}$ where the size of the obfuscator's output is exponential in the number of non-Clifford gates (T gates), which means that the construction is efficient as long as the number of T gates is logarithmic in the circuit size.
Jeffrey Burdges, Luca De Feo
We introduce a new primitive named Delay Encryption, and give an efficient instantiation based on isogenies of supersingular curves and pairings.
Delay Encryption is related to Time-lock Puzzles and Verifiable Delay Functions, and can be roughly described as "identity-based encryption with slow derived private key issuance".
It has several applications in distributed protocols, such as sealed-bid Vickrey auctions and electronic voting.
We give an instantiation of Delay Encryption by modifying Boneh and Franklin's IBE scheme, where we replace the master secret key by a long chain of isogenies, as in the isogeny VDF of De Feo, Masson, Petit and Sanso. Similarly to the isogeny-based VDF, our Delay Encryption requires a trusted setup before parameters can be safely used; our trusted setup is identical to that of the VDF, thus the same parameters can be generated once and shared for many executions of both protocols, with possibly different delay parameters.
We also discuss several topics around delay protocols based on isogenies that were left untreated by De Feo et al., namely: distributed trusted setup, watermarking, and implementation issues.
Anish Saxena, Biswabandan Panda
Flush-based cache attacks like Flush+Reload and Flush+Flush are among the most effective cache attacks; the Flush+Flush attack is stealthy, too. Most flush-based attacks provide high accuracy in controlled environments where the attacker and the victim are the only two processes running on a system and share OS pages. However, we observe that these attacks lose their effectiveness (become prone to low accuracy) on a noisy multi-core system where co-running applications execute alongside the attacker and the victim. Two root causes for the varying accuracy of flush-based attacks are: (i) the dynamic nature of core frequencies, which fluctuate depending on the system load, and (ii) the relative placement of victim and attacker threads in the processor (same logical core, same physical core, different physical cores). The variation in processor frequencies and the placement of threads affect one of the critical attack steps, the cache-latency calibration step, as the latency threshold set to distinguish a cache hit from a miss becomes inaccurate.
We propose a set of refinements (DABANGG refinements) to make existing flush attacks resilient to frequency changes and thread placement in the processor, and therefore to system noise. We refine the pre-attack and attack steps to make them aware of latency changes. We evaluate DABANGG-enabled Flush+Reload and Flush+Flush attacks (DABANGG+Flush+Reload and DABANGG+Flush+Flush, respectively) against the standard Flush+Reload and Flush+Flush attacks across four scenarios, for eight different combinations of system noise capturing different levels of compute, memory, and I/O noise intensities: (i) a side-channel attack based on user input (single-character and multi-character key-logging), (ii) a side-channel attack on AES, (iii) a covert channel, and (iv) a transient execution attack in the form of the Spectre attack.
For all the scenarios, DABANGG+Flush+Reload and DABANGG+Flush+Flush outperform the standard Flush+Reload and Flush+Flush attacks in terms of F1-score and accuracy.
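The fragile calibration step is easy to see in miniature. A toy sketch (our hypothetical latency numbers, not figures from the paper) of the naive approach: a fixed hit/miss threshold picked at one core frequency misclassifies hits once DVFS slows the core, because cycle counts from a constant-rate timestamp counter stretch with the clock.

```python
import statistics

def calibrate_threshold(hit_samples, miss_samples):
    """Naive calibration: put the cutoff midway between the two latency
    clusters.  The point of the DABANGG refinements is that a one-shot
    static threshold like this breaks once core frequency or thread
    placement changes."""
    return (statistics.mean(hit_samples) + statistics.mean(miss_samples)) / 2

# Hypothetical reload latencies in timestamp-counter cycles.
hits_fast, misses_fast = [40, 44, 42, 41], [180, 176, 190, 184]      # full clock
hits_slow, misses_slow = [120, 130, 126, 124], [540, 530, 555, 545]  # clocked down

threshold = calibrate_threshold(hits_fast, misses_fast)   # ~112 cycles
print(all(h < threshold for h in hits_fast))   # True:  calibration regime works
print(all(h < threshold for h in hits_slow))   # False: every slow-clock hit now
                                               # looks like a miss
```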
Erik-Oliver Blass, Florian Kerschbaum
Efficient multi-party protocols are commonly composed of different sub-protocols, combining techniques such as homomorphic encryption, secret or Boolean sharing, and garbled circuits. To ensure security of the composed protocol against malicious adversaries, one needs to prove in zero-knowledge that conversions between individual techniques are correct. However, efficient ZK proofs for conversion between fully homomorphic encryption and garbled circuits are still an open problem. In this paper, we design new efficient proofs and apply them to a new class of multi-party protocols which themselves are composed out of two-party protocols. We integrate both types of compositions: compositions of fully homomorphic encryption with garbled circuits, and compositions of multi-party protocols from two-party protocols. As a result, we can construct communication-efficient protocols for special problems with malicious security. To show the usefulness of this approach, we give an example scheme for private set analytics, i.e., private set disjointness. This scheme enjoys lower communication complexity than a solution based on generic multi-party protocols and lower computation cost than fully homomorphic encryption. So, our design is more suitable for deployments in wide-area networks, such as the Internet, with many participants, or for problems with circuits of moderate or high multiplicative depth.
Pedro Branco, Nico Döttling, Paulo Mateus
Oblivious Linear Evaluation (OLE) is a simple yet powerful cryptographic primitive which allows a sender, holding an affine function $f(x)=a+bx$ over a finite field, to let a receiver learn $f(w)$ for a $w$ of the receiver's choice. In terms of security, the sender remains oblivious of the receiver's input $w$, whereas the receiver learns nothing beyond $f(w)$ about $f$. In recent years, OLE has emerged as an essential building block to construct efficient, reusable and maliciously-secure two-party computation.
In this work, we present efficient two-round protocols for OLE based on the Learning with Errors (LWE) assumption. Our first protocol for OLE is secure against malicious unbounded receivers and semi-honest senders. The receiver's first message is reusable, meaning that it can be reused over several executions of the protocol, and it may carry information about a batch of inputs, and not just a single input. We then show how we can extend the above protocol to provide malicious security for both parties, albeit at the cost of reusability.
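To fix intuition for the functionality and for the reusable, batched first message, here is a folklore sketch from additively homomorphic encryption (ours, reusing the same toy Paillier helper as in the OT sketch above; the paper's construction instead relies on LWE and additionally achieves malicious security):

```python
import secrets
from math import gcd

# Toy Paillier under the receiver's key (parameters far too small for real use).
P, Q = 1000003, 1000033
n, n2 = P * Q, (P * Q) ** 2
lam = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Receiver's single first message: a batch of encrypted inputs.  It is
# reusable -- any number of executions (with different senders or different
# affine functions) can answer the same message.
batch_w = [5, 17, 123]
first_msg = [enc(w) for w in batch_w]

def ole_reply(first_msg, funcs):
    """Sender evaluates f_i(w_i) = a_i + b_i * w_i under encryption,
    learning nothing about the w_i; the fresh enc(a) rerandomizes."""
    return [pow(cw, b % n, n2) * enc(a) % n2 for cw, (a, b) in zip(first_msg, funcs)]

funcs = [(3, 7), (1, 2), (10, 0)]               # sender's affine functions (a_i, b_i)
outputs = [dec(c) for c in ole_reply(first_msg, funcs)]
assert outputs == [(a + b * w) % n for (a, b), w in zip(funcs, batch_w)]
```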
David Knichel, Pascal Sasdrich, Amir Moradi
Implementing cryptographic functions securely in the presence of physical adversaries is still a challenge, although the lion's share of research in the physical security domain has gone into the development of countermeasures. Among the various protection schemes, masking has absorbed the most research attention in both the academic and industrial communities, due to its theoretical foundation, which allows one to prove or model the achieved security level. In return, masking schemes are difficult to implement, as the implementation process is often manual, complex, and error-prone. This motivates the need for formal verification tools that allow designers and engineers to analyze and verify designs before manufacturing.
In this work, we present a new framework to analyze and verify masked implementations against various security notions, using different security models as reference. In particular, our framework, which directly processes the gate-level netlist resulting from hardware synthesis, relies on Reduced Ordered Binary Decision Diagrams (ROBDDs) and the concept of statistical independence of probability distributions. Compared to existing tools, our framework stands out due to its simplicity, accuracy, and functionality, while still having reasonable efficiency for many applications and common use-cases.