IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
22 March 2018
Sam Kim, David J. Wu
      Non-interactive zero-knowledge (NIZK) proofs are fundamental to modern cryptography. Numerous NIZK constructions are known in both the random oracle and the common reference string (CRS) models. In the CRS model, there exist constructions from several classes of cryptographic assumptions such as trapdoor permutations, pairings, and indistinguishability obfuscation. Notably absent from this list, however, are constructions from standard lattice assumptions. While there has been partial progress in realizing NIZKs from lattices for specific languages, constructing NIZK proofs (and arguments) for all of NP from standard lattice assumptions remains open.
In this work, we make progress on this problem by giving the first construction of a multi-theorem NIZK argument for NP from standard lattice assumptions in the preprocessing model. In the preprocessing model, a (trusted) setup algorithm generates proving and verification keys. The proving key is needed to construct proofs and the verification key is needed to check proofs. In the multi-theorem setting, the proving and verification keys should be reusable for an unbounded number of theorems without compromising soundness or zero-knowledge. Existing constructions of NIZKs in the preprocessing model (or even the designated-verifier model) that rely on weaker assumptions like one-way functions or oblivious transfer are only secure in a single-theorem setting. Thus, constructing multi-theorem NIZKs in the preprocessing model does not seem to be inherently easier than constructing them in the CRS model.
We begin by constructing a multi-theorem preprocessing NIZK directly from context-hiding homomorphic signatures. Then, we show how to efficiently implement the preprocessing step using a new cryptographic primitive called blind homomorphic signatures. This primitive may be of independent interest. Finally, we show how to leverage our new lattice-based preprocessing NIZKs to obtain new malicious-secure MPC protocols purely from standard lattice assumptions.
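To make the preprocessing-model syntax concrete, here is a minimal Python sketch. Everything in it is a placeholder: the relation R and the MAC-based "proof" have no soundness or zero-knowledge (the prover holds the same key the verifier uses), and a real scheme gives the two keys different powers. The sketch only illustrates how a trusted setup generates a proving key and a verification key once, which are then reused across many theorems.

    import hashlib, hmac, os

    def R(x, w):
        # placeholder NP relation: w is a SHA-256 preimage of x
        return hashlib.sha256(w).digest() == x

    def setup():
        # trusted preprocessing: run once, keys reused for many theorems
        k = os.urandom(32)
        return k, k   # (k_P, k_V); equal here only for the sketch

    def prove(k_P, x, w):
        assert R(x, w)                 # the prover needs a valid witness
        # the "proof" is a MAC on the statement alone, so w itself is not sent
        return hmac.new(k_P, x, hashlib.sha256).digest()

    def verify(k_V, x, pi):
        return hmac.compare_digest(pi, hmac.new(k_V, x, hashlib.sha256).digest())

    k_P, k_V = setup()
    for w in (b"witness-1", b"witness-2"):   # multi-theorem reuse of the keys
        x = hashlib.sha256(w).digest()
        assert verify(k_V, x, prove(k_P, x, w))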
21 March 2018
San Jose, USA, 9 January - 11 January 2019
      Event date: 9 January to 11 January 2019
          
  Be'er Sheva, Israel, 21 June - 22 June 2018
      Event date: 21 June to 22 June 2018
Submission deadline: 25 March 2018
Notification: 8 April 2018
15 March 2018
Hong Kong, China, 27 August - 29 August 2018
      Event date: 27 August to 29 August 2018
Submission deadline: 15 April 2018
Notification: 1 June 2018
14 March 2018
Borching Su
      A public blockchain is proposed in an attempt to enable coin holders to participate in verifying mathematical theorems for public access.
Incentives are designed to encourage any party to contribute their knowledge by buying tokens of mathematical propositions that they believe are true.
The proposed blockchain is a platform for people to exchange their beliefs about mathematical propositions.
An implementation of this blockchain proposal, once established, will give the general public easy and instant access to reliable knowledge without having to read difficult proofs or blindly trust a small number of experts.
Conversely, experts from various fields may find it much easier to have their work appreciated by a wider audience, leading to greater impact.
Thanks to the incentives built into the blockchain, they can even earn a significant return by proving theorems not previously known to the blockchain.
Foundations interested in the validity of a particular proposition not yet explicitly recorded on the blockchain can donate funds, which are distributed to experts who contribute toward solving the specified problems.
Only those who erroneously create or buy tokens of a proposition that is eventually proven false will lose money. 
A reference design of the proposed blockchain that attempts to achieve these goals is described and analyzed.
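A hypothetical, heavily simplified reading of that incentive rule as a Python sketch (the class and its bookkeeping are illustrative inventions, not the proposal's reference design): holders stake money on propositions, and only holders of a proposition later proven false lose their stake.

    class PropositionMarket:
        def __init__(self):
            self.stakes = {}                      # proposition -> {holder: stake}

        def buy(self, prop, holder, amount):
            book = self.stakes.setdefault(prop, {})
            book[holder] = book.get(holder, 0) + amount

        def resolve(self, prop, proven_true):
            book = self.stakes.pop(prop, {})
            if proven_true:
                return book                       # believers keep their stakes
            return {h: 0 for h in book}           # holders of a false proposition lose

    market = PropositionMarket()
    market.buy("FLT holds", "alice", 10)
    assert market.resolve("FLT holds", proven_true=True) == {"alice": 10}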
          
  Douglas R. Stinson
      The purpose of this paper is to describe and analyze the Cayley-Purser algorithm, which is a public-key cryptosystem proposed by Flannery in 1999. I will present two attacks on it, one of which is apparently new. I will also examine a variant of the Cayley-Purser algorithm that was patented by Slavin in 2008, and show that it is also insecure.
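For readers unfamiliar with the scheme, here is a toy Python sketch of the Cayley-Purser encryption flow over 2x2 matrices modulo n, following Flannery's published description; the modulus and exponents are illustrative toy values (and, as the paper shows, the scheme is insecure at any size). It needs Python 3.8+ for pow(det, -1, n).

    import math, random

    def mmul(A, B, n):
        a, b, c, d = A; e, f, g, h = B
        return ((a*e + b*g) % n, (a*f + b*h) % n, (c*e + d*g) % n, (c*f + d*h) % n)

    def minv(A, n):
        a, b, c, d = A
        di = pow((a*d - b*c) % n, -1, n)          # inverse of the determinant
        return ((d*di) % n, (-b*di) % n, (-c*di) % n, (a*di) % n)

    def mpow(A, k, n):
        R = (1, 0, 0, 1)
        while k:
            if k & 1: R = mmul(R, A, n)
            A = mmul(A, A, n); k >>= 1
        return R

    def rand_invertible(n):
        while True:
            A = tuple(random.randrange(n) for _ in range(4))
            if math.gcd((A[0]*A[3] - A[1]*A[2]) % n, n) == 1:
                return A

    n = 61 * 53                                   # toy semiprime modulus
    alpha = rand_invertible(n)                    # key generation also requires
    chi = rand_invertible(n)                      # alpha and chi not to commute
    beta = mmul(mmul(minv(chi, n), minv(alpha, n), n), chi, n)
    gamma = mpow(chi, random.randrange(2, n), n)  # public key: n, alpha, beta, gamma

    mu = rand_invertible(n)                       # message matrix
    delta = mpow(gamma, random.randrange(2, n), n)
    epsilon = mmul(mmul(minv(delta, n), alpha, n), delta, n)
    kappa = mmul(mmul(minv(delta, n), beta, n), delta, n)
    mu_ct = mmul(mmul(kappa, mu, n), kappa, n)    # ciphertext: (epsilon, mu_ct)

    # decryption with the private chi: since delta is a power of chi,
    # lambda = chi^-1 epsilon chi equals kappa^-1
    lam = mmul(mmul(minv(chi, n), epsilon, n), chi, n)
    assert mmul(mmul(lam, mu_ct, n), lam, n) == mu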
          
  13 March 2018
Derek Leung, Adam Suhl, Yossi Gilad, Nickolai Zeldovich
      Decentralized cryptocurrencies rely on participants to keep track of the state of the system in order to verify new transactions. As the number of users and transactions grows, this requirement places a significant burden on the users, as they need to download, verify, and store a large amount of data in order to participate.
Vault is a new cryptocurrency designed to minimize these storage and bootstrapping costs for participants. Vault builds on Algorand's proof-of-stake consensus protocol and uses several techniques to achieve its goals. First, Vault decouples the storage of recent transactions from the storage of account balances, which enables Vault to delete old account state. Second, Vault allows sharding state across participants in a way that preserves strong security guarantees. Finally, Vault introduces the notion of stamping certificates, which allow a new client to catch up securely and efficiently in a proof-of-stake system without having to verify every single block.
Experiments with a prototype implementation of Vault's data structures show that Vault reduces the bandwidth cost of joining the network as a full client by 99.7% compared to Bitcoin and 90.5% compared to Ethereum when downloading a ledger containing 500 million transactions.
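The decoupling idea can be illustrated with a toy Python sketch (this is not Vault's actual data structure, and it ignores sharding and stamping certificates entirely): balances live in one structure that is kept forever, while the transaction log is a bounded window whose old entries can be deleted without losing the current state.

    from collections import deque

    class MiniLedger:
        def __init__(self, window=1000):
            self.balances = {}        # current account state, kept forever
            self.recent = deque()     # (seq, tx) pairs; only a bounded window
            self.window = window
            self.seq = 0

        def apply(self, src, dst, amount):
            assert self.balances.get(src, 0) >= amount
            self.balances[src] = self.balances.get(src, 0) - amount
            self.balances[dst] = self.balances.get(dst, 0) + amount
            self.seq += 1
            self.recent.append((self.seq, (src, dst, amount)))
            while self.recent and self.recent[0][0] <= self.seq - self.window:
                self.recent.popleft()   # old transactions deleted, state survives

    ledger = MiniLedger(window=2)
    ledger.balances["alice"] = 5
    ledger.apply("alice", "bob", 3)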
Michael Raskin, Mark Simkin
      In this work, we present a new approach to constructing Oblivious RAM (ORAM).
Somewhat surprisingly, and despite the large amount of research interest that ORAM has received, all existing ORAM constructions are based on a handful of conceptually different approaches.
We believe it is of theoretical and practical interest to explore new ways to construct this primitive. 
Our first construction is conceptually extremely simple and has a worst-case bandwidth overhead of $\mathcal{O}\left(\sqrt{N} \frac{\log{N}}{\log{\log{N}}}\right)$, where $N$ is the number of data blocks.
Our main construction, Lookahead ORAM, has a worst-case bandwidth overhead of $\mathcal{O}\left(\sqrt{N}\right)$ and an additive storage overhead of $\sqrt{2N}$, which is the smallest concrete storage overhead among all existing ORAM constructions with sublinear worst-case bandwidth overhead. A small storage overhead is particularly beneficial in outsourced storage settings, where data owners have to pay their storage provider for the amount of storage they consume.
In addition to having a small storage overhead, Lookahead ORAM has perfect correctness, i.e. every operation on the ORAM data structure succeeds with probability 1 and, assuming the client stores some small stash of data locally, an online bandwidth overhead of $\mathcal{O}\left(1\right)$ without server-side computation. In comparison with prior work that has sublinear worst-case bandwidth overhead, Lookahead ORAM is asymptotically the most efficient ORAM with perfect correctness.
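To get a feel for these bounds, the snippet below evaluates the three expressions from the abstract for a few database sizes, taking logarithms base 2 and ignoring the constants hidden by the $\mathcal{O}$-notation; the resulting numbers are orders of magnitude, not measurements.

    import math

    for logN in (16, 20, 24):
        N = 2 ** logN
        first = math.sqrt(N) * math.log2(N) / math.log2(math.log2(N))  # first construction
        lookahead = math.sqrt(N)          # Lookahead ORAM worst-case bandwidth
        storage = math.sqrt(2 * N)        # additive storage overhead in blocks
        print(f"N=2^{logN}: first ~{first:.0f}, Lookahead ~{lookahead:.0f}, +{storage:.0f} storage")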
Patrick Longa
We discuss several post-quantum authenticated key exchange protocols based on the supersingular isogeny problem. Leveraging the design of the well-studied schemes by Krawczyk (2003), Boyd et al. (2008), Fujioka et al. (2013), Krawczyk and Wee (2015), and others, we show how to use the Supersingular Isogeny Diffie-Hellman (SIDH) and Supersingular Isogeny Key Encapsulation (SIKE) protocols as basic building blocks to construct efficient and flexible authenticated key exchange schemes featuring different functionalities and levels of security. 
This note is also intended to be a ``gentle'' introduction to supersingular isogeny based cryptography, and its most relevant constructions, for protocol designers and cryptographers.
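One generic pattern in this design space is to authenticate an ephemeral key encapsulation with long-term credentials. The Python sketch below is a hypothetical simplification, not any specific protocol from the note: the toy KEM and HMAC-based "signatures" are insecure stand-ins for SIKE and a real signature scheme, used only to make the message flow executable end to end.

    import hashlib, hmac, os

    def toy_kem_keygen():
        dk = os.urandom(32)
        return hashlib.sha256(dk).digest(), dk            # (ek, dk); NOT a real KEM
    def toy_kem_encaps(ek):
        r = os.urandom(32)
        return r, hashlib.sha256(ek + r).digest()         # (ct, shared secret)
    def toy_kem_decaps(dk, ct):
        return hashlib.sha256(hashlib.sha256(dk).digest() + ct).digest()

    def toy_sign(sk, msg):        return hmac.new(sk, msg, hashlib.sha256).digest()
    def toy_verify(sk, msg, sig): return hmac.compare_digest(sig, toy_sign(sk, msg))

    sk_A, sk_B = os.urandom(32), os.urandom(32)   # long-term keys (symmetric here)

    # A -> B: ephemeral encapsulation key, signed by A
    ek, dk = toy_kem_keygen()
    sig_A = toy_sign(sk_A, ek)

    # B: verify A, encapsulate against the ephemeral key, sign the transcript
    assert toy_verify(sk_A, ek, sig_A)
    ct, ss_B = toy_kem_encaps(ek)
    sig_B = toy_sign(sk_B, ek + ct)

    # A: verify B, decapsulate, derive the session key from the full transcript
    assert toy_verify(sk_B, ek + ct, sig_B)
    ss_A = toy_kem_decaps(dk, ct)
    assert ss_A == ss_B
    session_key = hashlib.sha256(ek + ct + ss_A).digest()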
Steven D. Galbraith
      We survey authenticated key exchange (AKE) in the context of supersingular isogeny Diffie-Hellman key exchange (SIDH). We discuss different approaches to achieve authenticated key exchange, and survey the literature. We explain some challenges that arise in the SIDH setting if one wants to do a ``Diffie-Hellman-like'' AKE, and present several candidate authenticated key exchange protocols suitable for SIDH. We also discuss some open problems.
          
  Ayesha Khalid, James Howe, Ciara Rafferty, Francesco Regazzoni, Maire O'Neill
      Lattice-based cryptography, one of the leading candidates for post-quantum security, relies heavily on discrete Gaussian samplers to provide the necessary uncertainty, obfuscating computations on secret information. For reconfigurable hardware, the cumulative distribution table (CDT) scheme has previously been shown to achieve the highest throughput and the smallest resource utilisation, easily outperforming other existing samplers. However, the CDT sampler does not scale well: for large parameters, the lookup tables required are far too large to be practically implemented. This research proposes a hierarchy of multiple smaller samplers, extending the Gaussian convolution lemma to compute optimal parameters, where the individual samplers require much smaller lookup tables. A large range of parameter sets, covering encryption, signatures, and key exchange, is evaluated. Hardware-optimised parameters are formulated and a practical implementation on a Xilinx Artix-7 FPGA device is realised. The proposed sampling designs demonstrate promising performance on reconfigurable hardware, even for large parameters that were otherwise thought infeasible.
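A minimal Python sketch of the two ingredients, with illustrative parameters rather than the paper's optimised ones: a CDT sampler does binary search over a precomputed cumulative table, and the convolution approach combines two samples from one small table into a sample of much larger width, so the large table is never built. A hardware design would instead use fixed-point probabilities, constant-time table accesses, and parameters derived from the convolution lemma's conditions.

    import bisect, math, random

    def build_cdt(sigma, tail=10):
        # cumulative table of the discrete Gaussian on [-tail*sigma, tail*sigma];
        # the table size grows linearly with sigma, hence the scaling problem
        bound = int(math.ceil(tail * sigma))
        xs = list(range(-bound, bound + 1))
        ps = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
        total = sum(ps)
        cdt, acc = [], 0.0
        for p in ps:
            acc += p / total
            cdt.append(acc)
        return xs, cdt

    def sample(xs, cdt):
        i = bisect.bisect_left(cdt, random.random())
        return xs[min(i, len(xs) - 1)]   # guard against float rounding at the tail

    xs, cdt = build_cdt(sigma=3.2)       # one small base sampler, small table

    # convolution: z1 + k*z2 is statistically close to a discrete Gaussian of
    # width about sigma*sqrt(1 + k**2), provided k satisfies the lemma's bounds
    k = 11
    z = sample(xs, cdt) + k * sample(xs, cdt)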
          
  Vienna, Austria, 20 August - 24 August 2018
      Event date: 20 August to 24 August 2018
Submission deadline: 10 May 2018
Notification: 20 May 2018
Amsterdam, Netherlands, 13 September 2018
      Event date: 13 September 2018
Submission deadline: 25 May 2018
Notification: 22 June 2018
12 March 2018
Daan Leermakers, Boris Skoric
      Quantum Key Recycling aims to re-use the keys employed in quantum encryption and quantum authentication schemes. We consider QKR protocols where classical information is embedded in qubit states. A partial security analysis for such protocols was done in [LS2018].
In the current paper we introduce a number of small protocol modifications and provide a security proof. Our proof is based on a computation of the statistical distance between the real quantum state of the system and a state in which the keys are completely secure. This is a non-asymptotic result. It also determines how much privacy amplification is needed as a function of the bit error rate. It turns out that less privacy amplification is needed than suggested by the min-entropy analysis in [LS2018].
          
  Seyyed Mahdi Sedaghat, Mohammad Hassan Ameri, Mahshid Delavar, Javad Mohajeri, Mohammad Reza Aref
      With the development of modern power systems, the Smart Grid (SG), an intelligent generation of electricity networks, has received tremendous attention. Fine-grained data sharing plays a vital role in efficiently managing data flow in the SG. As these data commonly contain sensitive information, the design of secure and efficient privacy-preserving schemes for such networks, which contain plenty of resource-constrained devices, is one of the most challenging issues. In this paper, we propose a secure Ciphertext-Policy Attribute-Based SignCryption (CP-ABSC) scheme which simultaneously provides the authenticity and privacy of the users by enforcing an arbitrary access control policy on encrypted data. Since the number of required pairings in the signcryption and designcryption algorithms is independent of the number of involved attributes, the computational overhead is reduced in comparison with the existing schemes in the literature. In addition, we formally prove that the unforgeability and indistinguishability of the proposed scheme are reducible to the well-known hardness assumption of the q-Bilinear Diffie-Hellman Exponent (q-BDHE) problem. Moreover, we show that embedding a Physical Unclonable Function (PUF) in each smart meter significantly reduces the storage overhead of the protocol and secures it against non-volatile memory attackers.
          
  Joachim Zahnentferner
      Cryptocurrencies are historically divided into two broad groups with respect to the style of transactions that they accept. In the account-based style, each address is seen as an account with a balance, and transactions are transfers of value from one account to another. In the UTXO-based style, transactions inductively spend outputs generated by previous transactions and create new unspent outputs, and there is no intrinsic notion of account associated with an address. Each style has advantages and disadvantages. This paper formally defines: the two styles; translations that allow one style to be simulated by the other; new transaction types that allow both styles of transactions to co-exist on the same ledger; and a new transaction type that combines features from both styles.
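As a quick illustration of the two styles (a toy Python sketch, not the paper's formal definitions or translations): an account-based ledger mutates balances in place, while a UTXO-based ledger consumes unspent outputs and creates new ones.

    class AccountLedger:
        def __init__(self):
            self.balances = {}                     # address -> balance

        def transfer(self, src, dst, amount):
            assert self.balances.get(src, 0) >= amount
            self.balances[src] = self.balances.get(src, 0) - amount
            self.balances[dst] = self.balances.get(dst, 0) + amount

    class UTXOLedger:
        def __init__(self):
            self.utxos = {}                        # (txid, index) -> (owner, amount)

        def apply(self, txid, inputs, outputs):
            # inputs reference unspent outputs; value in must equal value out
            assert sum(self.utxos[i][1] for i in inputs) == sum(a for _, a in outputs)
            for i in inputs:
                del self.utxos[i]                  # spent outputs disappear
            for idx, out in enumerate(outputs):
                self.utxos[(txid, idx)] = out      # new unspent outputs appear

    utxo = UTXOLedger()
    utxo.utxos[("genesis", 0)] = ("alice", 10)
    utxo.apply("tx1", [("genesis", 0)], [("bob", 4), ("alice", 6)])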
          
  1 January - 30 June 2019
      Event date: 1 January to 30 June 2019
Submission deadline: 31 March 2018
          
Bucharest, Romania, 8 November - 9 November 2018
      Event date: 8 November to 9 November 2018
Submission deadline: 17 September 2018
Notification: 16 October 2018
Seoul, Korea, 28 November - 30 November 2018
      Event date: 28 November to 30 November 2018
Submission deadline: 29 August 2018
Notification: 17 October 2018
9 March 2018
Dan Boneh, Saba Eskandarian, Ben Fisch
      Group signatures are used extensively for privacy in anonymous credentials schemes and in real-world systems for hardware enclave attestation. As such, there is a strong interest in making these schemes post-quantum secure. In this paper we initiate the study of group signature schemes built only from symmetric primitives, such as hash functions and PRFs, widely regarded as the safest primitives for post-quantum security. We present two constructions in the random oracle model. The first is a group signature scheme satisfying the EPID group signature syntax and security definitions needed for private hardware attestation used in Intel's SGX. The second achieves significantly shorter signatures for many applications, including the use case of remote hardware attestation. While our group signatures for attestation are longer than standard (non-group) post-quantum signatures, they are short enough for applications where the data being signed is large, such as analytics on large private data sets, or streaming media to a trusted display. We evaluate several instantiations of our schemes so that the costs and benefits of these constructions are clear. Along the way we also give improvements to the zero-knowledge Merkle inclusion proofs of Derler et al. (2017).
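For background on that last point, here is a plain (non-zero-knowledge) Merkle inclusion proof in Python; the paper's contribution concerns proving such statements in zero knowledge, which this sketch does not attempt.

    import hashlib

    def H(b): return hashlib.sha256(b).digest()

    def merkle_root(leaves):
        level = [H(l) for l in leaves]
        while len(level) > 1:
            if len(level) % 2: level.append(level[-1])   # duplicate an odd tail
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves, idx):
        level, path = [H(l) for l in leaves], []
        while len(level) > 1:
            if len(level) % 2: level.append(level[-1])
            path.append((level[idx ^ 1], idx & 1))       # (sibling, am-I-a-right-child)
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            idx //= 2
        return path

    def verify(root, leaf, path):
        h = H(leaf)
        for sib, is_right in path:
            h = H(sib + h) if is_right else H(h + sib)
        return h == root

    leaves = [b"a", b"b", b"c", b"d"]
    assert verify(merkle_root(leaves), b"c", merkle_proof(leaves, 2))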
          
  