International Association for Cryptologic Research

IACR News Central


18:11 [Pub][ePrint] Foundations of Reactive Garbling Schemes, by Jesper Buus Nielsen and Samuel Ranellucci

  Garbled circuits are a cryptographic technique which has been used, among other things, for the construction of two- and three-party secure computation, private function evaluation and secure outsourcing. Garbling schemes are a primitive which formalizes the syntax and security properties of garbled circuits. We define a generalization of garbling schemes called reactive garbling schemes. We consider functions and garbled functions taking multiple inputs and giving multiple outputs. Two garbled functions can be linked together: an encoded output of one garbled function can be transformed into an encoded input of the other garbled function without communication between the parties. Reactive garbling schemes also allow partial evaluation of garbled functions even when only some of the encoded inputs are provided. It is possible to further evaluate the linked garbled functions when more garbled inputs become available. It is also possible to later garble more functions and link them to the ongoing garbled evaluation.

We provide rigorous definitions for reactive garbling schemes and define a new notion of security for them, with both simulation-based and indistinguishability-based formulations. We also show that the simulation-based notion of security implies the indistinguishability-based notion. We present an instantiation of reactive garbling schemes, as well as an application of reactive garbling schemes to reactive two-party computation secure against a malicious adversary. We demonstrate how garbling schemes can be used to give abstract black-box descriptions and proofs of several advanced applications of garbled circuits in the literature, including MiniLEGO and Lindell's forge-and-lose technique.
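
As background for readers unfamiliar with the underlying primitive, here is a minimal sketch of a single garbled AND gate in Python, using a hash as the encryption function. It illustrates ordinary (non-reactive) garbling only; the zero-padding redundancy check and row shuffling stand in for the point-and-permute technique used in real schemes, and all names are illustrative.

```python
import hashlib
import os
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pad(la: bytes, lb: bytes) -> bytes:
    # Derive a 48-byte one-time pad from the pair of input labels.
    return hashlib.sha512(la + lb).digest()[:48]

def garble_and_gate():
    # Two random 32-byte labels per wire: index 0 encodes bit 0, index 1 encodes bit 1.
    labels = {w: (os.urandom(32), os.urandom(32)) for w in ("a", "b", "out")}
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            # Encrypt the correct output label (padded with 16 zero bytes as a
            # redundancy check) under the input labels for this row.
            plain = labels["out"][bit_a & bit_b] + b"\x00" * 16
            table.append(xor(plain, pad(labels["a"][bit_a], labels["b"][bit_b])))
    random.shuffle(table)  # hide which row corresponds to which input bits
    return labels, table

def evaluate(table, la: bytes, lb: bytes) -> bytes:
    # The evaluator holds one label per input wire and recovers one output
    # label, without learning the underlying bits.
    for row in table:
        cand = xor(row, pad(la, lb))
        if cand[32:] == b"\x00" * 16:
            return cand[:32]
    raise ValueError("no row decrypted")
```

A reactive garbling scheme additionally lets an output label of one such garbled function serve (after a local transformation) as an input label of another, which this toy sketch does not model.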

18:11 [Pub][ePrint] On the Complexity of Additively Homomorphic UC Commitments, by Tore Kasper Frederiksen and Thomas P. Jakobsen and Jesper Buus Nielsen and Roberto Trifiletti

  We present a new constant-round additively homomorphic commitment scheme with (amortized) computational and communication complexity linear in the size of the string committed to. Our scheme is based on the non-homomorphic commitment scheme of Cascudo et al. presented at PKC 2015. However, we manage to add the additively homomorphic property while at the same time reducing the constants. In fact, when opening a large enough batch of commitments we achieve an amortized communication complexity converging to the length of the message committed to, i.e., we achieve rate $1$, as does the commitment protocol by Garay et al. from Eurocrypt 2014. As a main technical improvement over the scheme mentioned above, and other schemes based on using error-correcting codes for UC commitment, we develop a new technique which allows basing the extraction property on erasure decoding as opposed to error correction. This allows us to use a code with a significantly smaller minimum distance, and even codes without efficient decoding.

Our scheme relies only on standard assumptions. Specifically, we require a pseudorandom number generator, a linear error-correcting code and an ideal oblivious-transfer functionality. Based on this we prove our scheme secure in the Universal Composability (UC) framework against a static and malicious adversary corrupting any number of parties.

On a practical note, our scheme improves significantly on the non-homomorphic scheme of Cascudo et al. Based on their observations regarding the efficiency of using linear error-correcting codes for commitments, we conjecture that our commitment scheme might in practice be more efficient than all existing constructions of UC commitment, even non-homomorphic constructions and even constructions in the random-oracle model. In particular, the amortized price of computing one of our commitments is less than that of evaluating a hash function once.
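
The additively homomorphic property itself is easy to demonstrate with a different, classic construction: a toy Pedersen commitment over a small prime-order group. The parameters below are far too small for real use, the discrete log between the generators is known (so it is not hiding against the committer), and this is not the code-based scheme of the paper; it only shows what "product of commitments opens to the sum of messages" means.

```python
# Toy Pedersen commitment: Com(m, r) = g^m * h^r mod p, additively homomorphic:
# Com(m1, r1) * Com(m2, r2) = Com(m1 + m2, r1 + r2).
p = 2039          # safe prime, p = 2q + 1
q = 1019          # order of the subgroup of squares mod p
g, h = 4, 9       # squares mod p, hence both of order q (toy choice only)

def commit(m: int, r: int) -> int:
    # Message m and randomness r live in Z_q; the commitment lives in Z_p^*.
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 12, 34
m2, r2 = 56, 78
lhs = (commit(m1, r1) * commit(m2, r2)) % p
rhs = commit((m1 + m2) % q, (r1 + r2) % q)
```

Opening the product `lhs` with message `m1 + m2` and randomness `r1 + r2` succeeds, which is exactly the homomorphism exploited when batching openings.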

18:11 [Pub][ePrint] Cliptography: Clipping the Power of Kleptographic Attacks, by Alexander Russell and Qiang Tang and Moti Yung and Hong-Sheng Zhou

  Kleptography, originally introduced by Young and Yung [Crypto '96], studies how to steal information securely and subliminally from cryptosystems. Secure cryptosystems can be broken if they are maliciously implemented, since the adversary may have embedded backdoors in the implementation. Although kleptographic attacks were investigated about two decades ago, for too long the possibility of such attacks was dismissed and viewed only as a far-fetched theoretical concept. This changed dramatically when real-world examples were recently revealed by Edward Snowden, demonstrating that such deliberate attacks (directly inspired by the original work) exist and have probably been used for massive surveillance. In light of such possible failures of basic protective technology, the security community has started to seriously re-investigate this important issue: one notable example is the work of Bellare, Paterson, and Rogaway [Crypto '14], which initiated the formal study of attacks on symmetric-key encryption algorithms.

Motivated by the original examples of subverting key generation algorithms in the kleptography papers by Young and Yung [Crypto '96, Eurocrypt '97], we initiate the study of cryptography in the setting where all algorithms are subject to kleptographic attacks; we call it cliptography. As a first step, we formally study the fundamental primitives of one-way functions and trapdoor one-way functions in this complete subversion model. More interestingly, we investigate general immunization strategies to clip the power of kleptographic subversions; concretely, we propose a general framework for sanitizing the (trapdoor) one-way function generation algorithm by hashing the function index, and prove that this procedure indeed destroys the connection between a subverted function generation procedure and any possible backdoor. Along the way, we propose a split-program model for practical deployment.

We then examine the applications of (trapdoor) one-way functions secure in the complete subversion model in two ways. First, we consider building "higher-level" primitives via black-box reductions. In particular, we consider how to use our trapdoor one-way function to defend against key generation sabotage, and showcase a digital signature scheme that preserves existential unforgeability when all algorithms (including key generation, which was not previously considered to be under attack) are subject to kleptographic attacks. We also demonstrate that the classic Blum-Micali pseudorandom generator (PRG), instantiated with our "unforgeable" one-way function, yields a backdoor-free PRG. Second, we generalize our immunizing technique for one-way functions and propose a new public immunization strategy to randomize the public parameters of a (backdoored) PRG. Since the previous result by Dodis, Ganesh, Golovnev, Juels, and Ristenpart [Eurocrypt '15] requires an honestly generated random key, the construction of a secure PRG in the complete subversion model was also open until our paper.
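
The hashing-based immunization can be caricatured in a few lines: users never evaluate under the raw, possibly subverted index, only under its public hash, so a malicious generator cannot steer the final index toward a backdoored value. Everything here (the keyed one-way function, the domain-separation tag) is an illustrative assumption, not the paper's actual construction.

```python
import hashlib

def sanitize_index(index: bytes) -> bytes:
    # Public immunization step: hash the (possibly subverted) index so that any
    # correlation between the published index and an embedded backdoor is broken.
    return hashlib.sha256(b"immunize-v1" + index).digest()

def keyed_owf(index: bytes, x: bytes) -> bytes:
    # Toy keyed one-way function f_index(x); stands in for the generated OWF.
    return hashlib.sha256(index + x).digest()

def evaluate(subverted_index: bytes, x: bytes) -> bytes:
    # Evaluation always goes through the sanitized index.
    return keyed_owf(sanitize_index(subverted_index), x)
```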

18:11 [Pub][ePrint] Novel algorithms and hardware architectures for Montgomery Multiplication over GF(p), by Miguel Morales Sandoval and Arturo Diaz Perez

  This report describes the design and FPGA implementation results of a scalable hardware architecture for computing modular multiplication in prime fields GF($p$), based on the Montgomery multiplication (MM) algorithm. Starting from an existing digit-serial version of the MM algorithm, a novel digit-digit based MM algorithm is derived, and two hardware architectures that compute that algorithm are described. In the proposed approach, the input operands (multiplicand, multiplier and modulus) are represented using radix $\beta = 2^k$. Operands of arbitrary size can be multiplied with modular reduction using almost the same hardware, since the multiplier's kernel module that performs the modular multiplication depends only on $k$. The novel hardware architectures proposed in this paper were verified by modeling them in VHDL and implementing them in the Xilinx Spartan and Virtex5 FPGAs. Design trade-offs are analyzed considering different operand sizes commonly used in cryptography and different values for $k$. The proposed MM designs are well suited to implementation in modern FPGAs, making use of the available dedicated multiplier and memory blocks, drastically reducing the use of the FPGA's standard logic while keeping an acceptable performance compared with other implementation approaches. In the Virtex5 implementation, the proposed MM multiplier reaches a throughput of 242 Mbps using only 219 FPGA slices, achieving a 1024-bit modular multiplication in 4.21 $\mu$s.
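
For readers who want the arithmetic flavour of the algorithm, here is a sketch of radix-$2^k$ digit-serial Montgomery multiplication in Python (the classic word-serial formulation, not the paper's digit-digit variant or its hardware mapping).

```python
def montgomery_multiply(a: int, b: int, p: int, k: int, n: int) -> int:
    # Computes a * b * beta^(-n) mod p for beta = 2^k, odd modulus p < beta^n,
    # and inputs a, b < p, scanning the multiplier a one k-bit digit at a time.
    beta = 1 << k
    mask = beta - 1
    p_prime = (-pow(p, -1, beta)) % beta   # -p^(-1) mod beta (precomputed once)
    t = 0
    for i in range(n):
        a_i = (a >> (k * i)) & mask        # i-th radix-beta digit of a
        t += a_i * b
        m = ((t & mask) * p_prime) & mask  # choose m so t + m*p is divisible by beta
        t = (t + m * p) >> k               # exact division by beta
    return t - p if t >= p else t          # final result is < 2p, one subtraction
```

To get a plain $a \cdot b \bmod p$ one works in the Montgomery domain, first multiplying the operands by $\beta^n \bmod p$; hardware implementations keep operands in that domain across many multiplications to amortize the conversion.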

18:11 [Pub][ePrint] On the Security of a Self-healing Group Key Distribution Scheme, by Yandong Zheng, Hua Guo

  Recently, in the journal Security and Communication Networks (5(12):1363-1374, DOI: 10.1002/sec.429), Wang et al. proposed a group key distribution scheme with the self-healing property for resource-constrained wireless networks. They claimed that their key distribution scheme satisfies forward security and backward security, and can resist collusion attacks. Unfortunately, we found some security flaws in their scheme. In this paper, we present a method to attack this scheme. The attack illustrates that the scheme does not satisfy forward security, which also directly breaks its collusion resistance.

18:11 [Pub][ePrint] Chosen IV Cryptanalysis on Reduced Round ChaCha and Salsa, by Subhamoy Maitra

  Recently, ChaCha20 (the stream cipher ChaCha with 20 rounds) has been in the process of standardization, and thus it attracts serious interest in cryptanalysis. The most significant effort to analyse Salsa and ChaCha was presented by Aumasson et al. (FSE 2008), and since then only minor improvements have been achieved. In this paper, we first revisit the work of Aumasson et al. to provide clearer insight into the existing attack ($2^{248}$ complexity for ChaCha7, i.e., 7 rounds) and show certain improvements (complexity around $2^{243}$) obtained by exploiting additional Probabilistic Neutral Bits. More importantly, we describe a novel idea that explores the proper choice of IVs corresponding to the keys, for which the complexity can be improved further ($2^{239}$). The choice of IVs corresponding to the keys is the prime observation of this work. We systematically show how a single difference propagates after one round and how the differences can be reduced with proper choices of IVs. For Salsa too (Salsa20/8, i.e., 8 rounds), we obtain an improvement in complexity, reducing it to $2^{245.5}$ from the $2^{247.2}$ reported by Aumasson et al.
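
The diffusion being analysed happens inside the ChaCha quarter-round; a minimal Python version makes it easy to trace how a single-bit input difference spreads after one application. The input/output words below are the quarter-round example from RFC 8439.

```python
def rotl32(x: int, r: int) -> int:
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def quarter_round(a: int, b: int, c: int, d: int):
    # One ChaCha quarter-round on four 32-bit words.
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Flip one bit of d and count how many output bits differ after one quarter-round.
x = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
y = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567 ^ 1)
diff_bits = sum(bin(u ^ v).count("1") for u, v in zip(x, y))
```

Choosing IVs so that such differences cancel early, rather than spread, is the kind of control the paper's chosen-IV strategy exercises.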

18:11 [Pub][ePrint] FURISC: FHE Encrypted URISC Design, by Ayantika Chatterjee and Indranil Sengupta

  This paper proposes the design of a Fully Homomorphic Ultimate RISC (FURISC) based processor. The FURISC architecture supports arbitrary operations on data encrypted with Fully Homomorphic Encryption (FHE), and allows the execution of encrypted programs stored in processors with encrypted memory addresses. The architecture is designed around fully homomorphic single-RISC instructions like Subtract Branch if Negative (SBN) and MOVE. This paper explains how the use of FHE for designing the ultimate RISC processor is better in terms of security than previously proposed somewhat homomorphic encryption (SHE) based processors: the absence of randomization in SHE can lead to Chosen Plaintext Attacks (CPA), which is alleviated by the use of the FHE-based ultimate RISC instruction. Furthermore, FURISC helps to develop fully homomorphic applications by tackling the termination problem, a major obstacle for FHE processor design. The paper compares the MOVE-based FHE RISC processor with the SBN alternative, and shows that the latter is more efficient in terms of the number of instructions and the time required to execute a program. Finally, an SBN-based FURISC processor simulator has been designed to demonstrate that various algorithms can indeed be executed on data encrypted with FHE, simultaneously providing a solution to the termination problem for FHE-based processors and the CPA insecurity of SHE processors.
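
The ultimate-RISC idea is easiest to see in the clear: below is a toy plaintext interpreter for a single SBN (subtract, branch if negative) instruction. The instruction encoding and halt convention are invented for this sketch; FURISC of course executes the same loop obliviously over FHE ciphertexts.

```python
def run_sbn(mem, pc=0, max_steps=1000):
    # One instruction = three consecutive cells: a, b, target.
    # Semantics: mem[a] -= mem[b]; jump to target if the result is negative,
    # otherwise fall through. A negative 'a' operand halts (sketch convention).
    for _ in range(max_steps):
        a, b, target = mem[pc], mem[pc + 1], mem[pc + 2]
        if a < 0:
            return mem
        mem[a] -= mem[b]
        pc = target if mem[a] < 0 else pc + 3
    raise RuntimeError("step budget exhausted")

# Program: one SBN instruction computing mem[6] = 7 - 3, then halt.
memory = [6, 7, 3,    # SBN: mem[6] -= mem[7]; branch/fall through to pc = 3
          -1, 0, 0,   # halt marker
          7, 3]       # data: mem[6] = 7, mem[7] = 3
```

SBN is Turing-complete on its own, which is why a single homomorphically evaluated instruction suffices for arbitrary encrypted programs.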

18:11 [Pub][ePrint] Four Neighbourhood Cellular Automata as Better Cryptographic Primitives, by Jimmy Jose and Dipanwita RoyChowdhury

  Three-neighbourhood Cellular Automata (CA) are widely studied and accepted as suitable cryptographic primitives. Rule 30, a 3-neighbourhood CA rule, was proposed by Wolfram as an ideal candidate for a cryptographic primitive. However, rule 30 was shown to be weak against the Meier-Staffelbach attack. Cryptographic properties like diffusion and randomness increase with the neighbourhood radius, which opens the avenue of exploring the cryptographic properties of 4-neighbourhood CA. This work explores whether 4-neighbourhood CA can be better cryptographic primitives. We construct a class of cryptographically suitable 4-neighbourhood nonlinear CA rules that resemble rule 30. One 4-neighbourhood nonlinear CA from this class is shown to be resistant against the Meier-Staffelbach attack on rule 30, justifying the applicability of 4-neighbourhood CA as better cryptographic primitives.
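
For concreteness, here is a uniform one-dimensional CA step in Python, with rule 30 (next bit = left XOR (centre OR right)) and a hypothetical 4-neighbourhood analogue that adds a second right neighbour. The 4-neighbourhood rule is illustrative only, not the class constructed in the paper.

```python
def ca_step(state, rule):
    # One synchronous update of a circular 1-D cellular automaton.
    n = len(state)
    return [rule(state, i, n) for i in range(n)]

def rule30(s, i, n):
    # 3-neighbourhood: s[i-1] XOR (s[i] OR s[i+1]).
    return s[(i - 1) % n] ^ (s[i] | s[(i + 1) % n])

def rule30_like_4n(s, i, n):
    # Hypothetical 4-neighbourhood variant with one extra right neighbour.
    return s[(i - 1) % n] ^ (s[i] | s[(i + 1) % n] | s[(i + 2) % n])
```

Starting from a single 1, the 4-neighbourhood rule already activates one more cell per step, a crude view of the faster diffusion the paper attributes to the larger radius.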

18:11 [Pub][ePrint] Differential Privacy in distribution and instance-based noise mechanisms, by Sébastien Canard and Baptiste Olivier

  In this paper, we introduce the notion of $(\epsilon,\delta)$-differential privacy in distribution, a strong version of the existing $(\epsilon,\delta)$-differential privacy, used to mathematically ensure that the private data of an individual are protected when embedded into a queried database. In practice, this property is obtained by adding appropriate noise. Our new notion simplifies proofs of $(\epsilon,\delta)$-privacy for mechanisms adding noise with a continuous distribution. As a first example, we give a simple proof that the Gaussian mechanism is $(\epsilon,\delta)$-differentially private in distribution.

Using differential privacy in distribution, we then give simple conditions for an instance-based noise mechanism to be $(\epsilon,\delta)$-differentially private. We then exploit these conditions to design a new $(\epsilon,\delta)$-differentially private instance-based noise algorithm. Compared to existing ones, our algorithm has better accuracy when used to answer a query in a differentially private manner.

In particular, our algorithm does not require the computation of the so-called smooth sensitivity, usually used in instance-based noise algorithms, which was proved to be NP-hard to compute in some cases, namely for statistics queries on some graphs. Our algorithm handles such situations, in particular some cases for which no instance-based noise mechanism was known to perform well.
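
As a concrete instance of the Gaussian mechanism mentioned above, the standard noise calibration ($\sigma = \sqrt{2\ln(1.25/\delta)}\,\Delta/\epsilon$, valid for $0 < \epsilon < 1$, with $\Delta$ the query's sensitivity) can be sketched as:

```python
import math
import random

def gaussian_mechanism(true_answer: float, sensitivity: float,
                       eps: float, delta: float) -> float:
    # Standard calibration of Gaussian noise for (eps, delta)-differential
    # privacy (requires 0 < eps < 1): sigma = sqrt(2 ln(1.25/delta)) * Delta / eps.
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / eps
    return true_answer + random.gauss(0.0, sigma)
```

Instance-based mechanisms differ in that the noise scale depends on the database at hand rather than on the worst-case sensitivity, which is exactly where smooth-sensitivity computations (avoided by the paper's algorithm) usually enter.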

18:11 [Pub][ePrint] Demystifying incentives in the consensus computer, by Loi Luu and Jason Teutsch and Raghav Kulkarni and Prateek Saxena

  Bitcoin and similar cryptocurrencies maintain a massive network of computational devices that ensures the robustness and correctness of the computation done in the network. Cryptocurrency protocols, including Bitcoin and the more recent Ethereum system, offer an additional feature that allows currency users to specify a "script" or contract which is executed collectively (via a consensus protocol) by the network. This feature can be used for many new applications of cryptocurrencies beyond simple cash transactions. Indeed, several efforts to develop decentralized applications are underway, and recent experimental efforts have proposed porting a Linux OS to such a decentralized computational platform.

In this work, we study the security of computations on a cryptocurrency network. We explain why the correctness of such computations is susceptible to attacks that both waste network resources of honest miners and lead to incorrect results. The essence of our argument stems from a deeper understanding of the incentive incompatibility of maintaining a correct blockchain. We explain this via an ill-fated choice called the verifier's dilemma, which suggests that rational miners are well incentivized to accept an unvalidated blockchain as correct, especially in next-generation cryptocurrencies such as Ethereum that are Turing-complete. To characterize which classes of computation can be computed securely, we formulate a model of computation we call consensus verifiability. We propose a solution that reduces the adversary's advantage substantially, thereby achieving near-ideal incentive compatibility for executing and verifying computation in our consensus-verifiability model. We further propose two different but complementary approaches to implement our solution in real cryptocurrency networks like Ethereum, and we show the feasibility of these approaches for a set of practical outsourced computation tasks as case studies.

18:11 [Pub][ePrint] Point-Function Obfuscation: A Framework and Generic Constructions, by Mihir Bellare and Igors Stepanovs

  We unify the many prior variants of point-function obfuscation via a definitional framework in which security is parameterized by a class of algorithms we call target generators, with different notions corresponding to different choices of this class. This leads to an elegant question, namely whether it is possible to provide a generic construction, meaning one that takes an arbitrary class of target generators and returns a point-function obfuscator secure for it. We answer this in the affirmative with three generic constructions, the first based on indistinguishability obfuscation, the second on deterministic public-key encryption and the third on universal computational extractors. By exploiting known constructions of the primitives assumed, we obtain a host of new point-function obfuscators, including many under standard assumptions.
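
A folklore example of the object being generalized: in the random-oracle model, a point function can be obfuscated by publishing a salted hash of the target and comparing digests on evaluation. This illustrates the notion only, not any of the paper's three generic constructions.

```python
import hashlib
import hmac
import os

def obfuscate_point(target: bytes):
    # "Obfuscation" of the point function f_target(x) = 1 iff x == target:
    # keep only (salt, H(salt || target)) and compare digests on evaluation,
    # so the stored program reveals nothing useful about target beyond
    # oracle access to the point function.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + target).digest()
    def check(x: bytes) -> bool:
        return hmac.compare_digest(hashlib.sha256(salt + x).digest(), digest)
    return check
```

In the framework of the paper, what security this construction offers depends on the class of target generators: it says nothing, for instance, about targets drawn from low-entropy or adversarially correlated distributions.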