International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web pages here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, the ePrint RSS feed, the ePrint web pages, and the event calendar (iCal).

2015-01-02
22:17 [Pub][ePrint] On the Cryptographic Hardness of Finding a Nash Equilibrium, by Nir Bitansky and Omer Paneth and Alon Rosen

  We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete.

Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.

Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.



13:17 [Pub][ePrint] How to Generate Repeatable Keys Using Physical Unclonable Functions: Correcting PUF Errors with Iteratively Broadening and Prioritized Search, by Nathan E. Price and Alan T. Sherman

  We present an algorithm for repeatably generating keys using entropy from a Physical Unclonable Function (PUF). PUFs are logically identical physical constructs with Challenge-Response Pairs (CRPs) unique to each device. Applications include initialization of server keys and encryption of FPGA configuration bitstreams. One problem with PUFs is response errors. Our algorithm corrects PUF errors that inhibit key repeatability.

Our approach uses a PUF to generate an error-free PUF value in three steps. First, we repeatedly sample the PUF to determine the most likely value. Second, we apply an iteratively-broadening search to search up to some number of bit errors (in our experiments we use two). Third, we apply exhaustive search until the correct value is found or failure is declared. The searches are prioritized by the known bit error rates in decreasing magnitude. We assume the application includes a test for the correct value (e.g., some matching plaintext-ciphertext pairs).
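
The following is a minimal Python sketch of these three steps, not the authors' implementation: sample_puf() and is_correct() are hypothetical stand-ins for the PUF readout and the application-provided correctness test, the per-bit error rates are estimated from the samples themselves, and the exhaustive phase is folded into the same broadening loop.

    from itertools import combinations

    def recover_puf_value(sample_puf, is_correct, n_samples=351, max_flips=None):
        # Step 1: majority vote over repeated samples gives the most likely value.
        samples = [sample_puf() for _ in range(n_samples)]
        n_bits = len(samples[0])
        ones = [sum(s[i] for s in samples) for i in range(n_bits)]
        guess = [1 if 2 * ones[i] >= n_samples else 0 for i in range(n_bits)]
        if is_correct(guess):
            return guess
        # Bits that disagreed most often with the majority are the most error-prone,
        # so they are flipped first (prioritization by decreasing bit error rate).
        error_rate = [min(ones[i], n_samples - ones[i]) / n_samples for i in range(n_bits)]
        order = sorted(range(n_bits), key=lambda i: error_rate[i], reverse=True)
        # Steps 2 and 3: iteratively broadening search over 1, 2, ... flipped bits.
        # The paper stops the broadening phase at a small bound (e.g. 2) and then goes
        # exhaustive; in this sketch both phases are one loop that runs until success.
        limit = n_bits if max_flips is None else max_flips
        for n_flips in range(1, limit + 1):
            for positions in combinations(order, n_flips):
                candidate = list(guess)
                for p in positions:
                    candidate[p] ^= 1
                if is_correct(candidate):
                    return candidate
        return None  # failure declared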

Previous algorithms often omit noisy PUF bits or use error-correcting codes and helper data. Our algorithm can use all PUF bits regardless of noise. Our approach is simple and, for appropriate parameter choices, fast. Unlike previous approaches using error-correcting codes, when used for public-key cryptography our method requires storing only the public key and no other helper data in non-volatile storage.

We implemented a latch-based PUF on FPGAs and measured PUF characteristics to analyze the effectiveness of the algorithm. Tests for a 1024-bit PUF show that 351 samples reduce the probability of errors to less than 10^-6. The iterative broadening and exhaustive searches further reduce failure rates.



13:17 [Pub][ePrint] Cryptanalysis of a New Additive Homomorphic Encryption based on the co-ACD Problem, by Moon Sung Lee

  In CCS'14, Cheon et al. proposed a new additive homomorphic encryption scheme which is claimed to be the most efficient among the additive homomorphic encryption schemes. Its security is proved based on the hardness of a new problem, the (decisional) co-approximate common divisor problem.

In this paper, we cryptanalyze the scheme and investigate the hardness of the aforementioned problem. Our first result shows that Cheon et al.'s scheme is insecure for the range of parameters considered in the original paper \cite{CLSCCS14}. Experiments show that the message can be recovered in seconds for the proposed parameters. We also analyze the conditions on the parameters needed to thwart the proposed attack.

As a second result, we show that the co-approximate common divisor problem is easy for a similar range of parameters, provided that the modulus is known and is a product of two primes. In our estimate, the parameters should be enlarged considerably to thwart the proposed attack.

Apart from the scheme, the co-approximate common divisor problem itself is interestingly related to the well-known approximate common divisor problem, and further investigation of this relationship would be desirable.



13:17 [Pub][ePrint] XPIRe: Private Information Retrieval for Everyone, by Carlos Aguilar-Melchor and Joris Barrier and Laurent Fousse and Marc-Olivier Killijian

  A single-database computationally-Private Information Retrieval (hereafter PIR) scheme is a protocol in which a user retrieves a record from a database while hiding, from the database administrators, which record was retrieved. Classic protocols require the database server to execute an algorithm over the entire database content at very low speeds, which significantly impairs their usability.
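
As a toy illustration of that classic model (not XPIRe's protocol, API, or parameters), the following Python sketch builds single-server computational PIR from a tiny, insecure additively homomorphic Paillier instance: the client sends an encrypted selection vector, and the server must touch every record to form the reply, which is exactly the linear server cost discussed above.

    import random
    from math import gcd

    # Toy Paillier with tiny, insecure parameters (illustration only).
    p, q = 1789, 1867
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1

    def enc(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        L = lambda x: (x - 1) // n
        mu = pow(L(pow(g, lam, n2)), -1, n)        # modular inverse; needs Python 3.8+
        return (L(pow(c, lam, n2)) * mu) % n

    # Database of small integer records; the client wants index 2 without revealing it.
    db = [11, 22, 33, 44]
    target = 2
    query = [enc(1 if i == target else 0) for i in range(len(db))]  # encrypted selection vector

    # Server side: every record is processed; Enc(e_i)^db[i] aggregates to Enc(db[target]).
    reply = 1
    for ci, record in zip(query, db):
        reply = (reply * pow(ci, record, n2)) % n2

    assert dec(reply) == db[target]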

In [1], given certain assumptions, realistic at the time, Sion and Carbunar showed that PIR schemes were not practical and most likely would never be. To this day, this conclusion is widely accepted by researchers and practitioners. Using the paradigm shift introduced by lattice-based cryptography, we show that the conclusion of Sion and Carbunar is not valid anymore: PIR is of practical value. This is achieved without compromising security, using simple standard encryption schemes, and very conservative parameter choices.

In order to prove this, we provide a fast fully-usable PIR library and do a performance analysis, illustrated by use-cases, highlighting that this library is practical in a large span of situations. The PIR library allows a server to process its data at a throughput ranging from 1 Gbps on a single core of a commodity CPU to almost 10 Gbps on a high-end processor using its multi-core capabilities.

After replying to a first query (or through pre-computation), there is a 3x to 5x speedup for subsequent queries (if the database content is unchanged for those queries). The performance analysis shows, for example, that it is possible to privately receive an HD movie from a Netflix-like database (with 35K movies) with enough throughput to watch it in real time, or to build a sniffer over a Gbit link with an obfuscated code that hides what it is sniffing.

This library is modular, allowing alternative homomorphic encryption modules to be plugged in. We provide a slow but compact crypto module based on number-theoretic Paillier encryption, and a fast crypto module with Ring-LWE based encryption (based on standard lattice-based problems and conservative parameters). The library has an auto-optimizer that chooses the best protocol parameters (recursion level, aggregation) and crypto parameters (if the crypto module implements the necessary API) for a given setting.

This greatly increases the usability for non-specialists. Given the complexity of parameter settings in lattice-based homomorphic encryption and the fact that PIR adds a second layer of choices that interact with crypto parameter settings, we believe this auto-optimizer will also be useful to specialists.



13:17 [Pub][ePrint] Lattices with Symmetry, by H. W. Lenstra, Jr. and A. Silverberg

  For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we can construct a provably deterministic polynomial-time algorithm to accomplish this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction.



13:17 [Pub][ePrint] Simple Lattice Trapdoor Sampling from a Broad Class of Distributions, by Vadim Lyubashevsky and Daniel Wichs

  At the center of many lattice-based constructions is an algorithm that samples a short vector $s$, satisfying $[A|AR-HG]s=t \bmod q$ where $A, AR, H, G$ are public matrices and $R$ is a trapdoor. Although the algorithm crucially relies on the knowledge of the trapdoor $R$ to perform this sampling efficiently, the distribution it outputs should be independent of $R$ given the public values. We present a new, simple algorithm for performing this task. The main novelty of our sampler is that the distribution of $s$ does not need to be Gaussian, whereas all previous works crucially used the properties of the Gaussian distribution to produce such an $s$. The advantage of using a non-Gaussian distribution is that we are able to avoid the high-precision arithmetic that is inherent in Gaussian sampling over arbitrary lattices. So while the norm of our output vector $s$ is on the order of $\sqrt{n}$ to $n$ times larger (the representation length, though, is only a constant factor larger) than in the samplers of Gentry, Peikert, Vaikuntanathan (STOC 2008) and Micciancio, Peikert (EUROCRYPT 2012), the sampling itself can be done very efficiently. This provides a useful time/output trade-off for devices with constrained computing power. In addition, we believe that the conceptual simplicity and generality of our algorithm may lead to it finding other applications.
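
As a hedged aside on why $R$ acts as a trapdoor for a matrix of this shape (this is the standard gadget-trapdoor mechanism, not a description of the paper's specific non-Gaussian sampler), write $s=(s_1,s_2)$; the constraint decomposes as

$[A \mid AR-HG]\binom{s_1}{s_2} = A(s_1+Rs_2) - HGs_2 \equiv t \pmod q.$

For invertible $H$, one can pick a short $w$, find a short $z$ with $Gz \equiv H^{-1}(t - Aw) \pmod q$ (easy, since $G$ is the structured gadget matrix), and set $s=(w+Rz,\,-z)$; this is a valid solution and is short whenever $R$ has small entries. The paper's contribution concerns which distributions such an $s$ can be given, which this sketch does not address.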



13:17 [Pub][ePrint] Security Weaknesses of an "Anonymous Attribute Based Encryption" appeared in ASIACCS'13, by Payal Chaudhari and Maniklal Das and Anish Mathuria

  Attribute-based Encryption (ABE) has found enormous application in fine-grained access control of shared data, particularly in the public cloud. In 2013, Zhang et al. proposed a scheme called match-then-decrypt [1], where before running the decryption algorithm the user is required to perform a match operation with attribute(s), which provides the information needed to identify whether a particular user is the intended recipient of the ciphertext. As in [1], the match-then-decrypt operation saves computational cost at the receiver and the scheme supports receivers' anonymity. In this paper, we show that Zhang et al.'s scheme [1] does not support receivers' anonymity. Any legitimate user or an adversary can successfully check whether an attribute is required in the matching phase and, in turn, can reveal the receiver's identity from the attribute.





2015-01-01
16:17 [Pub][ePrint] Topology-Hiding Computation, by Tal Moran and Ilan Orlov and Silas Richelson

  Secure Multi-party Computation (MPC) is one of the foundational achievements of modern cryptography, allowing multiple distrusting parties to jointly compute a function of their inputs, while revealing nothing but the output of the function. Following the seminal works of Yao; Goldreich, Micali, and Wigderson; and Ben-Or, Goldwasser, and Wigderson, the study of MPC has expanded to consider a wide variety of questions, including variants in the attack model, underlying assumptions, complexity and composability of the resulting protocols.

One question that appears to have received very little attention, however, is that of MPC over an underlying communication network whose structure is, in itself, sensitive information. This question, in addition to being of pure theoretical interest, arises naturally in many contexts: designing privacy-preserving social networks, private peer-to-peer computations, vehicle-to-vehicle networks and the "internet of things" are some of the examples.

In this paper, we initiate the study of "topology-hiding computation" in the computational setting. We give formal definitions in both simulation-based and indistinguishability-based flavors. We show that, even for fail-stop adversaries, there are some strong impossibility results. Despite this, we show that protocols for topology-hiding computation can be constructed in the semi-honest and fail-stop models, if we somewhat restrict the set of nodes the adversary may corrupt.





2014-12-31
19:17 [Pub][ePrint] Algebraic Algorithms for LWE Problems, by Martin Albrecht and Carlos Cid and Jean-Charles Faugère and Robert Fitzpatrick and Ludovic Perret

  We analyse the complexity of algebraic algorithms for solving systems of linear equations with noise. Such systems arise naturally in the theory of error-correcting codes as well as in computational learning theory. More recently, linear systems with noise have found application in cryptography. The Learning with Errors (LWE) problem has proven to be a rich and versatile source of innovative cryptosystems, such as fully homomorphic encryption schemes. Despite the popularity of the LWE problem, the complexity of algorithms for solving it is not very well understood, particularly when variants of the original problem are considered. Here, we focus on and generalise a particular method for solving these systems, due to Arora & Ge, which reduces the problem to non-linear but noise-free system solving. Firstly, we provide a refined complexity analysis for the original Arora-Ge algorithm for LWE. Secondly, we study the complexity of applying algorithms for computing Gröbner bases, a fundamental tool in computational commutative algebra, to solving Arora-Ge-style systems of non-linear equations. We show positive and negative results. On the one hand, we show that the use of Gröbner bases yields an exponential speed-up over the basic Arora-Ge approach. On the other hand, we give a negative answer to the natural question of whether the use of such techniques can yield a subexponential algorithm for the LWE problem. Under a mild algebraic assumption, we show that it is highly unlikely that such an improvement exists.
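
For context on the Arora-Ge modelling mentioned above: if a sample $(\mathbf{a},b)$ satisfies $b=\langle\mathbf{a},\mathbf{s}\rangle+e \bmod q$ and the error $e$ is known to lie in a small set $E$, then the noise-free polynomial identity

$\prod_{e'\in E}\bigl(b-\langle\mathbf{a},\mathbf{s}\rangle-e'\bigr)\equiv 0 \pmod q$

holds, giving one equation of degree $|E|$ in the secret $\mathbf{s}$ per sample; for BinaryError-LWE, $E=\{0,1\}$ and every sample yields a quadratic equation. (This only restates the generic modelling; the degree bounds and algebraic assumptions discussed here are the paper's.)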

We also consider a variant of LWE known as BinaryError-LWE, introduced recently by Micciancio and Peikert. By combining Gröbner basis algorithms with the Arora-Ge modelling, we show, under a natural algebraic assumption, that BinaryError-LWE can be solved in subexponential time as soon as the number of samples is quasi-linear, e.g. $m=O(n \log \log n)$. We also derive precise complexity bounds for BinaryError-LWE with $m=O(n)$, showing that this new approach yields better results than the best currently-known generic (exact) CVP solver as soon as $m/n \geq 6.6$. More generally, our results provide a good picture of the hardness degradation of BinaryError-LWE for a number of samples ranging from $m=n(1+\Omega(1/\log n))$ (a case for which BinaryError-LWE is as hard as solving some lattice problem in the worst case) to $m=O(n^2)$ (a case for which it can be solved in polynomial time). This addresses an open question from Micciancio and Peikert. Whilst our results do not contradict the hardness results obtained by Micciancio and Peikert, they should rule out BinaryError-LWE for many cryptographic applications. The results in this work depend crucially on the assumption that the algebraic systems considered are neither easier nor harder to solve than a random system of equations. We have verified this hypothesis experimentally, and we have also been able to prove the assumption formally in several restricted situations. We emphasize that these issues are highly non-trivial, since proving our assumptions in full generality would allow one to prove a famous conjecture in commutative algebra known as Fröberg's Conjecture.



19:17 [Pub][ePrint] Related-Key Differential Cryptanalysis of Reduced-Round ITUBee, by Xiaoming Tang and Weidong Qiu and Zheng Gong and Zheng Huang and Jie Guo

  ITUbee is a software-oriented lightweight block cipher, which was first proposed at LightSec 2013. The cipher is especially suitable for resource-limited applications, such as sensor nodes in wireless sensor networks. To evaluate the security level of the cipher, we perform differential attacks on ITUbee reduced to 10 rounds and 11 rounds with time complexities $2^{65.97}$ and $2^{79.03}$, respectively. To the best of our knowledge, our analysis is the first related-key differential cryptanalysis of 10-round ITUbee.



19:17 [Pub][ePrint] Side-Channel Leakage and Trace Compression using Normalized Inter-Class Variance, by Shivam Bhasin and Jean-Luc Danger and Sylvain Guilley and Zakaria Najm

  Security and safety critical devices must undergo penetration testing, including Side-Channel Attacks (SCA), before certification. SCA are powerful and easy to mount but often need huge computation power, especially in the presence of countermeasures. Few efforts have been made to reduce the computation complexity of SCA by selecting a small subset of points where leakage prevails. In this paper, we propose a method to detect relevant leakage points in side-channel traces. The method is based on Normalized Inter-Class Variance (NICV). A key advantage of NICV over the state of the art is that NICV needs neither a clone device nor knowledge of secret parameters of the crypto-system. NICV has a low computation requirement and it detects leakage using only public information like input plaintexts or output ciphertexts. It is shown that NICV can be related to the Pearson correlation and the signal-to-noise ratio (SNR), which are standard metrics. NICV can be used to theoretically compute the minimum number of traces required to attack an implementation. A theoretical rationale for NICV, together with some practical applications to real crypto-systems, is provided to support our claims.
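
As a rough illustration (assuming the standard definition of normalized inter-class variance, $\mathrm{NICV}=\mathrm{Var}[\mathrm{E}(Y|X)]/\mathrm{Var}[Y]$, with $Y$ the measured leakage and $X$ a public value such as a plaintext byte; the trace data below is synthetic, not from the paper), the metric can be computed per trace sample as follows:

    import numpy as np

    def nicv(traces, public_values):
        # traces: (n_traces, n_samples) leakage measurements Y
        # public_values: (n_traces,) public partitioning variable X, e.g. one plaintext byte
        traces = np.asarray(traces, dtype=float)
        public_values = np.asarray(public_values)
        classes = np.unique(public_values)
        class_means = np.array([traces[public_values == x].mean(axis=0) for x in classes])
        weights = np.array([(public_values == x).mean() for x in classes])
        grand_mean = traces.mean(axis=0)
        inter_class_var = (weights[:, None] * (class_means - grand_mean) ** 2).sum(axis=0)
        return inter_class_var / traces.var(axis=0)   # Var[E(Y|X)] / Var[Y], per sample

    # Synthetic check: noise everywhere, plus leakage of the byte's Hamming weight at sample 100.
    rng = np.random.default_rng(0)
    pub = rng.integers(0, 256, size=5000).astype(np.uint8)
    traces = rng.normal(size=(5000, 200))
    traces[:, 100] += 0.5 * np.unpackbits(pub[:, None], axis=1).sum(axis=1)
    print(nicv(traces, pub).argmax())   # detected leakage point, expected to be 100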