International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for learning about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-01-05
19:17 [Pub][ePrint] Balloon: A Forward-Secure Append-Only Persistent Authenticated Data Structure, by Tobias Pulls and Roel Peeters

  We present Balloon, a forward-secure append-only persistent authenticated data structure. Balloon is designed for an initially trusted author that generates events to be stored in a data structure (the Balloon) kept by an untrusted server, and clients that query this server for events intended for them based on keys and snapshots. The data structure is persistent such that clients can query keys for the current or past versions of the data structure based upon snapshots, which are generated by the author as new events are inserted. The data structure is authenticated in the sense that the server can verifiably prove all operations with respect to snapshots created by the author. No event inserted into the data structure prior to the compromise of the author can be modified or deleted without detection.

Balloon supports efficient (non-)membership proofs and verifiable inserts by the author, enabling the author to verify the correctness of inserts without having to store a copy of the Balloon. We also sketch how to use Balloon to enable client-specific forward-secure author consistency. In case of author inconsistency, a client can make a publicly verifiable statement that shows that the author was inconsistent with respect to his events.
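
As a rough illustration of the snapshot-and-proof interface, here is a minimal Merkle-tree membership sketch in Python (an illustrative stand-in, not Balloon's actual construction):

    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def _next_level(level):
        if len(level) % 2:            # duplicate last node on odd levels
            level = level + [level[-1]]
        return [H(b"node:" + level[i] + level[i + 1])
                for i in range(0, len(level), 2)]

    def merkle_root(events):
        level = [H(b"leaf:" + e) for e in events]
        while len(level) > 1:
            level = _next_level(level)
        return level[0]

    def merkle_proof(events, index):
        """Authentication path for events[index]: (sibling, is_right) pairs."""
        level = [H(b"leaf:" + e) for e in events]
        path = []
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]
            path.append((level[index ^ 1], index % 2 == 1))
            level = _next_level(level)
            index //= 2
        return path

    def verify(snapshot, event, path):
        h = H(b"leaf:" + event)
        for sibling, is_right in path:
            h = H(b"node:" + sibling + h) if is_right else H(b"node:" + h + sibling)
        return h == snapshot

    events = [b"event-1", b"event-2", b"event-3"]
    snapshot = merkle_root(events)     # the author would sign each snapshot
    proof = merkle_proof(events, 1)    # server proves membership of event 2
    assert verify(snapshot, b"event-2", proof)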



10:17 [Pub][ePrint] A note on the security of Higher-Order Threshold Implementations, by Oscar Reparaz

  At ASIACRYPT 2014, Bilgin et al. describe higher-order threshold implementations: a masking countermeasure claiming resistance against higher-order differential power analysis attacks. In this note, we point out that higher-order threshold implementations do not necessarily provide higher-order security. As counterexamples, we give two concrete higher-order threshold implementations that exhibit a second-order flaw.
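
For context, here is a minimal first-order, three-share threshold implementation of a single AND gate (a standard textbook sharing written as a Python sketch, not the higher-order scheme analyzed in this note). Each output share omits one input share index, which is the non-completeness property:

    import secrets

    def share3(x: int):
        """Split bit x into three Boolean shares with x = x1 ^ x2 ^ x3."""
        x1, x2 = secrets.randbits(1), secrets.randbits(1)
        return x1, x2, x ^ x1 ^ x2

    def ti_and(a, b):
        """Shared AND, z = a & b; output share z_i never reads a_i or b_i."""
        a1, a2, a3 = a
        b1, b2, b3 = b
        z1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)
        z2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)
        z3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)
        return z1, z2, z3

    for x in (0, 1):
        for y in (0, 1):
            z = ti_and(share3(x), share3(y))
            assert z[0] ^ z[1] ^ z[2] == (x & y)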



10:17 [Pub][ePrint] Characterization of MDS mappings, by S. M. Dehnavi and A. Mahmoodi Rishakani and M. R. Mirzaee Shamsabad

  MDS codes and matrices are closely related to combinatorial objects such as orthogonal arrays and multipermutations. Conventional MDS codes and matrices were defined over finite fields, but several generalizations of this concept have since been made. In this note, we give a criterion for verifying whether a map is MDS or not.
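
For linear maps over a finite field, one well-known characterization (stated here for a prime field; the note targets more general settings) is that a matrix is MDS exactly when every square submatrix is nonsingular. A brute-force Python check under that assumption:

    from itertools import combinations

    P = 7  # illustrative prime field GF(7)

    def det_mod_p(m, p):
        """Determinant mod p by Gaussian elimination."""
        m = [row[:] for row in m]
        n, det = len(m), 1
        for col in range(n):
            pivot = next((r for r in range(col, n) if m[r][col] % p), None)
            if pivot is None:
                return 0
            if pivot != col:
                m[col], m[pivot] = m[pivot], m[col]
                det = -det
            det = det * m[col][col] % p
            inv = pow(m[col][col], p - 2, p)
            for r in range(col + 1, n):
                f = m[r][col] * inv % p
                for c in range(col, n):
                    m[r][c] = (m[r][c] - f * m[col][c]) % p
        return det % p

    def is_mds(m, p):
        rows, cols = len(m), len(m[0])
        for k in range(1, min(rows, cols) + 1):
            for rs in combinations(range(rows), k):
                for cs in combinations(range(cols), k):
                    if det_mod_p([[m[r][c] for c in cs] for r in rs], p) == 0:
                        return False
        return True

    assert is_mds([[1, 1], [1, 2]], P)       # all minors nonzero over GF(7)
    assert not is_mds([[1, 1], [1, 1]], P)   # singular 2x2 minor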



10:17 [Pub][ePrint] Continuous Non-Malleable Key Derivation and Its Application to Related-Key Security, by Baodong Qin and Shengli Liu and Tsz Hon Yuen and Robert H. Deng and Kefei Chen

  Related-Key Attacks (RKAs) allow an adversary to observe the outcomes of a cryptographic primitive under not only its original secret key, e.g., $s$, but also a sequence of modified keys $\phi(s)$, where $\phi$ is specified by the adversary from a class $\Phi$ of so-called Related-Key Derivation (RKD) functions. This paper extends the notion of non-malleable Key Derivation Functions (nm-KDFs), introduced by Faust et al. (EUROCRYPT '14), to continuous nm-KDFs. Continuous nm-KDFs can protect against an a-priori unbounded number of RKA queries, instead of just a single tampering attack as in the definition of nm-KDFs. Informally, our continuous non-malleability captures the scenario where the adversary can tamper with the original secret key repeatedly and adaptively. We present a novel construction of a continuous nm-KDF for any polynomials of bounded degree over a finite field. Essentially, our result extends to richer RKD function classes possessing the properties of high output entropy and input-output collision resistance. The technical tool employed in the construction is the one-time lossy filter (Qin et al., ASIACRYPT '13), which can be efficiently obtained under standard assumptions, e.g., DDH and DCR. We propose a framework for constructing $\Phi$-RKA-secure IBE, PKE and signature schemes, using a continuous nm-KDF for the same $\Phi$-class of RKD functions. Applying our construction of a continuous nm-KDF to this framework, we obtain the first RKA-secure IBE, PKE and signature schemes for a class of polynomial RKD functions of bounded degree under standard assumptions, whereas previous constructions for the same class of RKD functions all rely on non-standard assumptions, e.g., the $d$-extended DBDH assumption.
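
As a rough Python illustration of the RKA model for a polynomial RKD class (a hypothetical toy oracle with an illustrative field and a stand-in KDF, not the paper's construction):

    import hashlib
    import secrets

    P = 2**127 - 1  # illustrative prime; the RKD class is polynomials over GF(P)

    def toy_kdf(key: int) -> bytes:
        """Stand-in derivation function; the paper instead builds a
        continuous nm-KDF from a one-time lossy filter."""
        return hashlib.sha256(key.to_bytes(16, "big")).digest()

    class RKAOracle:
        """The adversary submits a bounded-degree polynomial phi and
        observes the derived key under the tampered secret phi(s)."""
        def __init__(self, max_degree: int):
            self.s = secrets.randbelow(P)
            self.max_degree = max_degree

        def query(self, coeffs):
            """coeffs[i] is the coefficient of s^i in phi."""
            if len(coeffs) - 1 > self.max_degree:
                raise ValueError("RKD polynomial degree too high")
            x = 0
            for c in reversed(coeffs):   # Horner evaluation of phi(s) mod P
                x = (x * self.s + c) % P
            return toy_kdf(x)

    oracle = RKAOracle(max_degree=3)
    out = oracle.query([5, 0, 1])        # phi(s) = s^2 + 5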



10:17 [Pub][ePrint] Oblivious Polynomial Evaluation and Secure Set-Intersection from Algebraic PRFs, by Carmit Hazay

  In this paper we study two fundamental functionalities, oblivious polynomial evaluation in the exponent and set-intersection, and introduce a new technique for designing efficient secure protocols for these problems (and others). Our starting point is the [BenabbasGV11] technique (CRYPTO 2011) for verifiable delegation of polynomial evaluations using algebraic PRFs. We use this tool, which achieves verifiability in the outsourced setting, in order to achieve privacy in the standard two-party setting. Our results imply new, simple and efficient oblivious polynomial evaluation (OPE) protocols. We further show that our OPE protocols readily yield secure set-intersection, implying much simpler protocols in the plain model. As a side result, we demonstrate the usefulness of algebraic PRFs for various search functionalities, such as keyword search and oblivious transfer with adaptive queries. Our protocols are secure under full simulation-based definitions in the presence of malicious adversaries.
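
As background, a small Python sketch of the standard polynomial-encoding idea that links OPE to set-intersection (here computed in the clear; in the actual protocol the receiver learns only an oblivious evaluation of the sender's polynomial):

    P_MOD = 2**61 - 1  # illustrative prime modulus

    def set_to_poly(elems):
        """Coefficients (low to high) of prod (x - a) over GF(P_MOD)."""
        coeffs = [1]
        for a in elems:
            new = [0] * (len(coeffs) + 1)
            for i, c in enumerate(coeffs):
                new[i] = (new[i] - a * c) % P_MOD      # the -a * x^i term
                new[i + 1] = (new[i + 1] + c) % P_MOD  # the x^(i+1) term
            coeffs = new
        return coeffs

    def eval_poly(coeffs, x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P_MOD
        return acc

    sender, receiver = {3, 17, 42}, {8, 17}
    poly = set_to_poly(sender)              # roots are the sender's set
    shared = {b for b in receiver if eval_poly(poly, b) == 0}
    assert shared == {17}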





2015-01-02
22:17 [Pub][ePrint] On the Cryptographic Hardness of Finding a Nash Equilibrium, by Nir Bitansky and Omer Paneth and Alon Rosen

  We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete.

Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.

Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.



13:17 [Pub][ePrint] How to Generate Repeatable Keys Using Physical Unclonable Functions: Correcting PUF Errors with Iteratively Broadening and Prioritized Search, by Nathan E. Price and Alan T. Sherman

  We present an algorithm for repeatably generating keys using entropy from a Physical Unclonable Function (PUF). PUFs are logically identical physical constructs with Challenge-Response Pairs (CRPs) unique to each device. Applications include initialization of server keys and encryption of FPGA configuration bitstreams. One problem with PUFs is response errors. Our algorithm corrects PUF errors that inhibit key repeatability.

Our approach uses a PUF to generate an error-free PUF value in three steps. First, we repeatedly sample the PUF to determine the most likely value. Second, we apply an iteratively broadening search over candidate values with up to some number of bit errors (in our experiments, two). Third, we apply exhaustive search until the correct value is found or failure is declared. The searches are prioritized by the known bit error rates, in decreasing order of magnitude. We assume the application includes a test for the correct value (e.g., some matching plaintext-ciphertext pairs).
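
A minimal Python sketch of these steps, with hypothetical helper names and assumed per-bit error rates (the paper's version prioritizes by the bit error rates actually measured for the device):

    from itertools import combinations

    def majority_value(samples):
        """Step 1: bitwise majority over repeated PUF reads (lists of 0/1 bits)."""
        n = len(samples[0])
        return [int(sum(s[i] for s in samples) > len(samples) / 2)
                for i in range(n)]

    def broadening_search(guess, error_rates, is_correct, max_flips=2):
        """Steps 2 and 3: try flipping 0, 1, 2, ... bits, noisiest bits first.
        is_correct tests a candidate, e.g. against known plaintext-ciphertext
        pairs; beyond max_flips one would continue exhaustively or fail."""
        order = sorted(range(len(guess)), key=lambda i: -error_rates[i])
        for k in range(max_flips + 1):
            for idxs in combinations(order, k):
                cand = list(guess)
                for i in idxs:
                    cand[i] ^= 1
                if is_correct(cand):
                    return cand
        return None   # declare failure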

Previous algorithms often omit noisy PUF bits or use error-correcting codes and helper data. Our algorithm can use all PUF bits regardless of noise. Our approach is simple and, for appropriate parameter choices, fast. Unlike previous approaches using error-correcting codes, when used for public-key cryptography our method requires storing only the public key and no other helper data in non-volatile storage.

We implemented a latch-based PUF on FPGAs and measured PUF characteristics to analyze the effectiveness of the algorithm. Tests of a 1024-bit PUF show that 351 samples reduce the probability of error to less than $10^{-6}$. The iterative broadening and exhaustive searches further reduce failure rates.



13:17 [Pub][ePrint] Cryptanalysis of a New Additive Homomorphic Encryption based on the co-ACD Problem, by Moon Sung Lee

  In CCS '14, Cheon et al. proposed a new additive homomorphic encryption scheme which is claimed to be the most efficient among additive homomorphic encryption schemes. Its security is proved based on the hardness of a new problem, the (decisional) co-approximate common divisor (co-ACD) problem.

In this paper, we cryptanalyze the scheme and investigate the hardness of the aforementioned problem. Our first result shows that Cheon et al.'s scheme is insecure for the range of parameters considered in the original paper [CLSCCS14]. Experiments show that the message can be recovered in seconds for the proposed parameters. We also analyze the conditions the parameters must satisfy to thwart the proposed attack. As a second result, we show that the co-approximate common divisor problem is easy for a similar range of parameters, on the condition that the modulus is known and is a product of two primes. By our estimate, thwarting the proposed attack would require enlarging the parameters considerably.

Apart from the scheme, the co-approximate common divisor problem itself is interestingly related to the well-known hard approximate common divisor problem, and further investigation of this relationship would be desirable.



13:17 [Pub][ePrint] XPIRe: Private Information Retrieval for Everyone, by Carlos Aguilar-Melchor and Joris Barrier and Laurent Fousse and Marc-Olivier Killijian

  A single-database computationally-Private Information Retrieval (hereafter PIR) scheme is a protocol in which a user retrieves a record from a database while hiding which record was retrieved from the database administrators. Classic protocols require the database server to execute an algorithm over the entire database content at very low speed, which significantly impairs their usability.
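
To make the record-hiding idea concrete, here is a toy Python sketch of single-database PIR from additively homomorphic encryption, using textbook Paillier with tiny, utterly insecure parameters (an illustration of the selection-vector principle, not this library's implementation, whose fast path uses Ring-LWE):

    import math
    import secrets

    p, q = 1009, 1013                  # toy primes; real keys are far larger
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)

    def enc(m):                        # Paillier with g = n + 1
        r = secrets.randbelow(n - 2) + 2
        return (1 + m * n) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    db = [42, 7, 19, 88]               # each record must fit below n here
    target = 2

    # Client: encrypt a selection vector that is 1 only at the wanted index.
    query = [enc(1 if i == target else 0) for i in range(len(db))]

    # Server: homomorphically compute Enc(sum_i sel_i * db_i) without
    # learning which selection bit is set.
    reply = 1
    for c, record in zip(query, db):
        reply = reply * pow(c, record, n2) % n2

    assert dec(reply) == db[target]    # client recovers record 2, i.e. 19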

In [1], under assumptions that were realistic at the time, Sion and Carbunar showed that PIR schemes were not practical and most likely never would be. To this day, this conclusion is widely accepted by researchers and practitioners. Using the paradigm shift introduced by lattice-based cryptography, we show that the conclusion of Sion and Carbunar is no longer valid: PIR is of practical value. This is achieved without compromising security, using simple standard encryption schemes and very conservative parameter choices.

In order to prove this, we provide a fast, fully usable PIR library and carry out a performance analysis, illustrated by use cases, highlighting that this library is practical in a large span of situations. The PIR library allows a server to process its data at a throughput ranging from 1 Gbps on a single core of a commodity CPU to almost 10 Gbps on a high-end processor using its multi-core capabilities.

After replying to a first query (or through pre-computation), there is a 3x to 5x speedup for subsequent queries (if the database content is unchanged for those queries). The performance analysis shows, for example, that it is possible to privately receive an HD movie from a Netflix-like database (with 35K movies) with enough throughput to watch it in real time, or to build a sniffer over a Gbit link with obfuscated code that hides what it is sniffing.

This library is modular, allowing alternative homomorphic encryption modules to be plugged in. We provide a slow but compact crypto module based on number-theoretic Paillier encryption, and a fast crypto module based on Ring-LWE encryption (built on standard lattice problems with conservative parameters). The library has an auto-optimizer that chooses the best protocol parameters (recursion level, aggregation) and crypto parameters (if the crypto module implements the necessary API) for a given setting.

This greatly increases the usability for non-specialists. Given the complexity of parameter settings in lattice-based homomorphic encryption and the fact that PIR adds a second layer of choices that interact with crypto parameter settings, we believe this auto-optimizer will also be useful to specialists.



13:17 [Pub][ePrint] Lattices with Symmetry, by H. W. Lenstra, Jr. and A. Silverberg

  For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we construct a deterministic polynomial-time algorithm that provably accomplishes this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction.



13:17 [Pub][ePrint] Simple Lattice Trapdoor Sampling from a Broad Class of Distributions, by Vadim Lyubashevsky and Daniel Wichs

  At the center of many lattice-based constructions is an algorithm that samples a short vector $s$ satisfying $[A|AR-HG]s = t \bmod q$, where $A, AR, H, G$ are public matrices and $R$ is a trapdoor. Although the algorithm crucially relies on knowledge of the trapdoor $R$ to perform this sampling efficiently, the distribution it outputs should be independent of $R$ given the public values. We present a new, simple algorithm for performing this task. The main novelty of our sampler is that the distribution of $s$ does not need to be Gaussian, whereas all previous works crucially used the properties of the Gaussian distribution to produce such an $s$. The advantage of using a non-Gaussian distribution is that we are able to avoid the high-precision arithmetic that is inherent in Gaussian sampling over arbitrary lattices. So while the norm of our output vector $s$ is on the order of $\sqrt{n}$ to $n$ times larger than in the samplers of Gentry, Peikert and Vaikuntanathan (STOC 2008) and Micciancio and Peikert (EUROCRYPT 2012) (the representation length, though, is only a constant factor larger), the sampling itself can be done very efficiently. This provides a useful time/output trade-off for devices with constrained computing power. In addition, we believe that the conceptual simplicity and generality of our algorithm may lead to it finding other applications.
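
To see why knowledge of $R$ suffices, here is a sketch of the underlying identity, assuming $H$ is invertible modulo $q$ (this shows only correctness, not the paper's full sampler, which also arranges for the output distribution to be independent of $R$). Given a target $t$, sample a short $w$ from the desired distribution, set $u = H^{-1}(Aw - t) \bmod q$, and find a short $z$ with $Gz = u \bmod q$, which is easy because preimage sampling for the structured gadget matrix $G$ requires no secret trapdoor. Then $s = (w - Rz,\, z)$ is short whenever $w$, $z$ and $R$ are, and

$$[A \mid AR - HG]\,s = A(w - Rz) + (AR - HG)z = Aw - HGz = Aw - (Aw - t) = t \bmod q.$$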