
This newsletter provides updates on changes to the IACR web page. For questions, contact *newsletter (at) iacr.org*.

A two-to-one recoding (TOR) scheme is a new cryptographic primitive, proposed in the recent work of Gorbunov, Vaikuntanathan, and Wee (GVW), as a means to construct attribute-based encryption (ABE) schemes for all boolean circuits. GVW show that TOR schemes can be constructed assuming the hardness of the learning-with-errors (LWE) problem.

We propose a slightly weaker variant of TOR schemes called correlation-relaxed two-to-one recoding (CR-TOR). Unlike TOR schemes, our weaker variant does not require the encoding function to be pseudorandom on correlated inputs. We instead replace this with an indistinguishability property stating that a ciphertext is hard to decrypt without access to a certain encoding. The primary benefit of this relaxation is that it allows the construction of ABE for circuits in the TOR paradigm from a broader class of cryptographic assumptions.

We show how to construct a CR-TOR scheme from the noisy cryptographic multilinear maps of Garg, Gentry, and Halevi as well as those of Coron, Lepoint, and Tibouchi. Our framework leads to an instantiation of ABE for circuits that is conceptually different from the existing constructions.

In a related-key attack (RKA) an adversary attempts to break a cryptographic primitive by invoking the primitive with several secret keys that satisfy some known relation. The task of constructing provably RKA-secure PRFs (for non-trivial relations) under a standard assumption has turned out to be challenging. Currently, the only known provably secure construction is due to Bellare and Cash (Crypto 2010). This important feasibility result is restricted, however, to linear relations over relatively complicated groups (e.g., $\mathbb{Z}^*_q$ where $q$ is a large prime) that arise from the algebraic structure of the underlying cryptographic assumption (DDH/DLIN). In contrast, applications typically require RKA security with respect to simple additive relations such as XOR or addition modulo a power of two.

In this paper, we partially fill this gap by showing that it is possible to deal with simple additive relations at the expense of relaxing the model of the attack. We introduce several natural relaxations of RKA-security, study the relations between these notions, and describe efficient constructions either under lattice assumptions or under general assumptions. Our results enrich the landscape of RKA security and suggest useful trade-offs between the attack model and the family of possible relations.
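The attack model described above can be made concrete with a small sketch. Here the adversary is given oracle access to a PRF evaluated under XOR-related keys $k \oplus \delta$ for shifts $\delta$ of its choice; RKA security asks that all such outputs still look independently random. This is an illustrative game, not a construction from the paper, and HMAC-SHA256 merely stands in for the PRF.

```python
# Sketch of the RKA setting with the XOR relation: the adversary queries
# the PRF under derived keys k XOR delta for deltas of its choice.
# HMAC-SHA256 is only a stand-in PRF for illustration.
import hashlib
import hmac
import os

KEY_LEN = 32


def prf(key: bytes, x: bytes) -> bytes:
    return hmac.new(key, x, hashlib.sha256).digest()


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


class RKAOracle:
    """Gives the adversary PRF access under XOR-related keys k ^ delta."""

    def __init__(self):
        self.k = os.urandom(KEY_LEN)  # secret master key, never revealed

    def query(self, delta: bytes, x: bytes) -> bytes:
        assert len(delta) == KEY_LEN
        return prf(xor_bytes(self.k, delta), x)


oracle = RKAOracle()
# Two queries on the same input under distinct related keys; RKA security
# demands these look independent even though the key difference is known.
y0 = oracle.query(b"\x00" * KEY_LEN, b"input")
y1 = oracle.query(b"\x01" + b"\x00" * (KEY_LEN - 1), b"input")
assert y0 != y1
```

The "simple additive relations" the abstract mentions correspond exactly to the family of functions $\phi_\delta(k) = k \oplus \delta$ queried here.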

Computer log files constitute a precious resource for system administrators for discovering and comprehending security breaches. A prerequisite of any meaningful log analysis is that attempts of intruders to cover their traces by modifying log entries are thwarted by storing the entries in a tamper-resistant manner. Some solutions employ cryptographic authentication when storing log entries locally, and let the authentication scheme's property of forward security ensure that the cryptographic keys in place at the time of intrusion cannot be used to manipulate past log entries without detection. This strong notion of security is typically achieved through frequent updates of the authentication keys via hash chains. However, as security demands that key updates take place rather often (ideally, at a resolution of milliseconds), in many settings this method quickly reaches the limits of practicality. Indeed, a log auditor aiming to verify a specific log record might have to compute millions of hash iterations before recovering the correct verification key.
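The hash-chain approach and its auditing bottleneck can be sketched in a few lines: each epoch's key is the hash of the previous one, so a compromised current key says nothing about earlier keys, but recovering the key for epoch $i$ from the seed costs $i$ hash iterations. This is a generic illustration of the hash-chain technique, not any particular published scheme.

```python
# Minimal sketch of forward-secure log authentication via a hash chain.
# The logger MACs each entry under the current key, then irreversibly
# evolves the key and erases the old one. Seeking to epoch i is linear.
import hashlib
import hmac


def next_key(key: bytes) -> bytes:
    """One-way key update: old keys cannot be recovered from new ones."""
    return hashlib.sha256(b"evolve" + key).digest()


def mac_entry(key: bytes, entry: bytes) -> bytes:
    return hmac.new(key, entry, hashlib.sha256).digest()


def seek(seed: bytes, epoch: int) -> bytes:
    """Recover the key of a given epoch from the seed: costs `epoch` hashes."""
    k = seed
    for _ in range(epoch):
        k = next_key(k)
    return k


seed = b"\x00" * 32
k = seed
tags = []
for i in range(5):
    tags.append(mac_entry(k, b"log entry %d" % i))
    k = next_key(k)  # logger erases the old key after each update

# An auditor verifying entry 3 must re-derive its key from the seed,
# which is exactly the linear-cost bottleneck described above.
k3 = seek(seed, 3)
assert hmac.compare_digest(mac_entry(k3, b"log entry 3"), tags[3])
```

With millisecond-resolution updates, `seek` would have to iterate millions of times for a record logged hours after the seed was established.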

This problem was addressed only recently by the introduction of seekable sequential key generators (SSKG). Every instance of this cryptographic primitive produces a forward-secure sequence of symmetric (authentication) keys, but also offers an explicit fast-forward functionality. The only currently known SSKG construction replaces traditional hash chains by the iterated evaluation of a shortcut one-way permutation, a factoring-based, and hence in practice rather inefficient, building block.

In this paper we revisit the challenge of marrying forward-secure key generation with seekability and show that symmetric primitives like PRGs, block ciphers, and hash functions suffice for obtaining secure SSKGs. Our scheme is not only considerably more efficient than the prior number-theoretic construction, but also extends the seeking functionality in a way that we believe is important in practice. Our construction is provably (forward-)secure in the standard model.
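One natural way seekability can arise from symmetric primitives alone (our own hedged sketch, not necessarily the paper's construction) is a GGM-style binary tree: the $i$-th key is derived by walking from a root seed along the bits of $i$, so seeking costs a logarithmic number of hash calls instead of $i$.

```python
# Hedged illustration of seekable key derivation from a hash function only:
# keys are the leaves of a binary tree of depth DEPTH, and seeking to any
# index walks DEPTH edges from the root instead of iterating index times.
# This is an illustrative sketch, not the construction from the paper.
import hashlib

DEPTH = 20  # supports 2**20 keys


def child(node: bytes, bit: int) -> bytes:
    """Derive the left (bit=0) or right (bit=1) child of a tree node."""
    return hashlib.sha256(bytes([bit]) + node).digest()


def seek_key(root: bytes, index: int) -> bytes:
    """Derive the index-th key with exactly DEPTH hash calls."""
    node = root
    for level in reversed(range(DEPTH)):
        node = child(node, (index >> level) & 1)
    return node


root = b"\x11" * 32
# Seeking is cheap and deterministic: the same index always yields the same key.
assert seek_key(root, 12345) == seek_key(root, 12345)
assert seek_key(root, 12345) != seek_key(root, 12346)
```

A full forward-secure SSKG would additionally have the key producer erase the root and all already-traversed subtrees as it advances, so that a break-in reveals only future-independent material; the sketch above shows only the seeking mechanics.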

In recent years there has been a fantastic boom of increasingly sophisticated "cryptographic objects" -- identity-based encryption, fully-homomorphic encryption, functional encryption, and most recently, various forms of obfuscation. These objects often come in various flavors of security, and as these constructions have grown in number, complexity and inter-connectedness, the relationships between them have become increasingly confusing.

We provide a new framework of cryptographic agents that unifies various cryptographic objects and security definitions, similar to how the Universal Composition framework unifies various multi-party computation tasks like commitment, coin-tossing and zero-knowledge proofs.

Our contributions can be summarized as follows.

- Our main contribution is a new model of cryptographic computation that unifies and extends cryptographic primitives such as obfuscation, functional encryption, fully-homomorphic encryption, witness encryption, property-preserving encryption and the like, all of which can be cleanly modeled as "schemata" in our framework. We provide a new indistinguishability-preserving (IND-PRE) definition of security that interpolates indistinguishability and simulation style definitions, implying the former while (often) sidestepping the impossibilities of the latter.

- We present a notion of reduction from one schema to another and a powerful composition theorem with respect to IND-PRE security. This provides a modular means to build and analyze secure schemes for complicated schemata based on those for simpler schemata. Further, this provides a way to abstract out and study the relative complexity of different schemata. We show that obfuscation is a "complete" schema under this notion, under standard cryptographic assumptions.

- IND-PRE security can be parameterized by the choice of the "test" family. For obfuscation, the strongest security definition (obtained by considering all PPT tests) is unrealizable in general. But we identify a family of tests, called $\Delta$, such that all known impossibility results, for obfuscation as well as functional encryption, are bypassed. On the other hand, for each of the example primitives we consider in this paper -- obfuscation, functional encryption, fully-homomorphic encryption and property-preserving encryption -- $\Delta$-IND-PRE security for the corresponding schema implies the standard achievable security definitions in the literature.

- We provide a stricter notion of reduction that composes with respect to $\Delta$-IND-PRE security.

- Based on $\Delta$-IND-PRE security we obtain a new definition for security of obfuscation, called adaptive differing-inputs obfuscation. We illustrate its power by using it for new constructions of functional encryption schemes, with and without function-hiding.

- Last but not least, our framework can be used to model abstractions like the generic group model and the random oracle model, letting one translate a general class of constructions in these heuristic models to constructions based on standard-model assumptions. We illustrate this by adapting a functional encryption scheme (for inner product predicate) that was shown secure in the generic group model to be secure based on a new standard-model assumption we propose, called the generic bilinear group agents assumption.

Privacy-enhancing attribute-based credentials (PABCs) allow users to authenticate to verifiers in a data-minimizing way, in the sense that users are unlinkable between authentications and only disclose those attributes from their credentials that are relevant to the verifier. We propose a practical scheme to apply the same data minimization principle when the verifiers' authentication logs are subjected to external audits. Namely, we propose an extended PABC scheme where the verifier can further remove attributes from presentation tokens before handing them to an auditor, while preserving the verifiability of the audited tokens. We present a generic construction based on a signature, a signature of knowledge and a trapdoor commitment scheme, prove it secure in the universal composability framework, and give efficient instantiations based on the strong RSA and Decision Composite Residuosity (DCR) assumptions in the random-oracle model.

A homomorphic signature scheme for a class of functions $\mathcal{C}$ allows a client to sign and upload elements of some data set $D$ on a server. At any later point, the server can derive a (publicly verifiable) signature that certifies that some $y$ is the result of computing some $f \in \mathcal{C}$ on the data set $D$. This primitive has been formalized by Boneh and Freeman (Eurocrypt 2011), who also proposed the only known construction for the class of multivariate polynomials of fixed degree $d \geq 1$. In this paper we construct new homomorphic signature schemes for such functions. Our schemes provide the first alternatives to the one of Boneh-Freeman, and improve over their solution in three main aspects. First, our schemes do not rely on random oracles. Second, we obtain security in a stronger fully-adaptive model: while the solution of Boneh-Freeman requires the adversary to query messages in a given data set all at once, our schemes can tolerate adversaries that query one message at a time, in a fully-adaptive way. Third, signature verification is more efficient (in an amortized sense) than computing the function from scratch. The latter property opens the way to using homomorphic signatures for publicly-verifiable computation on outsourced data. Our schemes rely on a new assumption on leveled graded encodings which we show to hold in a generic model.

Ciphertext-policy attribute-based encryption (CP-ABE) is a more efficient and flexible encryption system, as the encryptor can control the access structure when encrypting a message. In this paper, we propose a privacy-preserving decentralized CP-ABE (PPDCP-ABE) scheme in which no central authority is required: each authority can work independently, without cooperation, to initialize the system. Meanwhile, a user can obtain secret keys from multiple authorities without releasing his global identifier (GID) or attributes to them. This contrasts with previous privacy-preserving multi-authority ABE (PPMA-ABE) schemes, where a central authority is required and a user can obtain secret keys from multiple authorities only by revealing his attributes to them. However, sensitive attributes can also reveal a user's identity, so contemporary PPMA-ABE schemes cannot fully protect users' privacy: multiple authorities can cooperate to identify a user by collecting and analyzing his attributes. Therefore, it remains a challenging and important task to construct a PPMA-ABE scheme in which no central authority is required and both identifiers and attributes are protected.

A Ciphertext-Policy Attribute-Based Encryption (CP-ABE) system derives decryption keys from attributes shared by multiple users. It brings plenty of advantages in ABE applications. CP-ABE enables fine-grained access control to the encrypted data for commercial applications. There has been significant progress in CP-ABE over two properties called traceability and large universe in the past few years, which enlarges the commercial applications of CP-ABE. Due to the nature of CP-ABE, it is difficult to identify the original key owner from an exposed key, since the decryption privilege is shared by multiple users who have the same attributes. Thus, it is necessary for ABE applications to add the property of traceability to identify the malicious users who intentionally leak partial or modified decryption keys to others for profit. On the other hand, the property of large universe in ABE, proposed by Lewko and Waters, enlarges the practical applications by supporting a flexible number of attributes. Several systems have been proposed to obtain either of the above properties, but none of them achieves the two properties simultaneously in practice, which limits the commercial applications of CP-ABE to a certain extent. In this paper, we propose a practical large universe CP-ABE system supporting white-box traceability, which is suitable for commercial applications. Compared to related work, our new system provides three advantages: (1) the number of attributes is not polynomially bounded; (2) malicious users who leak their decryption keys can be traced; (3) the storage overhead for tracing is constant. We also prove the selective security of our new system in the standard model under a "q-type" assumption.

2014-06-19

The Information Security Centre of Excellence (ISCX), University of New Brunswick, Fredericton, Canada has several openings for fully funded PhD/MS positions to work with the Centre to carry out research, design and development for the Intelligent Tools for an Automated Security Analysis and Risk Management for Large-Scale Systems project. The proposed toolset will incorporate a set of techniques for automated risk identification, security assessment (e.g., intrusion/malware detection & analysis) and mitigation planning. This project is funded under the Atlantic Innovation Foundation program. For more information about ISCX, please see www.iscx.ca.

Candidates are expected to be proficient in networking, to have basic knowledge of computer security, and to demonstrate interest in network security.

To apply, send your CV.

Post-Doc positions

Laboratoire de l’Informatique du Parallélisme

Post-doctoral positions are available in the AriC team, on functional cryptography and/or lattice-based cryptography.

An emerging group on cryptography is looking for excellent researchers to participate in projects on functional encryption and/or lattice-based cryptography, from the algorithmic foundations to the design and implementation of advanced cryptographic primitives.

Candidates must hold (or be close to completing) a PhD in an area related to cryptology, with an emphasis on its mathematical aspects. A strong research record is expected (i.e., publications in first-tier conferences or journals).

Applications should include a CV and recommendation letters. They should be sent before July 31, 2014 but will be considered until the positions are filled. Post-doc duration is negotiable. Salary can be adjusted for senior post-docs.