*12:51* [Job][New]
PhD + job in industry, *Université Paris 7*
Biometric authentication (1-to-1 matching) and identification (1-to-many matching) are becoming increasingly popular. To speed up these operations, we wish to explore the design of dedicated hardware accelerators for fingerprint matching. We propose, in collaboration with an industrial partner, a funded PhD thesis on this topic. The student will be based in the south of France (Aix-en-Provence area) and will benefit from an academic R&D environment in which several PhD students already work. The candidate should have good knowledge of HDL design and a creative mindset. Good mastery of mathematics in general, and of signal processing in particular, is a plus.

The successful candidate should have a Master’s degree in hardware design, microelectronics, or a related field. Knowledge of cryptography and hardware security is an asset. Knowledge of French is not required.

*12:50* [Job][New]
Ph.D. student + an industrial job, *Université Paris 1, Panthéon-Sorbonne*
Open-source microprocessors are frequently used as the basis for secure derivatives. The hardening of such devices against attacks can be done either invasively (by physically changing the device) or non-invasively (by adding circuits around the device and software inside it). We propose, in collaboration with an industrial partner, a funded PhD thesis on this topic. The student will be based in the Paris area and will benefit from an R&D environment in which several PhD students already work. The candidate should have good knowledge of HDL design and a creative mindset.

The successful candidate should have a Master’s degree in hardware design, microelectronics, or a related field. Knowledge of cryptography and hardware security is an asset. Knowledge of French is not required.

*12:50* [Job][New]
Research Associate in Zero-Knowledge Proofs, *University College London*
The Computer Science Department at University College London has an open postdoctoral research position under the supervision of Dr Jens Groth. The Research Associate is funded by an ERC Starting Grant on Efficient Cryptographic Arguments and Proofs with a flexible start date and a duration of up to 3 years.

Candidates must have a PhD with a strong publication record in cryptography or theoretical computer science. Research experience in zero-knowledge proofs, probabilistically checkable proofs or lattice-based cryptography will be considered a plus.

University College London is one of Europe's highest-ranked universities and has recently been recognized by the EPSRC and GCHQ as one of the UK's Academic Centres of Excellence in Cyber Security Research. The Computer Science Department is one of the largest in the UK and is located at UCL's main campus in the centre of London.

*09:17* [Pub][JoC]
Compact Proofs of Retrievability
Abstract: In a proof-of-retrievability system, a data storage center must prove to a verifier that it is actually storing all of a client’s data. The central challenge is to build systems that are both efficient and *provably* secure—that is, it should be possible to extract the client’s data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, features a proof-of-retrievability protocol in which the client’s query and the server’s response are both extremely short. This scheme allows public verifiability: anyone can act as a verifier, not just the file owner. Our second scheme, which builds on pseudorandom functions (PRFs) and is secure in the standard model, allows only private verification. It features a proof-of-retrievability protocol with an even shorter server response than our first scheme, but the client’s query is long. Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value.
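The aggregation step of the privately verifiable scheme can be sketched over a toy prime field: each block is tagged with alpha*m_i + PRF(i), and a single field equation then checks an entire random challenge at once. All parameters below (field size, block count, the hash-based PRF) are illustrative stand-ins, not the paper's actual instantiation:

```python
import hashlib
import random

P = 2**61 - 1  # toy prime field (illustrative; real schemes use a larger field)

def prf(key, i):
    # Toy PRF: hash of key || index, reduced mod P (stand-in for a real PRF).
    h = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") % P

# --- Setup: client tags each file block m_i with sigma_i = alpha*m_i + PRF_k(i).
key, alpha = b"secret-prf-key", random.randrange(1, P)
blocks = [random.randrange(P) for _ in range(16)]
tags = [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]

# --- Challenge: verifier picks random block indices and coefficients nu_i.
challenge = [(i, random.randrange(1, P)) for i in random.sample(range(16), 4)]

# --- Response: server aggregates queried blocks and tags homomorphically.
mu = sum(nu * blocks[i] for i, nu in challenge) % P
sigma = sum(nu * tags[i] for i, nu in challenge) % P

# --- Verify: one equation checks all challenged blocks at once.
ok = sigma == (alpha * mu + sum(nu * prf(key, i) for i, nu in challenge)) % P
print(ok)  # True for an honest server
```

The linearity of the tags is what lets an arbitrarily large challenge collapse into the single pair (mu, sigma).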

- Content Type: Journal Article
- Pages: 1–42
- DOI: 10.1007/s00145-012-9129-2
- Authors:
  - Hovav Shacham, University of California, San Diego, La Jolla, CA, USA
  - Brent Waters, University of Texas at Austin, Austin, TX, USA
- Journal: Journal of Cryptology
- Online ISSN: 1432-1378
- Print ISSN: 0933-2790

From: Sun, 02 Sep 2012 03:14:04 GMT
*08:12* [Job][New]
Ph.D. position, *Research Group Cryptographic Algorithms, Saarland University, Germany*
The Cryptographic Algorithms (CA) group in the Computer Science Department of Saarland University is currently offering a PhD position. The CA group is part of the newly established Center for IT-Security, Privacy and Accountability (CISPA). CISPA actively supports collaborations with other research centers worldwide and offers young researchers an ideal working environment in every respect. The close connection of CISPA to the Department of Computer Science, the Max-Planck-Institute (MPI) for Informatics, the MPI for Software Systems, the German Research Center for Artificial Intelligence (DFKI), the Cluster of Excellence on Multimodal Computing and Interaction (MMCI), the Saarbrücken Graduate School of Computer Science, and the Intel Visual Computing Institute (IVCI) is crucial to the success of the location. All of these institutes are in close proximity on the campus. The CA group conducts research in various aspects of cryptography. Topics of particular interest include, but are not limited to, the design of cryptographic algorithms and protocols, as well as foundational research.

Applicants are required to have completed (or be close to completing) a Bachelor's, Master's, or Diplom degree with outstanding grades in Computer Science, Mathematics, or a closely related area. Additional knowledge of related disciplines such as complexity theory is welcome. We stress that PhD applications immediately after the Bachelor's degree are possible and welcome, as part of the Saarbrücken Graduate School of Computer Science. The working and teaching language is English.

Please send your application to Dominique Schroeder via e-mail. Applications should contain a CV, copies of transcripts and certificates, and (if possible) names of references. Applications will be accepted until the position has been filled.

*18:17* [Pub][ePrint]
Garbling XOR Gates "For Free" in the Standard Model, by Benny Applebaum
Yao's Garbled Circuit (GC) technique is a powerful cryptographic tool which allows one to "encrypt" a circuit $C$ as another circuit $\hat{C}$ in a way that hides all information except for the final output. Yao's original construction incurs a constant overhead in both computation and communication per gate of the circuit $C$ (proportional to the complexity of symmetric encryption). Kolesnikov and Schneider (ICALP 2008) introduced an optimized variant that garbles XOR gates "for free" in a way that involves no cryptographic operations and no communication. This variant has become very popular and has been employed in several practical implementations, leading to notable performance improvements. The security of the free-XOR optimization was originally proven in the random oracle model. In the same paper, Kolesnikov and Schneider also addressed the question of replacing the random oracle with a standard cryptographic assumption and suggested using a hash function which achieves some form of security under correlated inputs. This claim was revisited by Choi et al. (TCC 2012), who showed that a stronger form of security is required, and proved that the free-XOR optimization can be realized based on a new primitive called a *circular 2-correlation hash function*. Unfortunately, it is currently unknown how to implement this primitive based on standard assumptions, and so the feasibility of realizing the free-XOR optimization in the standard model remains an open question.

We resolve this question by showing that the free-XOR approach can be realized in the standard model under the *learning parity with noise* (LPN) assumption. Our result is obtained in two steps: (1) we show that the hash function can be replaced with a symmetric encryption scheme which remains secure under a combined form of related-key and key-dependent attacks; and (2) we show that such a symmetric encryption scheme can be constructed based on the LPN assumption.
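The free-XOR invariant itself is easy to see in code: every wire's two labels differ by one global offset delta, so XORing the two active input labels automatically yields the correct active output label, with no ciphertexts and no cryptographic operations. A minimal sketch (labels as random byte strings; garbling of non-XOR gates is omitted entirely):

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

K = 16                  # label length in bytes (illustrative)
delta = os.urandom(K)   # global offset shared by all wire-label pairs

def new_wire():
    # Return the label for bit 0; the label for bit 1 is label0 XOR delta.
    return os.urandom(K)

def label(zero_label, bit):
    # Active label carried by a wire holding `bit` (the free-XOR invariant).
    return zero_label if bit == 0 else xor(zero_label, delta)

# Garbler: an XOR gate costs nothing -- its output zero-label is A0 XOR B0.
a0, b0 = new_wire(), new_wire()
c0 = xor(a0, b0)

# Evaluator: XOR the two active labels; the result is the active output label.
for a in (0, 1):
    for b in (0, 1):
        out = xor(label(a0, a), label(b0, b))
        assert out == label(c0, a ^ b)
print("free-XOR invariant holds for all inputs")
```

The security question the paper addresses is precisely what assumption on the (omitted) garbling of non-XOR gates tolerates all labels being correlated through the single offset delta.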

*18:17* [Pub][ePrint]
Unconditionally Secure Asynchronous Multiparty Computation with Linear Communication Complexity, by Ashish Choudhury and Martin Hirt and Arpita Patra
We present two unconditionally secure asynchronous multiparty computation (AMPC) protocols among n parties with an amortized communication complexity of O(n) field elements per multiplication gate, which can tolerate a computationally unbounded active adversary corrupting t < n/4 parties. These are the first AMPC protocols with linear communication complexity per multiplication gate. Our first protocol is statistically secure in a completely asynchronous setting and improves on the previous best AMPC protocol in the same setting by a factor of \Theta(n). Our second protocol is perfectly secure in a hybrid setting, where one round of communication is assumed to be synchronous, and improves on the previous best AMPC protocol in the hybrid setting by a factor of \Theta(n^2).

The central contribution common to both protocols is a new, simple, and communication-efficient, yet natural, framework for the preprocessing (offline) phase that is used to generate sharings of random multiplication triples, to be used later for the circuit evaluation. The framework is built on two new components, both of which are instantiated robustly: the first component allows the parties to verifiably share random multiplication triples; the second component allows the parties to securely extract sharings of random multiplication triples from a set of sharings of multiplication triples verifiably shared by individual parties. Our framework is simple and does not involve either of the existing, somewhat complex but popular, techniques used in the preprocessing phase of most existing protocols, namely player elimination and dispute control. The framework is of independent interest and can be adapted to other MPC scenarios to improve the overall round complexity.
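The multiplication triples produced by such a preprocessing phase are consumed in the standard Beaver fashion during circuit evaluation. A minimal synchronous, passively secure sketch with additive sharing over a toy field (the paper's protocols are asynchronous and actively secure; this only shows why one preprocessed triple turns a multiplication into two openings):

```python
import random

P = 2**61 - 1  # toy prime field
N = 4          # number of parties (illustrative)

def share(v):
    # Additive secret sharing of v among N parties.
    parts = [random.randrange(P) for _ in range(N - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

def reveal(shares):
    return sum(shares) % P

# Offline phase: a random multiplication triple (a, b, c = a*b), shared.
a, b = random.randrange(P), random.randrange(P)
A, B, C = share(a), share(b), share((a * b) % P)

# Online phase: multiply secret-shared x and y using the one triple.
x, y = 1234, 5678
X, Y = share(x), share(y)
d = reveal([(xi - ai) % P for xi, ai in zip(X, A)])  # open x - a
e = reveal([(yi - bi) % P for yi, bi in zip(Y, B)])  # open y - b
# Local step: z = c + d*b + e*a + d*e, computed on shares
# (the public constant d*e is added by one designated party only).
Z = [(ci + d * bi + e * ai + (d * e if i == 0 else 0)) % P
     for i, (ai, bi, ci) in enumerate(zip(A, B, C))]
print(reveal(Z) == (x * y) % P)  # True
```

Since d and e are masked by the uniformly random a and b, opening them reveals nothing about x and y; the entire cost of multiplication is moved into generating the triple offline.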

*18:17* [Pub][ePrint]
Sequential Aggregate Signatures with Short Public Keys: Design, Analysis and Implementation Studies, by Kwangsu Lee and Dong Hoon Lee and Moti Yung
The notion of an aggregate signature, motivated by applications, enables any user to compress different signatures, signed by different signers on different messages, into a short signature. A sequential aggregate signature, in turn, is a special kind of aggregate signature that only allows a signer to add his signature to an aggregate signature in sequential order. This latter scheme has applications in diversified settings, such as reducing the bandwidth of certificate chains and securing routing protocols. Lu, Ostrovsky, Sahai, Shacham, and Waters presented the first sequential aggregate signature scheme in the standard (non-idealized, no random oracle) model. The size of their public key, however, is quite large (i.e., the number of group elements is proportional to the security parameter), and they therefore posed as an open problem the construction of such a scheme with short keys. Schröder recently proposed a sequential aggregate signature (SAS) scheme with short public keys using the Camenisch-Lysyanskaya signature scheme, but its security is only proven under an interactive assumption (which is considered a relaxed notion of security). In this paper, we propose the first sequential aggregate signature scheme with short public keys (i.e., a constant number of group elements) in prime-order (asymmetric) bilinear groups which is secure under static assumptions in the standard model. Further, our scheme requires only a constant number of pairing operations per message for signing and for verification. Technically, we start with a public-key signature scheme based on the recent dual system encryption technique of Lewko and Waters. This technique cannot directly give an aggregate signature scheme since, as we observed, additional elements must be published in the public key to support aggregation (and these may, in fact, invalidate the security arguments). Thus, our construction carefully augments the dual system technique to support a sequential aggregate signature scheme via randomized verification. We further implemented our scheme and conducted a performance study and implementation optimization.

*18:17* [Pub][ePrint]
Faster implementation of scalar multiplication on Koblitz curves, by Diego F. Aranha and Armando Faz-Hernández and Julio López and Francisco Rodríguez-Henríquez
We design a state-of-the-art software implementation of field and elliptic curve arithmetic on standard Koblitz curves at the 128-bit security level. Field arithmetic is carefully crafted by using the best formulae and implementation strategies available, together with the increasingly common native support for binary field arithmetic in modern desktop computing platforms. The $i$-th power of the Frobenius automorphism on Koblitz curves is exploited to obtain new and faster interleaved versions of the well-known $\tau$NAF scalar multiplication algorithm. The $\tau^{\lfloor m/3 \rfloor}$ and $\tau^{\lfloor m/4 \rfloor}$ maps are employed to create analogues of the 3- and 4-dimensional GLV decompositions; in general, the $\lfloor m/s \rfloor$-th power of the Frobenius automorphism is applied as an analogue of an $s$-dimensional GLV decomposition. The effectiveness of these techniques is illustrated by timing the scalar multiplication operation for fixed, random, and multiple points. To our knowledge, our library was the first to compute a random-point scalar multiplication in fewer than $10^5$ clock cycles among all curves, with or without endomorphisms, defined over binary or prime fields. The results of our optimized implementation suggest a trade-off between speed, compliance with the published standards, and side-channel protection. Finally, we estimate the performance of curve-based cryptographic protocols instantiated using the proposed techniques and compare our results to related work.
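The $\tau$NAF recoding underlying these algorithms can be sketched directly: digits in {-1, 0, 1} are produced by repeated division by $\tau$, using the Koblitz-curve relation $\tau^2 = \mu\tau - 2$. A minimal sketch for $\mu = -1$ (a curve-parameter assumption made here for concreteness); the round-trip check re-evaluates the digits in $\mathbb{Z}[\tau]$:

```python
MU = -1  # curve parameter mu = (-1)^(1-a); MU = -1 assumes a = 0 for this sketch

def tnaf(n):
    """tau-adic NAF digits of n (least significant first), each in {-1, 0, 1}."""
    r0, r1, digits = n, 0, []
    while r0 != 0 or r1 != 0:
        if r0 % 2:
            u = 2 - ((r0 - 2 * r1) % 4)  # pick +-1 so the next digit is 0
            r0 -= u
        else:
            u = 0
        digits.append(u)
        # Divide (r0 + r1*tau) by tau, using tau^2 = MU*tau - 2.
        r0, r1 = r1 + MU * (r0 // 2), -(r0 // 2)
    return digits

def eval_tau(digits):
    # Horner evaluation in Z[tau]; an element is represented as (c0, c1) = c0 + c1*tau.
    c0, c1 = 0, 0
    for u in reversed(digits):
        c0, c1 = -2 * c1 + u, c0 + MU * c1  # multiply by tau, then add the digit
    return c0, c1

for n in range(1, 200):
    assert eval_tau(tnaf(n)) == (n, 0)
print("tauNAF round-trips for n = 1..199")
```

In a real implementation the scalar is first reduced modulo $(\tau^m - 1)$ so the digit string has length about $m$; that partial-reduction step is omitted here.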