*21:17* [Pub][ePrint]
On the Achievability of Simulation-Based Security for Functional Encryption, by Angelo De Caro and Vincenzo Iovino and Abhishek Jain and Adam O'Neill and Omer Paneth and Giuseppe Persiano
This work attempts to clarify to what extent simulation-based security (SIM-security) is achievable for functional encryption (FE), and how it relates to the weaker indistinguishability-based security (IND-security). Our main result is a compiler that transforms any FE scheme for the general circuit functionality (which we denote by circuit-FE) meeting IND-security into a circuit-FE scheme meeting SIM-security, where:

- In the random oracle model, the resulting scheme is secure for an unbounded number of encryption and key queries, which is the strongest security level one can ask for.

- In the standard model, the resulting scheme is secure for a bounded number of encryption and non-adaptive key queries, but an *unbounded* number of adaptive key queries. This matches known impossibility results and improves upon Gorbunov et al. [CRYPTO'12], which is secure for a *bounded* number of adaptive key queries.
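To make the IND-security notion the compiler starts from concrete, the sketch below (our own illustration, not code from the paper) checks the standard admissibility condition of the IND game: the adversary may only request keys for functions that agree on the two challenge messages, since otherwise distinguishing them would be trivial.

```python
# Sketch of the admissibility condition in the IND-security game for FE.
# Illustration of the standard definition only, not code from the paper.

def ind_admissible(key_queries, m0, m1):
    """An IND adversary is only counted as valid if every function f for
    which it requested a secret key satisfies f(m0) == f(m1); otherwise
    decrypting the challenge with sk_f would reveal the bit trivially."""
    return all(f(m0) == f(m1) for f in key_queries)

# Example: challenge messages that agree on parity but not on value.
m0, m1 = 6, 8
parity = lambda m: m % 2          # parity(m0) == parity(m1) -> allowed
identity = lambda m: m            # identity(m0) != identity(m1) -> disallowed

print(ind_admissible([parity], m0, m1))            # True
print(ind_admissible([parity, identity], m0, m1))  # False
```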

Our compiler is inspired by the celebrated Feige-Lapidot-Shamir paradigm [FOCS'90] for obtaining zero-knowledge proof systems from witness-indistinguishable proof systems.

As it is currently unknown whether circuit-FE meeting IND-security exists, the purpose of this result is to establish that it remains a good target for future research despite known deficiencies of IND-security [Boneh et al. -- TCC'11, O'Neill -- ePrint '10].

We also give a tailored construction of SIM-secure hidden vector encryption (HVE) in composite-order bilinear groups.
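The predicate computed by HVE is simple to state: ciphertexts carry an attribute vector, keys carry a pattern that may contain wildcards, and decryption succeeds iff the two agree on every non-wildcard position. A minimal sketch of that predicate (our own illustration; `STAR` is a hypothetical wildcard marker, and the paper's construction realizes this check inside composite-order bilinear groups):

```python
# Sketch of the HVE predicate: a key pattern matches an attribute vector
# iff every non-wildcard position agrees. Illustrative only.
STAR = object()  # wildcard marker (our own convention)

def hve_match(pattern, attributes):
    """Return True iff `attributes` matches `pattern` on all
    non-wildcard positions."""
    assert len(pattern) == len(attributes)
    return all(p is STAR or p == a for p, a in zip(pattern, attributes))

print(hve_match([1, STAR, 0], [1, 1, 0]))  # True: wildcard ignores slot 2
print(hve_match([1, STAR, 0], [0, 1, 0]))  # False: slot 1 mismatches
```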

Finally, we revisit the known negative results for SIM-secure FE, extending them to natural weakenings of the security definition and thus providing essentially a full picture of the (in)achievability of SIM-secure FE.

*21:17* [Pub][ePrint]
Structural Evaluation of AES and Chosen-Key Distinguisher of 9-round AES-128, by Pierre-Alain Fouque and Jérémy Jean and Thomas Peyrin
While the symmetric-key cryptography community now has good experience in building secure and efficient fixed permutations, it remains an open problem how to design a key schedule for block ciphers, as shown by the numerous candidates broken in the related-key model or in a hash function setting. Provable security against differential and linear cryptanalysis in the related-key scenario is an important step towards a better understanding of key-schedule construction.

Using a structural analysis, we show that the full AES-128 cannot be proven secure unless the exact coefficients of the MDS matrix and the S-box differential properties are taken into account, since its structure is vulnerable to a related-key differential attack. We then exhibit a chosen-key distinguisher for AES-128 reduced to 9 rounds, which solves an open problem of the symmetric community. We obtain these results by revisiting algorithmic theory and graph-based ideas to compute all the best differential characteristics in SPN ciphers, with a special focus on AES-like ciphers in the related-key setting. We use a variant of Dijkstra's algorithm to find the best related-key attacks on SPN ciphers, with an algorithm linear in the number of rounds.
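The search described above can be pictured as a shortest-path problem: nodes are (round, truncated difference pattern) pairs and edge weights count the active S-boxes a transition costs, so a minimum-weight path corresponds to a best differential characteristic. The toy graph below (entirely our own example, not the real AES transition structure) shows the Dijkstra skeleton of such a search:

```python
import heapq

# Toy shortest-path model of the characteristic search: each node stands
# for a (round, truncated difference pattern) pair, each edge weight for
# the number of active S-boxes the transition costs. Made-up graph.
graph = {
    "r0:1000": {"r1:1100": 1, "r1:0010": 1},
    "r1:1100": {"r2:1111": 2},
    "r1:0010": {"r2:1111": 1},
    "r2:1111": {},
}

def best_characteristic(graph, start):
    """Dijkstra: minimum total number of active S-boxes needed to reach
    every reachable pattern from `start`."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry
        for nxt, w in graph[node].items():
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return dist

print(best_characteristic(graph, "r0:1000")["r2:1111"])  # 2, via r1:0010
```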

*21:17* [Pub][ePrint]
Security in $O(2^n)$ for the Xor of Two Random Permutations -- Proof with the standard $H$ technique --, by Jacques Patarin
Xoring two permutations is a very simple way to construct pseudorandom functions from pseudorandom permutations. In [P08a], it is proved that we have security against CPA-2 attacks when $m \ll O(2^n)$, where $m$ is the number of queries and $n$ is the number of bits of the inputs and outputs of the bijections. In this paper, we obtain similar (but slightly different) results by using the "standard $H$ technique" instead of the "$H_{\sigma}$ technique". It will be interesting to compare the two techniques, their similarities, and the differences between the proofs and the results.
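For tiny parameters the construction is easy to play with. The sketch below (our own toy, with $n = 3$ bits) samples two random permutations and XORs them pointwise: each $P_i$ is a bijection, but $F(x) = P_1(x) \oplus P_2(x)$ is generally not one, which is exactly what lets it behave like a random function rather than a permutation.

```python
import random

# Toy instance of F(x) = P1(x) XOR P2(x) with n = 3 bits (domain size 8).
# P1 and P2 stand in for the pseudorandom permutations; here they are
# simply uniformly random permutations of {0, ..., 7}.
rng = random.Random(1)
n = 3
N = 1 << n
P1 = rng.sample(range(N), N)
P2 = rng.sample(range(N), N)

def F(x):
    return P1[x] ^ P2[x]

values = [F(x) for x in range(N)]
print(sorted(P1) == list(range(N)))     # True: P1 is a permutation
print(all(0 <= v < N for v in values))  # True: F maps into the same range
```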

*18:55* [Job][New]
1 post-doc and 2 PhD positions, *University of Luxembourg*
PhD and Post-doc Positions in Computer Security

The University of Luxembourg, in collaboration with the Luxembourg Government (CTIE), will start a new research project “Supporting e-Democracy”, for which we have two PhD and one Research Associate (post-doc) positions available. The positions are within the APSIA (Applied Security and Information Assurance) research group (http://wwwen.uni.lu/snt/research/apsia) led by Prof. Dr. P. Y. Ryan.

The successful candidates will be working in an exciting, international and multicultural environment in the heart of Europe.

Project Description

The successful candidates will be working within the research project “Supporting e-Democracy”, a collaboration between the University of Luxembourg and the Luxembourg Government (CTIE).

The research will focus on (a) the design and security analysis of accurate and robust communication systems in specific electoral systems, (b) the design of reliable and secure computer-assisted counting of paper ballots, and (c) the design and analysis of verifiable, computer-assisted voting systems and a broader e-democracy platform.

Candidates Profile

The candidates are expected to have:

• A previous degree in computer science or related subject;

• A proven (theoretical and practical) interest in security;

• Knowledge of Network and System security;

• Fluent written and oral English skills;

Applications

The candidates must apply online at the following addresses:

PhD positions (open until July 31st, 2013): http://recruitment.uni.lu/en/details.html?id=QMUFK026203F3VBQB7V7VV4S8&nPostingID=2253&nPostingTargetID=2935&mask=karriereseiten&lg=UK

Research Associate position (open until June 20th, 2013): http://recruitment.uni.lu/en/details.html?id=QMUFK026203F3VBQB7V7VV4S8&nPostingID=2254&nPostingTargetID=2893&mask=karriereseiten&lg=UK

For further inquiries, please contact Prof. Dr. Peter Y. A. Ryan.

*15:17* [Pub][ePrint]
Time-Optimal Interactive Proofs for Circuit Evaluation, by Justin Thaler
Several research teams have recently been working toward the development of practical general-purpose protocols for verifiable computation. These protocols enable a computationally weak verifier to offload computations to a powerful but untrusted prover, while providing the verifier with a guarantee that the prover performed the requested computations correctly. Despite substantial progress, existing implementations require further improvements before they become practical for most settings. The main bottleneck is typically the extra effort required by the prover to return an answer with a guarantee of correctness, compared to returning an answer with no guarantee.

We describe a refinement of a powerful interactive proof protocol due to Goldwasser, Kalai, and Rothblum. Cormode, Mitzenmacher, and Thaler showed how to implement the prover in this protocol in time $O(S \log S)$, where $S$ is the size of an arithmetic circuit computing the function of interest. Our refinements apply to circuits with sufficiently "regular" wiring patterns; for these circuits, we bring the runtime of the prover down to $O(S)$. That is, our prover can evaluate the circuit with a guarantee of correctness, with only a constant-factor blowup in work compared to evaluating the circuit with no guarantee.
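At the heart of the Goldwasser-Kalai-Rothblum protocol is the sum-check protocol. The sketch below is our own minimal version, restricted to multilinear polynomials so that each round's prover message is a line determined by two evaluations; it runs an honest prover and the verifier's round checks for a claimed sum over the Boolean cube.

```python
import random
from itertools import product

P = 2**31 - 1  # prime modulus (toy choice)

def sum_check(g, n, rng=random.Random(0)):
    """Honest-prover sum-check for a multilinear polynomial g over F_P.
    The round-i prover message is g_i(t) = sum over boolean suffixes of
    g(r_1..r_{i-1}, t, suffix); the verifier checks g_i(0) + g_i(1)
    against its running claim, then fixes a random field element r_i."""
    claim = sum(g(*bits) for bits in product((0, 1), repeat=n)) % P
    r, expected = [], claim
    for i in range(n):
        def g_i(t):
            return sum(g(*r, t, *rest)
                       for rest in product((0, 1), repeat=n - i - 1)) % P
        s0, s1 = g_i(0), g_i(1)
        if (s0 + s1) % P != expected:
            return False  # verifier rejects
        ri = rng.randrange(P)
        r.append(ri)
        # g_i is linear in t for multilinear g, so two points determine it
        expected = (s0 + (s1 - s0) * ri) % P
    # final check costs the verifier one evaluation of g at a random point
    return expected == g(*r) % P

# Example: the multilinear polynomial 2*x1*x2 + x3 (sum over {0,1}^3 is 8).
print(sum_check(lambda x1, x2, x3: 2 * x1 * x2 + x3, 3))  # True
```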

We argue that our refinements capture a large class of circuits, and we complement our theoretical results with experiments on problems such as matrix multiplication and determining the number of distinct elements in a data stream. Experimentally, our refinements yield a 200x speedup for the prover over the implementation of Cormode et al., and our prover is less than 10x slower than a C++ program that simply evaluates the circuit. Along the way, we describe a special-purpose protocol for matrix multiplication that is of interest in its own right.
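The special-purpose matrix-multiplication protocol is a good example of the asymmetry these systems exploit: checking a product is far cheaper than computing it. A classical illustration of the same idea (Freivalds' check, which is not the protocol from the paper) verifies a claimed product C = A·B in O(n^2) time by testing A(Br) = Cr at a random vector r:

```python
import random

P = 2**31 - 1  # prime field: a wrong C is caught with probability 1 - 1/P

def matvec(M, v):
    """Matrix-vector product over F_P, O(n^2) time."""
    return [sum(m * x for m, x in zip(row, v)) % P for row in M]

def freivalds(A, B, C, rng):
    """Freivalds' check of the claim C == A*B: compare C*r with A*(B*r)
    at a random vector r, using only three matrix-vector products."""
    r = [rng.randrange(P) for _ in C[0]]
    return matvec(C, r) == matvec(A, matvec(B, r))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]   # the true product
C_bad  = [[19, 22], [43, 51]]   # one corrupted entry

print(freivalds(A, B, C_good, random.Random(0)))  # True: correct products always pass
print(freivalds(A, B, C_bad, random.Random(0)))   # False, except with prob. 1/P over r
```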

Our final contribution is the design of an interactive proof protocol targeted at general data parallel computation. Compared to prior work, this protocol can more efficiently verify complicated computations as long as that computation is applied independently to many different pieces of data.