International Association for Cryptologic Research

# IACR News Central

Here you can see all recent updates to the IACR webpage.

11 February 2016
We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth $d$ circuit can have as many as $2^d$ inputs). This is in contrast to previous LWE-based schemes where the length of the public parameters has to grow linearly with the maximal attribute length.

We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the complexity leveraging technique is not applicable for unbounded attributes.)

We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to "delay" this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
We describe a public key encryption that is IND-CPA secure under the Learning with Errors (LWE) assumption, but that is not circular secure for arbitrary length cycles. Previous separation results for cycle length greater than 2 require the use of indistinguishability obfuscation, which is not currently realizable under standard assumptions.
ePrint Report Interactive Oracle Proofs Eli Ben-Sasson, Alessandro Chiesa, Nicholas Spooner
We initiate the study of a proof system model that naturally combines two well-known models: interactive proofs (IPs) and probabilistically-checkable proofs (PCPs). An *interactive oracle proof* (IOP) is an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to the prover's messages, and may probabilistically query them.

IOPs simultaneously generalize IPs and PCPs. Thus, IOPs retain the expressiveness of PCPs, capturing NEXP rather than only PSPACE, and also the flexibility of IPs, allowing multiple rounds of communication with the prover. These degrees of freedom allow for more efficient "PCP-like" interactive protocols, because the prover does not have to compute the parts of a PCP that are not requested by the verifier.
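The oracle-access idea can be made concrete with a toy sketch (not any protocol from the paper): the prover produces a long message, and the verifier probes only a few random positions of it instead of reading it in full. The encoding function `symbol` and all names here are illustrative assumptions.

```python
import hashlib
import random

LENGTH = 1024  # length of the prover's message

def symbol(statement: str, i: int) -> int:
    # hypothetical encoding of position i of the prover's message
    return hashlib.sha256(f"{statement}|{i}".encode()).digest()[0]

def honest_prover(statement: str) -> list:
    # the prover writes out the whole message
    return [symbol(statement, i) for i in range(LENGTH)]

def verifier(statement: str, oracle, num_queries: int = 8) -> bool:
    # oracle access: the verifier reads only num_queries positions
    queries = random.sample(range(LENGTH), num_queries)
    return all(oracle(i) == symbol(statement, i) for i in queries)

msg = honest_prover("x in L")
accepted = verifier("x in L", lambda i: msg[i])
```

The point of the model is exactly this asymmetry: the prover's work is proportional to the full message, while the verifier's is proportional only to the number of queries.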

As a first investigation into IOPs, we offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against *state restoration attacks*, a class of rewinding attacks on the IOP verifier. Our compiler preserves zero knowledge, proof of knowledge, and time complexity of the underlying IOP. As an application, we obtain blackbox unconditional ZK proofs in the random oracle model with quasilinear prover and polylogarithmic verifier, improving on the result of Ishai et al.\ (2015).

Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's (standard) soundness and round complexity; and describe a simple adversarial strategy that is optimal across all state restoration attacks.

Our compiler can be viewed as a generalization of the Fiat--Shamir paradigm for public-coin IPs (CRYPTO~'86), and of the "CS proof" constructions of Micali (FOCS~'94) and Valiant (TCC~'08) for PCPs. Our analysis of the compiler gives, in particular, a unified understanding of all of these constructions, and also motivates the study of state restoration attacks, not only for IOPs, but also for IPs and PCPs.
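The Fiat--Shamir-style idea behind such compilers can be sketched in a few lines: each public-coin verifier challenge is replaced by a hash of the transcript so far, removing the interaction. This is a generic sketch of the paradigm, not the paper's compiler; the round messages are placeholders.

```python
import hashlib

def derive_challenge(transcript: bytes, round_index: int) -> int:
    # challenge = hash of everything sent so far, domain-separated by round
    digest = hashlib.sha256(transcript + round_index.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big")

transcript = b""
challenges = []
for rnd, prover_msg in enumerate([b"msg0", b"msg1", b"msg2"]):
    transcript += prover_msg                     # prover appends its message
    challenges.append(derive_challenge(transcript, rnd))  # hash replaces coin flips
```

Because anyone can recompute the hashes, the final transcript can be checked without talking to the prover; soundness then hinges on the kind of rewinding (state-restoration) attacks the paper analyzes.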
A memory-hard function (MHF) $f$ is equipped with {\em space cost} $\sigma$ and {\em time cost} $\tau$ parameters such that repeatedly computing $f_{\sigma,\tau}$ on an application specific integrated circuit (ASIC) is not economically advantageous relative to a general purpose computer. Technically, we would like that any (generalized) circuit for evaluating an iMHF $f_{\sigma,\tau}$ has area $\times$ time (AT) complexity $\Theta(\sigma^2 * \tau)$. A data-independent MHF (iMHF) has the added property that it can be computed with almost optimal memory and time complexity by an algorithm which accesses memory in a pattern independent of the input value. Such functions can be specified by fixing a directed acyclic graph (DAG) $G$ on $n=\Theta(\sigma * \tau)$ nodes representing its computation graph.

In this work we develop new tools for analyzing iMHFs. First we define and motivate a new complexity measure capturing the amount of {\em energy} (i.e. electricity) required to compute a function. We argue that, in practice, this measure is at least as important as the more traditional AT-complexity. Next we describe an algorithm $\mathcal{A}$ for repeatedly evaluating an iMHF based on an arbitrary DAG $G$. We upperbound both its energy and AT complexities per instance evaluated in terms of a certain combinatorial property of $G$.

Next we instantiate our attack for several general classes of DAGs which include those underlying many of the most important iMHF candidates in the literature. In particular, we obtain the following results which hold for all choices of parameters $\sigma$ and $\tau$ (and thread-count) such that $n=\sigma*\tau$.

1) The Catena-Dragonfly function of~\cite{forler2013catena} has AT and energy complexities $O(n^{1.67})$.

2) The Catena-Butterfly function of~\cite{forler2013catena} has complexities in $O(n^{1.67})$.

3) The Double-Buffer and the Linear functions of~\cite{CBS16} both have complexities in $O(n^{1.67})$.

4) The Argon2i function of~\cite{Argon2} (winner of the Password Hashing Competition~\cite{PHC}) has complexities $O(n^{7/4}\log(n))$.

5) The Single-Buffer function of~\cite{CBS16} has complexities $O(n^{7/4}\log(n))$.

6) \emph{Any} iMHF can be computed by an algorithm with complexities $O(n^2/\log^{1-\epsilon}(n))$ for all $\epsilon > 0$. In particular when $\tau=1$ this shows that the goal of constructing an iMHF with AT-complexity $\Theta(\sigma^2 * \tau)$ is unachievable.

Along the way we prove a lemma upper-bounding the depth-robustness of any DAG which may prove to be of independent interest.
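The DAG model of computation above can be illustrated with a toy data-independent MHF: node $i$'s label is the hash of its parents' labels, and the edge structure (the predecessor plus a fixed tree-like back-edge) does not depend on the input. This is only an illustrative sketch under assumed parameters, not any of the candidates analyzed in the paper.

```python
import hashlib

def toy_imhf(password: bytes, n: int = 64) -> bytes:
    # label of node 0 is the hash of the input
    labels = [hashlib.sha256(password).digest()]
    for i in range(1, n):
        # parents of node i: node i-1 and node (i-1)//2 -- both indices are
        # fixed in advance, so the memory access pattern is data-independent
        parent, back = labels[i - 1], labels[(i - 1) // 2]
        labels.append(hashlib.sha256(parent + back).digest())
    return labels[-1]  # the final label is the output

tag = toy_imhf(b"hunter2")
```

The honest evaluator keeps all $n$ labels in memory; the attacks in the paper exploit the graph's combinatorial structure to recompute labels on the fly and thereby lower the AT and energy costs.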
Event date: 29 May to 2 June 2016
10 February 2016
Job Posting Assistant Professor Télécom ParisTech, Paris
The mission of this permanent position is to conduct research on and teach hardware security in the Digital Electronics group of Telecom ParisTech. It also includes the development of industrial and academic collaborations.

Required competences:
• Theoretical and practical knowledge in the field of security for integrated circuits and embedded systems
• Architectures and HW/SW design methods of circuits and embedded systems
• Significant experience in research and teaching
• Fluency in English

Closing date for applications: 14 April 2016

Contact: Pr. Jean-Luc Danger
jean-luc.danger (at) telecom-paristech.fr
+33 1 45 81 81 17

We introduce the notion of an \emph{Extremely Lossy Function} (ELF). An ELF is a family of functions with an image size that is tunable anywhere from injective to having a polynomial-sized image. Moreover, for any efficient adversary, for a sufficiently large polynomial $r$ (necessarily chosen to be larger than the running time of the adversary), the adversary cannot distinguish the injective case from the case of image size $r$.
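A toy (completely insecure) sketch can illustrate the ELF *interface*: one family of functions with a parameter $r$ tuning the image size from injective ($r = 0$ here, an assumed convention) down to polynomial size $r$. A real ELF additionally makes the two modes computationally indistinguishable; this sketch shows only the tunable image size.

```python
import hashlib

def elf(x: int, r: int = 0) -> int:
    if r == 0:
        return x  # injective mode: identity on the domain
    # lossy mode: hash then reduce, so the image has at most r elements
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % r

injective_image = {elf(x) for x in range(1000)}
lossy_image = {elf(x, r=16) for x in range(1000)}
```

In the real notion, distinguishing the two modes must be harder than the adversary's running time allows, which is why $r$ is chosen as a polynomial larger than that running time.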

We develop a handful of techniques for using ELFs, and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially-many hardcore bits for any one-way function. Such applications were previously known from strong knowledge assumptions --- for example, polynomially-many hardcore bits were only known from differing-inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with \emph{output intractability}, a new notion we define that may be useful for generating common reference strings.

Next, we give a construction of ELFs relying on the \emph{exponential} hardness of the decisional Diffie-Hellman problem, which is plausible in pairing-based groups. Combining with the applications above, our work gives several practical constructions relying on qualitatively different --- and arguably better --- assumptions than prior works.
We consider the related notions of two-prover and of relativistic commitment schemes. In recent work, Lunghi et al. proposed a new relativistic commitment scheme with a multi-round sustain phase that keeps the binding property alive as long as the sustain phase is running. They prove security of their scheme against classical attacks; however, the proven bound on the error parameter is very weak: it blows up doubly exponentially in the number of rounds.

In this work, we give a new analysis of the multi-round scheme of Lunghi et al., and we show a linear growth of the error parameter instead (also considering classical attacks only). Our analysis is based on a new composition theorem for two-prover commitment schemes. The proof of our composition theorem is based on a better understanding of the binding property of two-prover commitments that we provide in the form of new definitions and relations among them. As an additional consequence of these new insights, our analysis is actually with respect to a strictly stronger notion of security than considered by Lunghi et al.
In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) {\em auxiliary information}, here we consider the scenario where adversarial provers are given {\em access to an oracle}. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold.

First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs.

Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) in which we identify an issue that arises when proving security under the standard proof of knowledge definition of SNARKs.

Third -- and this is the most prominent part of our work -- we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming collision-resistant hash functions, there do not exist O-SNARKs in the standard model for every oracle family. Next, we study the interesting case of {\em signing} oracles. We give a negative result showing that even O-SNARKs for every signing oracle family do not exist. On the positive side, instead, we show that, when considering signature schemes with appropriate restrictions on the message length, O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.
ePrint Report Scalable and Secure Logistic Regression via Homomorphic Encryption Yoshinori Aono, Takuya Hayashi, Le Trieu Phong, Lihua Wang
Logistic regression is a powerful machine learning tool to classify data. When dealing with sensitive data such as private or medical information, care is necessary. In this paper, we propose a secure system for protecting both the training and predicting data in logistic regression via homomorphic encryption. Perhaps surprisingly, despite the non-polynomial tasks of training and predicting in logistic regression, we show that only additively homomorphic encryption is needed to build our system. Indeed, we instantiate our system with Paillier, LWE-based, and ring-LWE-based encryption schemes, highlighting the merits and demerits of each instance. Our system is very scalable in both dataset size and dimension, tolerating large sizes of, for example, hundreds of millions ($10^8$) of records. Besides examining the costs of computation and communication, we carefully test our system over real datasets to demonstrate its accuracy and other related measures such as F-score and AUC.
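The additive homomorphism the system relies on can be shown with a minimal textbook Paillier sketch (tiny, insecure parameters chosen here for illustration): the product of two ciphertexts decrypts to the sum of the plaintexts, which is what lets a server aggregate encrypted training statistics without seeing them.

```python
import math
import random

p, q = 101, 103            # toy primes; real deployments use primes of >1024 bits
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                  # standard choice of generator

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# homomorphic addition: multiply ciphertexts, decrypt the sum
c = (enc(20) * enc(22)) % n2
```

Any linear aggregation step of training (e.g., summing per-record gradient contributions) can thus run over ciphertexts; only the decryptor learns the total.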
Informally, a public-key encryption scheme is \emph{$k$-circular secure} if a cycle of~$k$ encrypted secret keys $(\pkcenc_{\pk_{1}}(\sk_{2}), \pkcenc_{\pk_{2}}(\sk_{3}), \ldots, \pkcenc_{\pk_{k}}(\sk_{1}))$ is indistinguishable from encryptions of zeros. Circular security has applications in a wide variety of settings, ranging from security of symbolic protocols to fully homomorphic encryption. A fundamental question is whether standard security notions like IND-CPA/CCA imply $k$-circular security.
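The *shape* of such a key cycle can be sketched with a toy symmetric stream cipher standing in for the paper's public-key scheme (purely to show the indexing; this toy is neither secure nor a counterexample):

```python
import hashlib
import os

def stream_enc(key: bytes, msg: bytes) -> bytes:
    # toy hash-derived one-time pad; XOR makes encryption its own inverse
    pad = hashlib.sha256(key).digest()[: len(msg)]
    return bytes(a ^ b for a, b in zip(pad, msg))

k = 5
keys = [os.urandom(16) for _ in range(k)]
# the k-cycle: Enc_{key_i}(key_{(i+1) mod k}) for i = 0..k-1
cycle = [stream_enc(keys[i], keys[(i + 1) % k]) for i in range(k)]
```

Circular security asks that this list of ciphertexts be indistinguishable from encryptions of zeros; the paper builds schemes where it demonstrably is not.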

For the case $k=2$, several works over the past years have constructed counterexamples---i.e., schemes that are CPA or even CCA secure but not $2$-circular secure---under a variety of well-studied assumptions (SXDH, decision linear, and LWE). However, for $k > 2$ the only known counterexamples are based on strong general-purpose obfuscation assumptions.

In this work we construct $k$-circular security counterexamples for any $k \geq 2$ based on (ring-)LWE. Specifically: \begin{itemize} \item for any constant $k=O(1)$, we construct a counterexample based on $n$-dimensional (plain) LWE for $\poly(n)$ approximation factors; \item for any $k=\poly(\lambda)$, we construct one based on degree-$n$ ring-LWE for at most subexponential $\exp(n^{\varepsilon})$ factors. \end{itemize} Moreover, both schemes are $k'$-circular insecure for $2 \leq k' \leq k$.

Notably, our ring-LWE construction does not immediately translate to an LWE-based one, because matrix multiplication is not commutative. To overcome this, we introduce a new "tensored" variant of LWE which provides the desired commutativity, and which we prove is actually equivalent to plain LWE.
ePrint Report Fast Multiparty Multiplications from shared bits Ivan Damgård, Tomas Toft, Rasmus Winther Zakarias
We study the question of securely multiplying $N$-bit integers that are stored in binary representation, in the context of protocols for dishonest majority with preprocessing. We achieve communication complexity $O(N)$ using only secure operations over small fields $F_2$ and $F_p$ with $\log(p) \approx \log(N)$. For semi-honest security we achieve communication $O(N)\cdot 2^{O(\log^*(N))}$ using only secure operations over $F_2$. This improves over the straightforward solution of simulating a Boolean multiplication circuit, both asymptotically and in practice.
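The setting can be sketched with a toy: an $N$-bit integer stored as XOR (additive over $F_2$) shares of its bits between two parties, with reconstruction by XORing the shares. The paper's contribution, multiplying such values with $O(N)$ communication, is far beyond this sketch.

```python
import random

def share_bits(x: int, nbits: int):
    # split each bit of x into two XOR shares
    bits = [(x >> i) & 1 for i in range(nbits)]
    share0 = [random.randint(0, 1) for _ in range(nbits)]
    share1 = [b ^ s for b, s in zip(bits, share0)]
    return share0, share1

def reconstruct(share0, share1) -> int:
    # XOR the shares bitwise and reassemble the integer
    return sum((a ^ b) << i for i, (a, b) in enumerate(zip(share0, share1)))

s0, s1 = share_bits(42, 16)
```

Each share on its own is a uniformly random bit string, so neither party alone learns anything about $x$.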
Private set operation (PSO) protocols provide a natural way of securely performing operations on data sets, such that crucial details of the input sets are not revealed. Such protocols have an ever-increasing number of practical applications, particularly when implementing privacy-preserving data mining schemes. Protocols for computing private set operations have been prevalent in multi-party computation literature over the past decade, and in the case of private set intersection (PSI), have become practically feasible to run in real applications. In contrast, other set operations such as union have received less attention from the research community, and the few existing designs are often limited in their feasibility. In this work we aim to fill this gap, and present a new technique using Bloom filter data structures and additive homomorphic encryption to develop the first private set union (PSU) protocol with both linear computation and communication complexities. The overheads of our protocol scale better with increasing set sizes than existing PSU designs and we thus provide the first potentially practical realisation of the PSU functionality. Moreover, we show how to adapt this protocol to give novel ways of computing PSI and private set intersection/union cardinality (PSI/PSU-CA). The resulting schemes have complexities that are comparable to current solutions that are considered practical. We therefore present an adaptable way for efficiently computing the main set operations, with linear complexities, and with the possibility of extending to compute other more complex functionalities. Our constructions can be proven secure with respect to both semi-honest and malicious adversaries.
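A minimal Bloom filter sketch shows the data structure underpinning the protocol (the homomorphic-encryption layer that makes it private is omitted; parameters here are illustrative):

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)   # all positions start at 0

    def _positions(self, item: bytes):
        # derive k positions by hashing the item with k domain separators
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def contains(self, item: bytes) -> bool:
        # no false negatives; false positives occur with small probability
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for x in [b"alice", b"bob"]:
    bf.add(x)
```

Because membership reduces to checking a fixed number of bit positions, combining filters with additively homomorphic encryption yields set operations with computation and communication linear in the set sizes.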
ePrint Report Fully Anonymous Transferable Ecash Hitesh Tewari, Arthur Hughes
Numerous electronic cash schemes have been proposed over the years, ranging from Ecash and Mondex to Millicent. However, none of these schemes has been adopted by financial institutions as an alternative to traditional paper-based currency. The Ecash scheme was the closest to a system that mimicked fiat currency, with the property that it provided anonymity for users when buying coins from the Bank and spending them at a merchant's premises (from the Bank's perspective). However, Ecash lacked one crucial element present in current fiat-based systems, i.e., the ability to continuously spend or transfer coins within the system. In this paper we propose an extension to the Ecash scheme which allows for the anonymous transfer of coins between users without the involvement of a trusted third party. We make use of a powerful technique which allows for distributed decision making within the network - namely the Bitcoin blockchain protocol. Combined with the proof-of-work technique and the classical discrete logarithm problem, we are able to continuously reuse coins within our system, and also prevent double-spending of coins without revealing the identities of the users.
ePrint Report Access Control Encryption: Enforcing Information Flow with Cryptography Ivan Damgård, Helene Flyvholm Haagh, Claudio Orlandi
We initiate the study of Access Control Encryption (ACE), a novel cryptographic primitive that allows fine-grained access control, by giving different rights to different users not only in terms of which messages they are allowed to receive, but also which messages they are allowed to send.

Classical examples of security policies for information flow are the well-known Bell-LaPadula [BL73] and Biba [Bib75] models: in a nutshell, the Bell-LaPadula model assigns roles to every user in the system (e.g., public, secret and top-secret). A user's role specifies which messages the user is allowed to receive (i.e., the no read-up rule, meaning that users with public clearance should not be able to read messages marked as secret or top-secret) but also which messages the user is allowed to send (i.e., the no write-down rule, meaning that a user with top-secret clearance should not be able to write messages marked as secret or public).
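The two Bell-LaPadula rules described above amount to simple comparisons on an ordered set of clearance levels, sketched here (the level names follow the example in the text):

```python
LEVELS = {"public": 0, "secret": 1, "top-secret": 2}

def may_read(user_level: str, msg_level: str) -> bool:
    # "no read-up": a user may only read messages at or below their clearance
    return LEVELS[user_level] >= LEVELS[msg_level]

def may_write(user_level: str, msg_level: str) -> bool:
    # "no write-down": a user may only write messages at or above their clearance
    return LEVELS[msg_level] >= LEVELS[user_level]
```

ACE's goal is to enforce exactly such send/receive restrictions cryptographically, rather than by a trusted reference monitor checking these predicates.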

To the best of our knowledge, no existing cryptographic primitive allows for even this simple form of access control, since no existing cryptographic primitive enforces any restriction on what kind of messages one should be able to encrypt.

Our contributions are:
- Introducing and formally defining access control encryption (ACE);
- A construction of ACE with complexity linear in the number of roles, based on classic number-theoretic assumptions (DDH, Paillier);
- A construction of ACE with complexity polylogarithmic in the number of roles, based on recent results on cryptographic obfuscation.
In 1978, Rivest, Adleman and Dertouzos asked for algebraic systems for which useful privacy homomorphisms exist. To date, the only acknowledged result is noise-based encryption combined with bootstrapping. Before that, there were several failed attempts.

We prove that fully homomorphic schemes are impossible for several algebraic structures. Then we develop a characterisation of all fully homomorphic schemes and use it to analyse three examples. Finally, we propose a conjecture stating that secure FHE schemes must either have a significant ciphertext expansion or use unusual algebraic structures.
In this document we present an overview of the background to and goals of the Password Hashing Competition (PHC) as well as the design of its winner, Argon2, and its security requirements and properties.
Event date: 1 January to 15 May 2016
Event date: 24 October to 28 October 2016
8 February 2016
Event date: 25 July 2016
ePrint Report Speed Optimizations in Bitcoin Key Recovery Attacks Nicolas Courtois, Guangyan Song, Ryan Castellucci
In this paper we study and give the first detailed benchmarks on existing implementations of the secp256k1 elliptic curve, used by at least hundreds of thousands of users in Bitcoin and other cryptocurrencies. Our implementation improves the state of the art by a factor of 2.5, with focus on the cases where side channel attacks are not a concern and a large quantity of RAM is available. As a result, we are able to scan the Bitcoin blockchain for weak keys faster than any previous implementation. We also give some examples of passwords which we have cracked, showing that brain wallets are not secure in practice even for quite complex passwords.
ePrint Report Breaking the Sub-Exponential Barrier in Obfustopia Sanjam Garg, Omkant Pandey, Akshayaram Srinivasan, Mark Zhandry
Indistinguishability obfuscation (iO) has emerged as a surprisingly powerful notion. Almost all known cryptographic primitives can be constructed from general-purpose iO and other minimalistic assumptions such as one-way functions. The primary challenge in this direction of research is to develop novel techniques for using iO, since iO by itself offers virtually no protection to secret information in the underlying programs. When dealing with complex situations, these techniques often have to consider an exponential number of hybrids (usually one per input) in the security proof. This results in a {\em sub-exponential} loss in the security reduction. Unfortunately, this scenario is becoming more and more common and appears to be a fundamental barrier to current techniques.

In this work, we explore the possibility of getting around this {\em sub-exponential loss barrier} in constructions based on iO as well as the weaker notion of {\em functional encryption} (FE). Towards this goal, we achieve the following results:

\begin{enumerate} \item We construct trapdoor one-way permutations from {\em polynomially-hard} iO (and standard one-way permutations). This improves upon the recent result of Bitansky, Paneth, and Wichs (TCC 2016), which requires iO of sub-exponential strength.

\item We present a different construction of trapdoor one-way permutations based on standard, polynomially-secure, public-key functional encryption. This qualitatively improves upon our first result since FE is a weaker primitive than iO --- it can be based on polynomially-hard assumptions on multilinear maps, whereas iO inherently seems to require assumptions of sub-exponential strength.

\item We present a construction of {\em universal samplers} also based only on polynomially-secure public-key FE. Universal samplers, introduced in the work of Hofheinz, Jager, Khurana, Sahai, Waters and Zhandry (ePrint 2014), are an appealing notion which allows a {\em single} trusted setup for {\em any} protocol. As an application of this result, we construct a {\em non-interactive multiparty key exchange} (NIKE) protocol for an unbounded number of users without a trusted setup. Prior to this work, such constructions were only known from indistinguishability obfuscation. \end{enumerate}

In obtaining our results, we build upon and significantly extend the techniques of Garg, Pandey, and Srinivasan (ePrint 2015), introduced in the context of reducing PPAD-hardness to polynomially-secure iO and FE.
ePrint Report Signature Schemes with Efficient Protocols and Dynamic Group Signatures from Lattice Assumptions Benoit Libert, San Ling, Fabrice Mouhartem, Khoa Nguyen, Huaxiong Wang
A recent line of works - initiated by Gordon, Katz and Vaikuntanathan (Asiacrypt 2010) - gave lattice-based constructions allowing users to authenticate while remaining hidden in a crowd. Despite five years of efforts, known constructions are still limited to static sets of users, which cannot be dynamically updated. This work provides new tools enabling the design of anonymous authentication systems whereby new users can join the system at any time.

Our first contribution is a signature scheme with efficient protocols, which allows users to obtain a signature on a committed value and subsequently prove knowledge of a signature on a committed message. This construction is well-suited to the design of anonymous credentials and group signatures. It indeed provides the first lattice-based group signature supporting dynamically growing populations of users.

As a critical component of our group signature, we provide a simple joining mechanism for introducing new group members using our signature scheme. This technique is combined with zero-knowledge arguments allowing registered group members to prove knowledge of a secret short vector of which the corresponding public syndrome was certified by the group manager. These tools provide similar advantages to those of structure-preserving signatures in the realm of bilinear groups. Namely, they allow group members to generate their own public key without having to prove knowledge of the underlying secret key. This results in a two-message joining protocol supporting concurrent enrollments, which can be used in other settings such as group encryption.

Our zero-knowledge arguments are presented in a unified framework where: (i) The involved statements reduce to arguing possession of a $\{-1,0,1\}$-vector $\mathbf{x}$ with a particular structure and satisfying $\mathbf{P}\cdot \mathbf{x} = \mathbf{v} \bmod q$ for some public matrix $\mathbf{P}$ and vector $\mathbf{v}$; (ii) The reduced statements can be handled using permuting techniques for Stern-like protocols. Our framework can serve as a blueprint for proving many other relations in lattice-based cryptography.
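The unified statement form (i) is easy to state concretely: a witness $\mathbf{x} \in \{-1,0,1\}^n$ satisfying $\mathbf{P}\cdot\mathbf{x} = \mathbf{v} \bmod q$. A small pure-Python checker for this relation (with toy values chosen for illustration; the zero-knowledge layer is of course not shown):

```python
def check_ternary_witness(P, x, v, q) -> bool:
    # the witness must be a {-1,0,1}-vector ...
    if any(xi not in (-1, 0, 1) for xi in x):
        return False
    # ... satisfying P * x = v (mod q), row by row
    return all(
        sum(Pij * xj for Pij, xj in zip(row, x)) % q == vi % q
        for row, vi in zip(P, v)
    )

P = [[1, 2, 3], [4, 5, 6]]
x = [1, 0, -1]
v = [(1 - 3) % 7, (4 - 6) % 7]   # the syndrome of x under P mod 7
```

A Stern-like protocol proves knowledge of such an $\mathbf{x}$ by permuting its entries, which preserves the ternary structure without revealing the vector itself.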
ePrint Report On the Complexity of Scrypt and Proofs of Space in the Parallel Random Oracle Model Joël Alwen, Binyi Chen, Chethan Kamath, Vladimir Kolmogorov, Krzysztof Pietrzak, Stefano Tessaro
We investigate lower bounds in terms of time and memory on the {\em parallel} complexity of an adversary $\cal A$ computing labels of randomly selected challenge nodes in directed acyclic graphs, where the $w$-bit label of a node is the hash $H(.)$ (modelled as a random oracle with $w$-bit output) of the labels of its parents. Specific instances of this general problem underlie both proofs-of-space protocols [Dziembowski et al. CRYPTO'15] as well as memory-hardness proofs including {\sf scrypt}, a widely deployed password hashing and key-derivation function which is e.g. used within Proofs-of-Work for digital currencies like Litecoin.

Current lower bound proofs for these problems only consider {\em restricted} algorithms $\cal A$ which perform only a single $H(.)$ query at a time and which only store individual labels (but not arbitrary functions thereof). This paper substantially improves this state of affairs. Our first set of results shows that even when allowing multiple parallel $H(.)$ queries, the "cumulative memory complexity" (CMC), as recently considered by Alwen and Serbinenko [STOC '15], of ${\sf scrypt}$ is at least $w \cdot (n/\log(n))^2$ when ${\sf scrypt}$ invokes $H(.)$ $n$ times. Our lower bound holds for adversaries which can store (1) arbitrary labels (i.e., random oracle outputs) and (2) certain natural functions of these labels, e.g., linear combinations. The exact power of such adversaries is captured via the combinatorial abstraction of parallel "entangled" pebbling games on graphs, which we introduce and study.

We introduce a combinatorial quantity $\gamma_n$ and under the conjecture that it is upper bounded by some constant, we show that the above lower bound on the CMC also holds for arbitrary algorithms $\cal A$, storing in particular arbitrary functions of their labels. We also show that under the same conjecture, the {\em time complexity} of computing the label of a random node in a graph on $n$ nodes (given an initial $kw$-bit state) reduces tightly to the time complexity for entangled pebbling on the same graph (given an initial $k$-node pebbling). Under the conjecture, this solves the main open problem from the work of Dziembowski et al.

In fact, we note that every non-trivial upper bound on $\gamma_n$ will lead to the first non-trivial bounds for general adversaries for this problem.
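The pebbling view used above can be simulated in a few lines: a strategy is a schedule of pebble placements and removals on a DAG, a node may be pebbled only when all its parents carry pebbles, and the cumulative count of pebbles over time is the combinatorial analogue of cumulative memory complexity. This toy simulator (sequential, with an illustrative path graph) is only a sketch of the model, not of the paper's entangled variant.

```python
def cumulative_pebbling_cost(parents, schedule):
    """schedule: list of (place, remove) set pairs; returns the sum over
    steps of the number of pebbles on the graph (cumulative complexity)."""
    pebbles, cost = set(), 0
    for place, remove in schedule:
        for node in sorted(place):
            # legality: all parents must currently be pebbled
            assert all(p in pebbles for p in parents(node)), "illegal move"
            pebbles.add(node)
        pebbles -= remove
        cost += len(pebbles)
    return cost

# a path graph 0 -> 1 -> 2 -> 3, pebbled left to right keeping one pebble
parents = lambda i: [i - 1] if i > 0 else []
schedule = [({0}, set()), ({1}, {0}), ({2}, {1}), ({3}, {2})]
cost = cumulative_pebbling_cost(parents, schedule)
```

Lower-bounding this quantity over all legal schedules for a given graph is exactly the kind of question the conjecture on $\gamma_n$ addresses.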
7 February 2016
The only known way to achieve Attribute-based Fully Homomorphic Encryption (ABFHE) is through indistinguishability obfuscation. The best we can do at the moment without obfuscation is Attribute-Based Leveled FHE, which allows circuits of an a priori bounded depth to be evaluated. This has been achieved from the Learning with Errors (LWE) assumption. However, without obfuscation, we know of no other way of constructing a scheme that can evaluate circuits of unbounded depth. In this paper, we present an ABFHE scheme that can evaluate circuits of unbounded depth but with one limitation: there is a bound N on the number of inputs that can be used in a circuit evaluation. The bound N could be thought of as a bound on the number of independent senders. Our scheme allows N to be exponentially large, so we can set the parameters such that there is no limitation on the number of inputs in practice. Our construction relies on multi-key FHE and leveled ABFHE, both of which have been realized from LWE, and therefore we obtain a concrete scheme that is secure under LWE.
ePrint Report Haraka - Efficient Short-Input Hashing for Post-Quantum Applications Stefan Kölbl, Martin M. Lauridsen, Florian Mendel, Christian Rechberger
Many efficient cryptographic hash function design strategies have been explored recently, not least because of the SHA-3 competition. Almost exclusively, these designs are geared towards good performance for long inputs. However, various use cases exist where performance on short inputs matters more. An example is HMAC; such functions also constitute the bottleneck of various hash-based signature schemes like SPHINCS, or XMSS, which is currently under standardization. Secure functions specifically designed for such applications are scarce. In this paper, we fill this gap by proposing two short-input hash functions (or rather simply compression functions) exploiting instructions on modern CPUs that support the AES. To our knowledge these proposals are the fastest on modern high-end CPUs, reaching throughputs below one cycle per hashed byte even for short inputs, while still having a very low latency of no more than 60 cycles. Under the hood, this result comes with several innovations.

First, we study whether the number of rounds for said functions can be reduced if collision resistance is not expected, but only second-preimage resistance. The conclusion is: only a little.

Second, since their inception, AES-like designs have allowed for supportive security arguments by means of counting and bounding the number of active S-boxes. However, this ignores powerful attack vectors using truncated differentials, of which rebound attacks are a popular example. With our design, we develop for the first time a general tool-based method to include arguments against attack vectors using truncated differentials.
Bent functions exhibit vital properties among combinatorial objects. Their links to combinatorics, cryptography and coding theory attract the scientific community to construct new classes of bent functions. While the complete characterisation of bent functions remains unexplored, several constructions over different algebraic structures are in progress. In this paper we propose a generalized Maiorana-McFarland construction of bent functions from Galois rings.
5 February 2016
Impossible differential and zero correlation linear cryptanalysis are two of the most important cryptanalytic vectors. To characterize the impossible differentials and zero correlation linear hulls which are independent of the choices of the non-linear components, Sun et al. proposed the structure deduced by a block cipher at CRYPTO 2015. Based on that, we concentrate in this paper on the security of the SPN structure and the Feistel structure with SP-type round functions. Firstly, we prove that for an SPN structure, if $\alpha_1\rightarrow\beta_1$ and $\alpha_2\rightarrow\beta_2$ are possible differentials, then $\alpha_1|\alpha_2\rightarrow\beta_1|\beta_2$ is also a possible differential, i.e., the OR "|" operation preserves differentials. Secondly, we show that for an SPN structure, there exists an $r$-round impossible differential if and only if there exists an $r$-round impossible differential $\alpha\not\rightarrow\beta$ where the Hamming weights of both $\alpha$ and $\beta$ are 1. Thus, for an SPN structure operating on $m$ bytes, the computational complexity of deciding whether there exists an impossible differential can be reduced from $O(2^{2m})$ to $O(m^2)$. Thirdly, we associate a primitive index with the linear layers of SPN structures. Based on matrix theory over integer rings, we prove that the length of impossible differentials of an SPN structure is upper bounded by the primitive index of the linear layers. As a result we show that, unless the details of the S-boxes are considered, there do not exist 5-round impossible differentials for the AES or ARIA. Lastly, based on the links between impossible differentials and zero correlation linear hulls, we project these results on impossible differentials to zero correlation linear hulls. It is interesting to note that some of our results also apply to Feistel structures with SP-type round functions.
Known methods for obfuscating a circuit need to represent the circuit as a branching program and then use a multilinear map to encrypt the branching program. Multilinear maps are, however, too inefficient for encrypting the branching program. We found a dynamic encoding method which effectively singles out different inputs in the context of the matrix randomization technique of Kilian and Gentry et al., so that multilinear maps are no longer needed. To make the method work, we need the branching programs to be regular. For such branching programs, we also give the most efficient constructions for NC1 circuits. This results in a much more efficient core obfuscator for NC1 circuits.
3 February 2016
Event date: 16 December to 18 December 2016