IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
27 March 2018
Guido Marco Bertoni, Lorenzo Grassi, Filippo Melzani
      In this paper we present a novel attack based on photonic emission analysis targeting software implementations of AES. We focus on the particular case in which the attacker can collect the photonic emission of a limited number of sense amplifiers (e.g. only one) of the SRAM storing the S-Box. The attack consists in making hypotheses about the secret key based on the knowledge of the partial output of the SubBytes operation. We also consider the possibility of attacking a masked implementation of AES using photonic emission analysis. In the case of masking, the attacker needs 2 leakages of the same encryption to overcome the randomization of the masks. For our analysis, we assume the same physical setup described in previous works. Reported results are based on simulations with some hypotheses on the probability of photonic emission of a single transistor.
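As a toy illustration of the key-filtering step described above (our sketch, not the authors' code: noiseless, single leaking bit, and a random permutation standing in for the AES S-Box):

import random

# Stand-in 8-bit S-Box; the real attack uses the AES S-Box and must
# additionally handle noisy photonic measurements and masking.
random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)

BIT = 0                       # index of the leaking output bit (one sense amplifier)
KEY = 0x3A                    # secret key byte, unknown to the attacker

def leak(p):                  # what the attacker observes per known plaintext byte p
    return (SBOX[p ^ KEY] >> BIT) & 1

candidates = set(range(256))
for _ in range(40):           # 40 known plaintexts
    p = random.randrange(256)
    b = leak(p)
    # keep only key hypotheses consistent with the observed SubBytes output bit
    candidates = {k for k in candidates if (SBOX[p ^ k] >> BIT) & 1 == b}

print(len(candidates), KEY in candidates)   # converges towards the single correct key

Each observation roughly halves the candidate set, so on the order of eight noiseless observations of a single output bit already pin down one key byte.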
          
  Vireshwar Kumar, He Li, Noah Luther, Pranav Asokan, Jung-Min (Jerry) Park, Kaigui Bian, Martin B. H. Weiss, Taieb Znati
      In an anonymous subscription system (ASS), a subscribed user (SU) is able to access the services of a service provider without having to reveal its true identity. For an SU computing platform that is compliant with the Trusted Platform Module (TPM) standard, direct anonymous attestation (DAA) is an appropriate cryptographic protocol for realizing ASS, since DAA enables privacy-preserving authentication of the SU platform. This approach takes advantage of a cryptographic key that is securely embedded in the platform's hardware. Although the computing industry and academia have made significant strides in developing secure and sound DAA schemes, these schemes share a common drawback that may act as a major obstacle to their widespread deployment. In all of the existing schemes, the SU suffers from significant computational and communication costs that increase proportionally to the size of the revocation list. This drawback renders the existing schemes impractical when the size of the revocation list grows beyond a relatively modest size. In this paper, we propose a novel scheme called Lightweight Anonymous Subscription with Efficient Revocation (LASER) that addresses this very problem. In LASER, the computational and communication costs of the SU's signature are multiple orders of magnitude lower than the prior art. LASER achieves this significant performance improvement by shifting most of the computational and communication costs from the DAA's online procedure (i.e., signature generation) to its offline procedure (i.e., acquisition of keys/credentials). We have conducted a thorough analysis of LASER's performance-related features and compared the findings to the prior art. We have also conducted a comprehensive evaluation of LASER by implementing it on a laptop platform with an on-board TPM. To the best of our knowledge, the results presented in this paper represent the first implementation and analysis of a scheme using an actual TPM cryptoprocessor that is compliant with the most recent TPM specification version 2.0. We have thoroughly analyzed the security of LASER in the random oracle model.
          
  Phillipp Schoppmann, Adrià Gascón, Borja Balle
      Privacy-preserving data analysis in the context of federated databases distributed across multiple parties has the potential to produce richer and more accurate models than what each party can learn with their own data. Secure Multi-Party Computation (MPC) offers a robust cryptographic approach to this problem, and in fact several protocols have been proposed for various learning tasks on parametric models. In this paper we focus on $k$-NN, shifting the attention towards non-parametric models. We tackle several challenges arising in privacy-preserving $k$-NN classification on federated databases, and implement a concrete protocol for document classification. Our solution is faster than the state-of-the-art custom MPC protocol by at least an order of magnitude.
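For intuition, this is the kind of plaintext functionality such a protocol computes under MPC (our illustrative sketch, with cosine similarity over term-count vectors as an assumed metric; the cryptographic protocol itself is not reproduced here):

from collections import Counter
import math

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0) for w in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm or 1.0)

def knn_classify(query, labelled_docs, k=3):
    # labelled_docs: (term-count Counter, label) pairs, e.g. another party's database
    scored = sorted(labelled_docs, key=lambda dl: cosine(query, dl[0]), reverse=True)
    top = [label for _, label in scored[:k]]
    return Counter(top).most_common(1)[0][0]   # majority vote among the k nearest

db = [(Counter("private key exchange".split()), "crypto"),
      (Counter("stochastic gradient descent".split()), "ml"),
      (Counter("zero knowledge proof".split()), "crypto")]
print(knn_classify(Counter("proof of knowledge".split()), db, k=3))  # -> crypto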
          
  Ke Gu, Na Wu
      Currently several traceable (or linkable) identity-based ring signature schemes have been proposed. However, most of them are constructed in the random oracle model. In this paper, we present a fully traceable ring signature (TRS) scheme without random oracles, which has constant-size signatures and a security reduction to the computational Diffie-Hellman (CDH) assumption. Also, we give a formal security model for traceable ring signatures and prove that the proposed scheme has the properties of traceability and anonymity.
          
  25 March 2018
Atanu Basu, Indranil Sengupta
      This paper presents a secure cloud storage scheme based on a hybrid cryptosystem, which consists of Elliptic Curve Cryptography (ECC), the Advanced Encryption Standard (AES), and a one-way hash function. Here, the data owner exports a large volume of encrypted data to a cloud storage provider. The exported encrypted data is over-encrypted by the cloud storage provider, and the data is sent to the requesting user. An existing hybrid-cryptosystem-based dynamic key management scheme with hierarchical access control has been incorporated into our scheme. The key management scheme groups users into various security classes and helps to derive the secret keys of the lower-order security classes efficiently and directly. The incorporated key management scheme incurs low computational, communication, and storage overheads for key generation and derivation. The security analysis and the simulation results from the AVISPA tool (a formal security verification tool) show that the proposed scheme is protected against adversaries. The scheme is useful in `owner-write-users-read' application areas, and end users may securely use resource-constrained wireless mobile devices with it.
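As a point of reference for the ECC + AES + hash building blocks, here is a minimal ECIES-style hybrid encryption sketch (our illustration, assuming the Python 'cryptography' package; the paper's exact construction, including over-encryption and hierarchical key derivation, is not reproduced):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

owner_sk = ec.generate_private_key(ec.SECP256R1())      # data owner's key pair
owner_pk = owner_sk.public_key()

def encrypt_for(pk, plaintext):
    eph = ec.generate_private_key(ec.SECP256R1())        # ephemeral ECDH key
    shared = eph.exchange(ec.ECDH(), pk)
    key = HKDF(hashes.SHA256(), 32, None, b"hybrid-demo").derive(shared)  # hash-based KDF
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext, None)     # AES-GCM bulk encryption
    return eph.public_key(), nonce, ct

def decrypt(sk, eph_pk, nonce, ct):
    shared = sk.exchange(ec.ECDH(), eph_pk)
    key = HKDF(hashes.SHA256(), 32, None, b"hybrid-demo").derive(shared)
    return AESGCM(key).decrypt(nonce, ct, None)

eph_pk, nonce, ct = encrypt_for(owner_pk, b"outsourced record")
assert decrypt(owner_sk, eph_pk, nonce, ct) == b"outsourced record"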
          
  Björn Haase, Benoît Labrique
      Increasingly, connectivity is being integrated into products and devices that previously operated in a stand-alone setting. This observation holds for many consumer applications in the so-called "Internet of Things" (IoT) as well as for corresponding industry applications (IIoT), such as industrial process sensors. Often the only practicable means of authenticating human users is a weak password. The security of password-based authentication schemes frequently forms the weakest point of the security infrastructure.
In this paper we first explain why a tailored protocol designed for the IIoT use case is considered necessary. The differences between IIoT and conventional Internet use cases result in largely modified threats and require special procedures to allow both convenient and secure use in the highly constrained industrial setting. Specifically, the use of a verifier-based password-authenticated key-exchange (V-PAKE) protocol as a hedge against public-key-infrastructure (PKI) failures is considered important. Availability concerns in the case of failures of (part of) the communication infrastructure make local storage of access credentials mandatory. The larger threat of physical attacks makes it important to use memory-hard password hashing.
This paper presents a corresponding tailored protocol, AuCPace, together with a security proof within the Universal Composability (UC) framework considering fully adaptive adversaries. We also introduce a new security notion of partially augmented PAKE that provides specific performance advantages and thus makes the protocol suitable for a larger set of IIoT applications.
We also present an actual instantiation of our protocol, AuCPace25519, and report performance results on ARM Cortex-M0 and Cortex-M4 microcontrollers. Our implementation achieves new speed records for PAKE and X25519 Diffie-Hellman on the ARM Cortex-M4 architecture.
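To illustrate the verifier-based idea with memory-hard password hashing (a toy sketch under our own assumptions, not the AuCPace protocol: toy prime-order group, scrypt as the memory-hard hash, and a direct verifier check standing in for the PAKE run):

import hashlib, os

PRIME = 2**255 - 19          # toy prime modulus, for illustration only
G = 5

def derive_w(password, salt):
    # scrypt is memory-hard, raising the cost of dictionary attacks on a stolen verifier
    return int.from_bytes(
        hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32), "big")

def register(password):
    salt = os.urandom(16)
    W = pow(G, derive_w(password, salt), PRIME)   # verifier stored on the device
    return salt, W                                # the password itself is never stored

def prove(password, salt, W):
    return pow(G, derive_w(password, salt), PRIME) == W   # stand-in for the PAKE run

salt, W = register(b"correct horse")
assert prove(b"correct horse", salt, W)
assert not prove(b"wrong guess", salt, W)

The point of the verifier-based setting is that compromise of the locally stored (salt, W) pair still forces an attacker into a memory-hard offline dictionary attack rather than yielding the password directly.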
          
  24 March 2018
Derby, U.K., 14 September - 15 September 2018
      Event date: 14 September to 15 September 2018
Submission deadline: 13 April 2018
          
23 March 2018
Iraklis Symeonidis, Gergely Biczók, Fatemeh Shirazi, Cristina Pérez-Solà, Jessica Schroers, Bart Preneel
      Third-party applications on Facebook can collect personal data of the users who install them, but also of their friends. This raises serious privacy issues, as these friends are notified neither by the applications nor by Facebook, and they have not given consent. This paper presents a detailed multi-faceted study of the collateral information collection of the applications on Facebook. To investigate the views of the users, we designed a questionnaire and collected the responses of 114 participants. The results show that participants are concerned about the collateral information collection and in particular about the lack of notification and of mechanisms to control the data collection. Based on real data, we compute the likelihood of collateral information collection affecting users: we show that the probability is significant and greater than 80% for popular applications such as TripAdvisor. We also demonstrate that a substantial amount of profile data can be collected by applications, which enables application providers to profile users. To investigate whether collateral information collection is an issue for users' privacy, we analysed the legal framework in light of the new General Data Protection Regulation. We provide a detailed analysis of the entities involved and investigate which entity is accountable for the collateral information collection. To provide countermeasures, we propose a privacy dashboard extension that implements privacy scoring computations to enhance transparency towards collateral information collection. Furthermore, we discuss alternative solutions highlighting other countermeasures such as notification and access control mechanisms, cryptographic solutions and application auditing. To the best of our knowledge, this is the first work that provides a detailed multi-faceted study of this problem and that analyses the threat of user profiling by application providers.
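A back-of-the-envelope model shows why such likelihoods get large quickly (hypothetical numbers, not the paper's dataset): if a fraction q of users install an app that collects friends' data, a user with f friends is exposed unless none of them installed it.

def p_exposed(q, f):
    # P(at least one of f friends installed the app), assuming independence
    return 1 - (1 - q) ** f

for q, f in [(0.01, 100), (0.01, 200), (0.005, 400)]:
    print(f"q={q:.3f}, friends={f}: P(exposed) = {p_exposed(q, f):.2%}")
    # e.g. q=0.01 with 200 friends already gives roughly 87% exposure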
          
  Qichun Wang
      It is known that correlation-immune (CI) Boolean functions used in the framework of side channel attacks need to have low Hamming weights. In 2013, Bhasin et al. studied the minimum Hamming weight of $d$-CI Boolean functions, and presented an open problem: the minimal weight of a $d$-CI function in $n$ variables might not increase with $n$. Very recently, Carlet and Chen proposed some constructions of low-weight CI functions, and gave a conjecture on the minimum Hamming weight of $3$-CI functions in $n$ variables.
In this paper, we determine the values of the minimum Hamming weights of $d$-CI Boolean functions in $n$ variables for infinitely many $n$'s and give a negative answer to the open problem proposed by Bhasin et al. We then present a method to construct minimum-weight 2-CI functions through Hadamard matrices, which can provide all minimum-weight 2-CI functions in $4k-1$ variables. Furthermore, we prove that the Carlet-Chen conjecture is equivalent to the famous Hadamard conjecture. Most notably, we propose an efficient method to construct low-weight $n$-variable CI functions through $d$-linearly independent sets, which can provide numerous minimum-weight $d$-CI functions. In particular, we obtain some new values of the minimum Hamming weights of $d$-CI functions in $n$ variables for $n\leq 13$. We conjecture that the functions constructed by us have minimum Hamming weight if the sets are absolute maximum $d$-linearly independent sets. If our conjecture holds, then all the values for $n\leq 13$ and most values for general $n$ are determined.
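For readers new to the notion, correlation immunity can be checked directly from the truth table via the Xiao-Massey/Fourier characterization: $f$ is $d$-CI iff $\sum_x f(x)(-1)^{a\cdot x}=0$ for all masks $a$ with $1\leq wt(a)\leq d$. A small sketch (our illustration, not the paper's constructions):

def is_d_ci(tt, n, d):
    # tt: truth table of f as a list of 0/1 values of length 2^n
    for a in range(1, 2 ** n):
        if bin(a).count("1") <= d:
            acc = sum(tt[x] if bin(a & x).count("1") % 2 == 0 else -tt[x]
                      for x in range(2 ** n))
            if acc != 0:
                return False
    return True

# Example: the parity function x1 xor x2 xor x3 is 2-CI and has Hamming weight 4.
n = 3
parity = [bin(x).count("1") % 2 for x in range(2 ** n)]
print(is_d_ci(parity, n, 2), sum(parity))   # True 4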
Gizem S. Çetin, Berk Sunar
      In this paper we propose a rank-based algorithm for sorting encrypted data using monomials. Greedy Sort is a sorting technique that minimizes the depth of the homomorphic evaluations. However, it is a costly algorithm due to excessive ciphertext multiplications, and its implementation is cumbersome. Another method, Direct Sort, has a slightly deeper circuit than Greedy Sort; nevertheless, it is simpler to implement and scales better with the size of the input array. Our proposed method minimizes both the circuit depth and the number of ciphertext multiplications. In addition to its performance, its simple design makes it more favorable than the alternative methods, which are hard to parallelize, e.g. not suitable for fast GPU implementations. Furthermore, we improve the performance of homomorphic sorting by adopting SIMD operations alongside message-slot rotation techniques. This method allows us to pack $N$ integers into a single ciphertext and compute $N$ comparisons at once, thus reducing $\mathcal{O}(N^2)$ comparisons to $\mathcal{O}(N)$.
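The plaintext analogue of rank-based sorting looks as follows (our sketch; the homomorphic version evaluates the same comparisons on ciphertexts, batching $N$ of them per SIMD ciphertext):

def rank_sort(xs):
    n = len(xs)
    out = [None] * n
    for i in range(n):
        # rank = #{j : xs[j] < xs[i]}, with ties broken by index so ranks are distinct
        r = sum(1 for j in range(n)
                if xs[j] < xs[i] or (xs[j] == xs[i] and j < i))
        out[r] = xs[i]
    return out

print(rank_sort([5, 1, 4, 1, 3]))   # [1, 1, 3, 4, 5]

All $N(N-1)/2$ comparisons are data-independent, which is what makes the circuit shallow and amenable to SIMD batching.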
          
  Jason LeGrow, David Jao, Reza Azarderakhsh
      We propose a security model for authenticated key establishment in the quantum setting. Our model is the first for authenticated key establishment that allows for quantum superpositions of queries. The model builds on the classical Canetti-Krawczyk model but allows quantum interactions between the adversary and quantum oracles that emulate classical parties. We demonstrate that this new security definition is satisfiable by giving a generic construction from simpler cryptographic primitives and a specific protocol which is secure in the quantum random oracle model, under the supersingular isogeny decisional Diffie-Hellman assumption (SIDH).
          
  22 March 2018
Saikrishna Badrinarayanan, Dakshita Khurana, Amit Sahai, Brent Waters
      The notion of Functional Encryption (FE) has recently emerged as a strong primitive with several exciting applications. In this work, we initiate the study of the following question: Can existing public-key encryption schemes be ``upgraded'' to Functional Encryption schemes without changing their public keys or the encryption algorithm? We call a public-key encryption scheme with this property FE-compatible.
Indeed, assuming ideal obfuscation, it is easy to see that every CCA-secure public-key encryption scheme is FE-compatible. Despite the recent success in using indistinguishability obfuscation to replace ideal obfuscation for many applications, we show that this phenomenon most likely will not apply here. We show that assuming fully homomorphic encryption and the learning with errors (LWE) assumption, there exists a CCA-secure encryption scheme that is provably not FE-compatible. We also show that a large class of natural CCA-secure encryption schemes proven secure in the random oracle model are not FE-compatible in the random oracle model.
Nevertheless, we identify a key structure that, if present, is sufficient to provide FE-compatibility. Specifically, we show that assuming sub-exponentially secure iO and sub-exponentially secure one way functions, there exists a class of public key encryption schemes which we call Special-CCA secure encryption schemes that are in fact, FE-compatible.
In particular, each of the following popular CCA secure encryption schemes (some of which existed even before the notion of FE was introduced) fall into the class of Special-CCA secure encryption schemes and are thus FE-compatible:
1) The scheme of Canetti, Halevi and Katz (Eurocrypt 2004) when instantiated with the IBE scheme of Boneh-Boyen (Eurocrypt 2004). 2) The scheme of Canetti, Halevi and Katz (Eurocrypt 2004) when instantiated with any Hierarchical IBE scheme. 3) The scheme of Peikert and Waters (STOC 2008) when instantiated with any Lossy Trapdoor Function.
Jens Groth, Markulf Kohlweiss, Mary Maller, Sarah Meiklejohn, Ian Miers
      By design, existing (pre-processing) zk-SNARKs embed a secret trapdoor in a relation-dependent common reference string (CRS). The trapdoor is exploited by a (hypothetical) simulator to prove the scheme is zero knowledge, and the secret-dependent structure facilitates a linear-size CRS and linear-time prover computation. If known by a real party, however, the trapdoor can be used to subvert the security of the system. The structured CRS that makes zk-SNARKs practical also makes deploying zk-SNARKs problematic, as it is difficult to argue why the trapdoor would not be available to the entity responsible for generating the CRS. Moreover, for pre-processing zk-SNARKs a new trusted CRS needs to be computed every time the relation is changed.
In this paper, we address both issues by proposing a model where a number of users can update a universal CRS. The updatable CRS model guarantees security if at least one of the users updating the CRS is honest. We provide both a negative result, by showing that zk-SNARKs with private secret-dependent polynomials in the CRS cannot be updatable, and a positive result by constructing a zk-SNARK based on a CRS consisting only of secret-dependent monomials. The CRS is of quadratic size, is updatable, and is universal in the sense that it can be specialized into one or more relation-dependent CRSs of linear size with linear-time prover computation.
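A toy picture of updating a monomial CRS (our sketch, with a multiplicative group mod a prime standing in for a pairing group; real schemes additionally verify each update, e.g. with pairings): the CRS is $(g, g^x, \ldots, g^{x^d})$, and an updater with fresh secret $a$ rescales it to $(g, g^{ax}, \ldots, g^{(ax)^d})$ without ever learning $x$.

import random

P = 2**255 - 19                      # toy prime modulus, illustrative only
g = 5

def setup(d):
    x = random.randrange(2, P - 1)   # trapdoor, discarded after setup
    return [pow(g, pow(x, i, P - 1), P) for i in range(d + 1)]

def update(crs):
    a = random.randrange(2, P - 1)   # updater's fresh secret, also discarded
    # raising the i-th element to a^i turns g^{x^i} into g^{(ax)^i}
    return [pow(crs[i], pow(a, i, P - 1), P) for i in range(len(crs))]

crs = update(update(setup(d=4)))     # trapdoor-free if ANY updater was honest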
Zvika Brakerski, Vadim Lyubashevsky, Vinod Vaikuntanathan, Daniel Wichs
      We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2-1/poly(n)$. Thus we can only show that ``very hard'' LPN is harder than some ``very mildly hard'' worst case problem.
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP), which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx {\log^2 n}/{n}$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/poly(n)$.
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $\text{Search-BPP}^{SZK}$ (i.e. reducible to a problem that has a statistical zero knowledge protocol) implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\text{BPP}^\text{SZK})$.
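For concreteness, this is the LPN distribution the reduction hands to the distinguisher (our sketch, parameters illustrative): samples $(a, \langle a,s\rangle + e)$ over GF(2) with Bernoulli noise at rate $1/2 - 1/poly(n)$.

import random

def lpn_samples(n, m, eps):
    s = [random.getrandbits(1) for _ in range(n)]          # secret
    out = []
    for _ in range(m):
        a = [random.getrandbits(1) for _ in range(n)]      # uniform vector
        e = 1 if random.random() < 0.5 - eps else 0        # Bernoulli(1/2 - eps) noise
        out.append((a, (sum(ai & si for ai, si in zip(a, s)) + e) & 1))
    return s, out

n = 64
s, samples = lpn_samples(n, m=1000, eps=1.0 / n**2)        # noise rate 1/2 - 1/poly(n)

At this noise rate each sample carries only a $1/poly(n)$ bias, which is why distinguishing is so demanding and why such LPN is plausibly very hard.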
Rémi Géraud, David Naccache
      In this work we explore a combinatorial optimization problem stemming from the Naccache-Stern cryptosystem. We show that solving this problem results in bandwidth improvements, and we suggest a polynomial-time approximation algorithm to find an optimal solution. Our work suggests that using optimal radix encoding results in an asymptotic 50% increase in bandwidth.
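The prime-product message encoding at the heart of the Naccache-Stern cryptosystem, in a hedged sketch (our illustration, not the paper's optimized encoding): digits become exponents of small primes, and decoding is trial division. With bits ($d_i\in\{0,1\}$), $k$ primes carry $k$ bits; with base-$B$ digits they carry $k\log_2 B$ bits at the price of a larger product, and choosing the radix/digit distribution optimally is the combinatorial problem the paper studies.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def encode(digits):                      # digits d_i become prime exponents
    x = 1
    for p, d in zip(PRIMES, digits):
        x *= p ** d
    return x

def decode(x, k):                        # recover digits by trial division
    digits = []
    for p in PRIMES[:k]:
        d = 0
        while x % p == 0:
            x //= p
            d += 1
        digits.append(d)
    return digits

msg = [2, 0, 1, 3]                       # base-4 digits rather than bits
assert decode(encode(msg), 4) == msg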
Sebastian Meiser
      This technical report discusses three subtleties related to the widely used notion of differential privacy (DP). First, we discuss how the choice of a distinguisher influences the privacy notion and why we should always have a distinguisher if we consider approximate DP. Second, we draw a line between the very intuitive probabilistic differential privacy (with probability $1-\delta$ we have $\varepsilon$-DP) and the commonly used approximate differential privacy ($(\varepsilon,\delta)$-DP). Finally, we show that, and why, probabilistic differential privacy (and similar notions) is not complete under post-processing, which has significant implications for notions used in the literature.
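For reference, the two notions contrasted here, for a mechanism $M$ and neighbouring databases $D, D'$ (standard formulations, restated in our notation):

\[
\text{$(\varepsilon,\delta)$-DP:}\quad
\Pr[M(D)\in S]\;\le\; e^{\varepsilon}\,\Pr[M(D')\in S]+\delta
\quad\text{for all sets of outputs } S,
\]
\[
\text{probabilistic DP:}\quad
\Pr_{o\leftarrow M(D)}\Bigl[\,\Bigl|\ln\tfrac{\Pr[M(D)=o]}{\Pr[M(D')=o]}\Bigr|\le\varepsilon\,\Bigr]\;\ge\;1-\delta .
\]

Probabilistic DP demands that the privacy loss exceeds $\varepsilon$ only on an event of probability at most $\delta$, which is strictly stronger than letting $\delta$ absorb additive slack in every event, as $(\varepsilon,\delta)$-DP does.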
          
  Mark Zhandry
      The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions.  Unfortunately, none of the known proof techniques allow the reduction to record information about the adversary's queries, a crucial feature of many classical ROM proofs, including all proofs of indifferentiability for hash function domain extension.
In this work, we give a new QROM proof technique that overcomes this ``recording barrier''. Our central observation is that when viewing the adversary's query and the oracle itself in the Fourier domain, an oracle query switches from writing to the adversary's space to writing to the oracle itself. This allows a reduction to simulate the oracle by simply recording information about the adversary's query in the Fourier domain.
We then use this new technique to show the indifferentiability of the Merkle-Damgård domain extender for hash functions. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.
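The Fourier-domain observation can already be seen for a single query to a binary oracle (our notation, not the paper's). The query unitary and its conjugation by a Hadamard on the response register satisfy

\[
U_f : |x\rangle|y\rangle \mapsto |x\rangle|y \oplus f(x)\rangle,
\qquad
(I \otimes H)\,U_f\,(I \otimes H) : |x\rangle|\hat{y}\rangle \mapsto (-1)^{\hat{y}\, f(x)}\,|x\rangle|\hat{y}\rangle ,
\]

so in the Hadamard (Fourier) basis the query becomes a phase attached to the function value. Purifying the oracle as a superposition over truth tables $D$ and applying the same basis change to the register holding $D(x)$, that phase $(-1)^{\hat{y}\,D(x)}$ becomes the update $D(x) \mapsto D(x) \oplus \hat{y}$: the query now writes into the oracle's own register, which the simulator can record.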
Rosario Gennaro, Michele Minelli, Anca Nitulescu, Michele Orrù
      Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short (i.e., independent of the size of the witness) and efficiently verifiable proofs. They elegantly resolve the juxtaposition of individual privacy and public trust by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs are widely deployed all over the planet and are used to keep alive a system worth billions of euros, namely the cryptocurrency Zcash. However, all current SNARK implementations rely on so-called pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades.
In this work, we introduce a new zk-SNARK that can be instantiated from lattice-based assumptions, and which is thus believed to be post-quantum secure. We provide a generalization in the spirit of Gennaro et al. (Eurocrypt 2013) to the SSP-based SNARK of Danezis et al. (Asiacrypt 2014) that relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters, showing that our construction is practically instantiable.
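To ground the term "LWE encoding", here is a basic Regev-style bit encoding (our sketch with illustrative parameters; the paper's encodings differ in detail):

import random

n, q = 32, 3329                      # illustrative LWE dimension and modulus
s = [random.randrange(q) for _ in range(n)]   # secret vector

def encode(bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)        # small noise
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b                      # the encoding (a, <a,s> + e + bit*q/2)

def decode(a, b):
    v = (b - sum(x * y for x, y in zip(a, s))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0   # round to the nearer of 0, q/2

assert decode(*encode(1)) == 1 and decode(*encode(0)) == 0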
Rachid El Bansarkhani, Rafael Misoczki
      Hash-based signature schemes are the most promising cryptosystem candidates in a post-quantum world, but offer little structure to enable more sophisticated constructions such as group signatures. 
Group signatures allow a group member to anonymously sign messages on behalf of the whole group (as needed for anonymous remote attestation). 
In this work, we introduce G-Merkle, the first (stateful) hash-based group signature scheme. 
Our proposal relies on minimal assumptions, namely the existence of one-way functions, and offers performance equivalent to the Merkle single-signer setting. The public key size (as small as in the single-signer setting) outperforms that of all other post-quantum group signatures. Moreover, for $N$ group members issuing at most $B$ signatures each, a hash-based group signature is just as large as a Merkle signature with a tree composed of $N\cdot B$ leaf nodes. This directly translates into fast signing and verification engines.
Unlike its lattice-based counterparts, our construction does not require any random oracle. Note that, due to the randomized structure of our Merkle tree, the signature authentication paths are pre-stored or deduced from a public tree, which seems to be a requirement that is hard to circumvent. To conclude, we present implementation results to demonstrate the practicality of our proposal.
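The plain Merkle authentication-path mechanics underlying such a tree, in a short sketch (our illustration; the shuffled multi-user structure that provides anonymity in G-Merkle is omitted): the $N\cdot B$ leaves would be the hashed one-time public keys of $N$ members with $B$ signatures each.

import hashlib

H = lambda b: hashlib.sha256(b).digest()

def build_tree(leaves):
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels                       # levels[-1][0] is the root / group public key

def auth_path(levels, idx):
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[idx ^ 1])       # sibling node at each level
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    node = leaf
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

leaves = [H(bytes([i])) for i in range(8)]      # e.g. N*B = 8 one-time keys
levels = build_tree(leaves)
assert verify(levels[-1][0], leaves[5], 5, auth_path(levels, 5))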
          
  Prabhanjan Ananth, Xiong Fan
      Attribute based encryption (ABE) is an advanced encryption system with a built-in mechanism to generate keys associated with functions which in turn provide restricted access to encrypted data. Most of the known candidates of attribute based encryption model the functions as circuits. This results in significant efficiency bottlenecks, especially in the setting when the function, associated with the ABE key, admits a RAM program whose runtime is sublinear in the length of the attribute. In this work we study the notion of attribute based encryption for random access machines (RAMs), introduced in the work of Goldwasser, Kalai, Popa, Vaikuntanathan and Zeldovich (Crypto 2013). We present a construction of attribute based encryption for RAMs satisfying sublinear decryption complexity assuming learning with errors. This improves upon the work of Goldwasser et al., who achieved this result based on SNARKs and extractable witness encryption.
En route to constructing this primitive, we introduce the notion of controlled homomorphic recoding (CHR) schemes. We present a generic transformation from controlled homomorphic recoding schemes to attribute-based encryption for RAMs and then we show how to instantiate controlled homomorphic recoding schemes based on learning with errors.
