International Association for Cryptologic Research

# IACR News Central

Get an update on changes of the IACR web-page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2013-05-28
05:22 [Pub][ePrint]

Computational privacy is a property of a cryptographic system that ensures the privacy of data (and/or operations) while it is processed at an untrusted server. Cryptography has been an indispensable tool for computer security, but its readiness for the new generational shift of computing platform, i.e. Cloud Computing, is still questionable. Theoretical constructions such as Fully Homomorphic Encryption, Functional Encryption, Server-aided Multiparty Computation, Verifiable Computation, and Instance Hiding are a few of the directions being pursued. These cryptographic techniques solve Cloud privacy problems at different levels, but most of them do not fit well into the overall scheme of things.

We state the privacy requirements for Cloud offerings under various delivery methods. We discuss the challenges facing the cryptographic techniques currently pursued by researchers and show that they do not provide blanket coverage of these privacy requirements. We urge the need to find generalizations and connections among these isolated techniques, as this might give more insight into the underpinnings of Computational Privacy and lead to better solutions.

05:22 [Pub][ePrint]

The incidence matrix between a set of monomials and a set of vectors in $\F_2$ is of great importance in the study of coding theory, cryptography, linear algebra, and combinatorics. The rank of these matrices is very useful when computing the algebraic immunity ($\ai$) of Boolean functions in the cryptography literature~\cite{MPC04,DGM04}.

Moreover, these matrices are very sparse and well structured, so finding their rank is also interesting in its own right.

In this paper, we review the existing algorithms, add techniques to speed them up, and propose new efficient algorithms for computing the rank of an incidence matrix and for solving a system of equations whose coefficient matrix is an incidence matrix. By permuting the rows and columns of the incidence matrix with respect to a suitable ordering, it can be converted into a lower block-triangular matrix, which allows the computation to run in quadratic time and linear space. The same technique is used to check for and compute low-degree annihilators of an $n$-variable Boolean function faster than the usual algorithms, and it is also applied to the Dalai-Maitra algorithm of~\cite{DM06} for faster computation. On the basis of experiments, we conjecture that the $\ai$ of the $n$-variable inverse S-box is $\lfloor\sqrt{n}\rfloor + \lceil\frac{n}{\lfloor\sqrt{n}\rfloor}\rceil-2$.

We also raise doubts about the fastest existing algorithm in~\cite{ACGKMR06} for finding the $\ai$ and lowest-degree annihilators of a Boolean function.
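The conjectured algebraic-immunity formula stated in the abstract is easy to evaluate. A minimal sketch (the formula comes from the abstract; the code itself is only illustrative):

```python
import math

def conjectured_ai(n):
    """Conjectured AI of the n-variable inverse S-box (from the abstract):
    floor(sqrt(n)) + ceil(n / floor(sqrt(n))) - 2."""
    r = math.isqrt(n)                     # floor(sqrt(n))
    return r + math.ceil(n / r) - 2

# For example, for the 8-variable inverse S-box (as used in AES):
print(conjectured_ai(8))  # → 4
```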

05:22 [Pub][ePrint]

The goal of a profiling attack is to challenge the security of a cryptographic device in the worst-case scenario. Though template attacks are reputed to be the strongest power analysis attacks, their effectiveness is strongly dependent on the validity of the Gaussian assumption. This has recently led to the appearance of nonparametric approaches, often based on machine learning strategies. Though these approaches outperform template attacks, they tend to neglect the time-series nature of power traces. In this paper, we propose an original multi-class profiling attack that takes the temporal dependence of power traces into account. The experimental study shows that the time-series analysis approach is competitive with, and often better than, static classification alternatives.
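For reference, the Gaussian-template baseline that the abstract contrasts against can be sketched in a few lines of NumPy. Everything below is synthetic: the two classes, the points of interest, and the noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic profiling phase: 2 classes (e.g. two key-dependent intermediate
# values), 200 traces each, 5 points of interest per trace.
means = np.array([[0., 1., 0., -1., 0.],
                  [1., 0., -1., 0., 1.]])
profiling = {c: means[c] + 0.5 * rng.standard_normal((200, 5)) for c in (0, 1)}

# Build one Gaussian template (mean vector, covariance matrix) per class.
templates = {}
for c, traces in profiling.items():
    mu = traces.mean(axis=0)
    cov = np.cov(traces, rowvar=False)
    templates[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])

def classify(trace):
    # Pick the class maximising the Gaussian log-likelihood of the trace.
    def loglik(c):
        mu, cov_inv, logdet = templates[c]
        d = trace - mu
        return -0.5 * (d @ cov_inv @ d) - 0.5 * logdet
    return max(templates, key=loglik)

# Attack phase: classify a fresh trace drawn from class 1.
print(classify(means[1] + 0.5 * rng.standard_normal(5)))
```

Note that each trace is treated as an unordered feature vector: nothing in the model knows that sample 2 follows sample 1, which is exactly the time-series information the paper's approach exploits.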

05:22 [Pub][ePrint]

The security of most Internet applications relies on underlying public key infrastructures (PKIs) and thus on an ecosystem of certification authorities (CAs). The pool of PKIs responsible for the issuance and maintenance of SSL certificates, called the Web PKI, has grown extremely large and complex. Herein, each CA is a single point of failure for security, leading to an attack surface whose size is hardly assessable.

This paper addresses the question of whether and how the attack surface can be reduced in order to lower the risk of relying on a malicious certificate. In particular, we consider the individualization of the set of trusted CAs. We present a tool called Rootopia, which allows one to assess the part of the Web PKI relevant to a given user.

Our analysis of the browser histories of 22 Internet users reveals that the major part of the PKI is completely irrelevant to a single user. The attack surface can be reduced by more than 90%, which shows the potential of individualizing the set of trusted CAs. Furthermore, all the relevant CAs reside within a small set of countries. Our findings confirm that we unnecessarily trust a huge number of CAs, exposing ourselves to unnecessary risks.
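The reported reduction is simple set arithmetic once the per-user issuer set is known. A toy sketch with invented numbers (Rootopia itself extracts the real issuer set from certificate chains observed in the browser history):

```python
# Hypothetical data: a full trust store and the issuing CA observed for each
# host in one user's browsing history (all names are illustrative only).
trust_store = {f"CA-{i}" for i in range(150)}          # ~150 shipped root CAs
history_issuers = {
    "example.com": "CA-3",
    "mail.example.org": "CA-3",
    "bank.example.net": "CA-17",
    "news.example.io": "CA-42",
}

# The individualized trust set is just the CAs actually seen in use.
relevant = set(history_issuers.values())
reduction = 1 - len(relevant) / len(trust_store)
print(f"{len(relevant)} of {len(trust_store)} CAs needed "
      f"({reduction:.0%} reduction in attack surface)")
```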

05:22 [Pub][ePrint]

Mixnets are one of the main approaches to deploy secret and verifiable electronic elections.

General-purpose verifiable mixnets, however, suffer from the drawback that the amount of data to be verified by observers increases linearly with the number of involved mix nodes, the number of decryptors, and the number of voters. Chase et al. proposed a verifiable mixnet at Eurocrypt 2012 based on so-called *malleable proofs* - proofs that do not grow with the number of mix nodes. In work published at PKC 2013, the same authors adapted malleable proofs to verifiable distributed decryption, resulting in a cryptographic voting scheme. As a result, the amount of data to be verified increases only linearly with the number of voters.

However, their scheme leaves several questions open which we address in this paper:

As a first contribution, we adapt a multi-party computation protocol to build a distributed key generation protocol for the encryption scheme underlying their voting scheme. As a second contribution, we decompress their abstract scheme description, identify elementary operations, and count the number of such operations required for mixing and verification. Based on timings for elementary operations, we extrapolate the running times of the mixing and verification processes, allowing us to assess the feasibility of their scheme. For the German case, we conclude that the replacement of postal voting by cryptographic voting based on malleable proofs is feasible on an electoral district level.
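The extrapolation step described above is plain arithmetic: multiply per-voter operation counts by measured per-operation timings and scale by the electorate size. A sketch with entirely made-up counts and timings (the real numbers come from the paper's decomposition and benchmarks):

```python
# Hypothetical operation counts per voter and per-operation timings in ms;
# these values are invented for illustration, not taken from the paper.
ops_per_voter = {"exponentiation": 40, "pairing": 12, "multiplication": 200}
time_ms = {"exponentiation": 0.8, "pairing": 2.5, "multiplication": 0.005}

def mixing_time_hours(n_voters):
    # Total time = voters x sum over operations of (count x unit cost).
    per_voter_ms = sum(ops_per_voter[op] * time_ms[op] for op in ops_per_voter)
    return n_voters * per_voter_ms / 1000 / 3600

# An electoral district of roughly 50,000 voters:
print(f"{mixing_time_hours(50_000):.1f} h")
```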

05:22 [Pub][ePrint]

The enforcement of security policies in outsourced environments is still an open challenge for policy-based systems. Taking the appropriate security decision requires access to the policies; however, if such access is allowed in an untrusted environment, confidential information might be leaked through the policies. Current solutions are based on cryptographic operations that embed security policies within the security mechanism, so that enforcement is performed by giving authorised parties access to the appropriate keys. We believe that such solutions are far too rigid, because they strictly intertwine authorisation policies with the enforcing mechanism.

In this paper, we address the issue of enforcing security policies in an untrusted environment while protecting policy confidentiality. Our solution, ESPOON, aims to provide a clear separation between security policies and the enforcement mechanism, while the enforcement mechanism should learn as little as possible about both the policies and the requester attributes.

05:22 [Pub][ePrint]

This paper introduces a leakage model in the frequency domain to enhance the efficiency of side-channel attacks on CMOS circuits. While the usual techniques focus on noise removal around the clock harmonics, we show that the actual leakage is not necessarily located in those expected bandwidths, as experimentally observed by E. Mateos and C.H. Gebotys in 2010. We start by building a theoretical model of power consumption and electromagnetic emanations, then derive from it a criterion to guide standard attacks. This criterion is validated on real experiments, both on FPGA and ASIC, which show an impressive increase in the yield of SCA.
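The core idea, locating leakage by frequency bin rather than assuming it sits at clock harmonics, can be sketched with synthetic traces. This is not the paper's model: the trace construction, leakage variable, and leaking bin below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_traces, n_samples = 500, 256

# Hypothetical leakage: Hamming weight of a secret-dependent byte per trace.
hw = rng.integers(0, 9, size=n_traces).astype(float)

# Synthetic traces: the leakage modulates a tone at bin 37, buried in noise,
# deliberately away from the bins one would inspect as "clock harmonics".
t = np.arange(n_samples)
traces = (hw[:, None] * np.sin(2 * np.pi * 37 * t / n_samples)
          + rng.standard_normal((n_traces, n_samples)))

# Frequency-domain criterion: correlate |FFT| of the traces with the
# leakage model, bin by bin, and keep the most informative band.
spectra = np.abs(np.fft.rfft(traces, axis=1))
corr = np.array([np.corrcoef(hw, spectra[:, f])[0, 1]
                 for f in range(spectra.shape[1])])
print(np.argmax(np.abs(corr)))  # → 37, the actual leaking bin
```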

05:22 [Pub][ePrint]

To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Pinocchio achieves strong asymptotic efficiency by refining the Quadratic Arithmetic Programs of Gennaro, Gentry, Parno, and Raykova (EuroCrypt 2013).

Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10ms: 5-7 orders of magnitude less than previous work; indeed, Pinocchio is the first general-purpose system to demonstrate per-instance verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60x. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.

05:22 [Pub][ePrint]

We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme known to date. We formally prove that Path ORAM requires O(log^2 N / k) bandwidth overhead for block size B = k * log N. For block sizes bigger than O(log^2 N) bits, Path ORAM is asymptotically better than the best known ORAM scheme with small client storage. Due to its practicality, Path ORAM has been adopted in the design of secure processors since its proposal.
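The access procedure behind Path ORAM (a binary tree of small buckets, a client-side position map, and a stash) is simple enough to sketch. The following is a deliberately non-secure, in-memory toy: the bucket size and tree height are arbitrary, a real deployment keeps the tree at the server and pads every bucket to hide the access pattern.

```python
import random

Z = 4          # blocks per bucket
L = 4          # tree height -> 2**L leaves

class PathORAM:
    """Toy sketch of Path ORAM: read one root-to-leaf path into the stash,
    remap the accessed block to a fresh random leaf, write the path back
    greedily from the leaf upward."""

    def __init__(self):
        self.tree = [[] for _ in range(2 ** (L + 1) - 1)]  # one bucket per node
        self.pos = {}     # block id -> leaf it is currently assigned to
        self.stash = {}   # client-side overflow: block id -> value

    def _path(self, leaf):
        # Node indices from the leaf bucket up to the root.
        node = leaf + 2 ** L - 1
        path = []
        while node >= 0:
            path.append(node)
            node = (node - 1) // 2 if node else -1
        return path

    def access(self, a, new_value=None):
        leaf = self.pos.get(a, random.randrange(2 ** L))
        self.pos[a] = random.randrange(2 ** L)   # remap before writing back
        # 1. Read the whole path into the stash.
        for node in self._path(leaf):
            for bid, val in self.tree[node]:
                self.stash[bid] = val
            self.tree[node] = []
        value = self.stash.get(a)
        if new_value is not None:
            self.stash[a] = new_value
        # 2. Write back, leaf first: place each stash block as deep as its
        #    assigned leaf allows, at most Z blocks per bucket.
        for node in self._path(leaf):
            fits = [bid for bid in self.stash
                    if node in self._path(self.pos[bid])][:Z]
            for bid in fits:
                self.tree[node].append((bid, self.stash.pop(bid)))
        return value

oram = PathORAM()
oram.access("x", 42)
print(oram.access("x"))   # → 42
```

Every access touches exactly one path of L+1 buckets, which is where the logarithmic bandwidth overhead in the abstract comes from.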

05:22 [Pub][ePrint]

Basing signature schemes on strong lattice problems has been a long-standing open issue. Today, two families of lattice-based signature schemes are known: those based on the hash-and-sign construction of Gentry et al., and Lyubashevsky's schemes, which are based on the Fiat-Shamir framework.

In this paper, we show for the first time how to adapt Lyubashevsky's schemes to the ring signature setting. In particular, we transform the scheme of ASIACRYPT 2009 into a ring signature scheme with strong security properties in the random oracle model. Anonymity is ensured in the sense that signatures of different users are within negligible statistical distance even under full key exposure. In fact, the scheme satisfies a notion stronger than the classical full key exposure setting: even if the keypair of the signing user is adversarially chosen, the statistical distance between signatures of different users remains negligible.

Considering unforgeability, the best lattice-based ring signature schemes provide either unforgeability against arbitrary chosen-subring attacks or against insider corruption in log-sized rings. In this paper, we present two variants of our scheme. In the basic one, unforgeability is ensured in both of those settings. By increasing signature and key sizes by a factor k (typically 80-100), we obtain a variant in which unforgeability is ensured against insider corruption attacks for arbitrary rings. The technique used is fairly general and can be adapted to other existing schemes.

05:22 [Pub][ePrint]

With the increasing usage of hardware accelerators in modern heterogeneous System-on-Chips (SoCs), the distinction between hardware and software is no longer rigid. The domain of cryptography is no exception, and efficient hardware designs of so-called software ciphers are becoming increasingly popular. In this paper, we propose, for the first time, an efficient hardware accelerator design for SOSEMANUK, one of the finalists of the eSTREAM stream cipher competition in the software category. Since SOSEMANUK combines the design principles of the block cipher Serpent and the stream cipher SNOW 2.0, we make our design flexible enough to accommodate the independent execution of Serpent and SNOW 2.0. In the process, we identify interesting design points and explore different levels of optimization. We perform a detailed experimental evaluation of the performance figures of each design point, and in each case our figures by far outperform the existing benchmarks. The best throughput achieved by the combined design is 67.84 Gbps for SOSEMANUK, 33.92 Gbps for SNOW 2.0, and 2.12 Gbps for Serpent. The throughput for SOSEMANUK by far outperforms all existing benchmarks on the eSTREAM candidates.