International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2013-10-28
21:17 [Pub][ePrint]

Oblivious RAM (ORAM) has recently attracted a lot of interest, since it can be used to hide a user's data access pattern from (honest-but-curious) outsourced storage. This is achieved by simulating each original data read or write operation with read and write operations on several real and dummy data items. This paper proposes two single-server write-only ORAM schemes and one multi-server write-only ORAM scheme, which simulate only the write operations and protect only the write pattern. This reduced functionality, however, allows us to build much simpler and more efficient (in terms of communication cost and storage usage) write-only ORAMs. Write-only ORAM can be used in conjunction with Private Information Retrieval (PIR), a technique for protecting a user's read patterns, in order to protect both write and read patterns; it may also be used alone when only write patterns need protection. We study two usage scenarios:

(i) Data publishing/sharing, where a data owner shares the data with consumers who only read the published information. Data consumers have no write access to the outsourced data and thus cannot use ORAM to protect their read patterns in this scenario. To hide access patterns from the outsourced storage, the data owner can use ORAM to write data while data consumers use PIR to read it. Alternatively, for some applications, a data consumer can trivially download all data once or regularly, and neither the data owner nor the data consumers mind that the outsourced storage learns this read pattern. Compared with traditional ORAM, using the simpler write-only ORAM here incurs much lower communication cost and/or client-side storage usage. Our single-server write-only ORAM scheme achieves lower (typically one order of magnitude lower) communication cost with the same client-side storage usage, or requires much less (typically at least one order of magnitude less) client-side storage to achieve the same communication cost, than the best known single-server full-functionality ORAM schemes. Compared with the best known multi-server ORAM scheme, our write-only ORAM schemes have lower (typically one order of magnitude lower) communication cost, or achieve the same communication cost with the same client-side storage usage in the single-server setting.

(ii) The data owner's personal use: our write-only ORAM schemes combined with PIR can serve as building blocks for some existing full-functionality ORAM schemes. This reduces the communication costs of two full-functionality ORAM schemes by factors of $O(\log N)$ and $O(\sqrt{\log N}\times \log\log N)$, where $N$ is the maximum data item count. One of the resulting schemes has a communication cost of $O(l)$, where $l$ is the data item length; this is typically one order of magnitude lower than the previous best known ORAM scheme's cost of $O(\log N \times l)$. The other resulting scheme also achieves $O(\log N \times l)$ communication cost, but its client-side storage usage is several orders of magnitude lower than the best known single-server ORAM's.
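As a toy illustration of the write-only idea (not the paper's construction; the class name, slot count, and parameter `k` below are hypothetical), a client can hide which logical block a write targets by touching a fixed-size random set of physical slots on every write and keeping the logical-to-physical map locally:

```python
import secrets

class ToyWriteOnlyORAM:
    """Illustrative sketch: every logical write touches exactly k random
    physical slots, so the server only sees uniformly random write locations.
    Real schemes additionally re-encrypt every touched slot with fresh
    randomness and handle overflow with a client-side stash."""

    def __init__(self, num_slots=64, k=4):
        self.k = k
        self.slots = [None] * num_slots   # server-side storage
        self.position = {}                # client-side map: item id -> slot

    def write(self, item_id, value):
        picks = secrets.SystemRandom().sample(range(len(self.slots)), self.k)
        free = [s for s in picks if self.slots[s] is None]
        if not free:                      # toy scheme may fail; real schemes
            raise RuntimeError("no free slot among picks")  # use a stash
        target = free[0]
        old = self.position.get(item_id)
        if old is not None:
            self.slots[old] = None        # invalidate the stale copy
        self.slots[target] = value
        self.position[item_id] = target
        # In a real scheme, every slot in `picks` is rewritten with fresh
        # ciphertext, so the server cannot tell which slot carries the data.
        return picks

    def read(self, item_id):              # reads are NOT protected here
        return self.slots[self.position[item_id]]
```

Note that reads go through the client-side position map directly; hiding them is exactly what the paper delegates to PIR.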

21:17 [Pub][ePrint]

This paper introduces a dedicated authenticated encryption algorithm, AEGIS. AEGIS allows for the protection of associated data, which makes it very suitable for protecting network packets. AEGIS-128L uses eight AES round functions to process a 32-byte message block (one step); AEGIS-128 uses five AES round functions to process a 16-byte message block (one step); AEGIS-256 uses six AES round functions. The security analysis shows that these algorithms offer a high level of security. On the Intel Sandy Bridge Core i5 processor, the speeds of AEGIS-128L, AEGIS-128 and AEGIS-256 are around 0.48, 0.66 and 0.7 clock cycles per byte (cpb), respectively, for 4096-byte messages. This is substantially faster than the AES-CCM, GCM and OCB modes.

21:17 [Pub][ePrint]

It has become much easier to crack a password hash with the advancements in graphics processing unit (GPU) technology. An adversary can recover a user's password using a brute-force attack on the password hash, and no server can detect any illegitimate user authentication (if there is no extra mechanism used). In this context, Juels and Rivest recently published a paper on improving the security of hashed passwords. Roughly speaking, they propose an approach to user authentication in which some false passwords, i.e. "honeywords", are stored alongside the real password in order to detect impersonation. Their solution includes an auxiliary secure server called the "honeychecker", which can distinguish a user's real password among her honeywords and immediately sets off an alarm whenever a honeyword is used. In this paper, we analyze the security of the proposal and provide some possible improvements which are easy to implement.
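A minimal sketch of the honeyword mechanism described above (the class names, hashing choice, and decoy count are illustrative assumptions, not Juels and Rivest's exact construction): the login server stores hashes of all sweetwords in random order, while the separate honeychecker stores only the index of the real one.

```python
import hashlib
import secrets

class HoneyChecker:
    """Auxiliary server: knows only which index is the real password."""
    def __init__(self):
        self._real_index = {}
        self.alarms = []

    def set(self, user, index):
        self._real_index[user] = index

    def check(self, user, index):
        if index == self._real_index[user]:
            return True
        self.alarms.append(user)          # honeyword used: sound the alarm
        return False

class LoginServer:
    """Main server: stores hashed sweetwords (real password + decoys)."""
    def __init__(self, checker):
        self.checker = checker
        self.sweetwords = {}

    def register(self, user, password, n_decoys=3):
        words = [password] + [secrets.token_urlsafe(8) for _ in range(n_decoys)]
        secrets.SystemRandom().shuffle(words)
        self.sweetwords[user] = [hashlib.sha256(w.encode()).hexdigest()
                                 for w in words]
        self.checker.set(user, words.index(password))

    def login(self, user, attempt):
        h = hashlib.sha256(attempt.encode()).hexdigest()
        hashes = self.sweetwords[user]
        if h not in hashes:
            return False                   # ordinary wrong password
        return self.checker.check(user, hashes.index(h))
```

The point of the split is that an attacker who cracks the stolen hash file learns all sweetwords but not which index is real, so submitting a decoy trips the alarm.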

21:17 [Pub][ePrint]

Threshold Implementations provide provable security against first-order power analysis attacks for hardware and software implementations. Like masking, the approach relies on secret sharing, but it differs in the implementation of logic functions. At Eurocrypt 2011, Moradi et al. published the most compact Threshold Implementation of AES-128 encryption to date. Their work shows that the number of required random bits may be an additional evaluation criterion, next to area and speed. We present a new Threshold Implementation of AES-128 encryption that is 18% smaller, 7.5% faster and requires 8% fewer random bits than the implementation from Eurocrypt 2011. In addition, we provide the results of a practical security evaluation based on real power traces in adversary-friendly conditions. They confirm the first-order attack resistance of our implementation and show good resistance against higher-order attacks.
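As background on how Threshold Implementations differ from plain masking, here is the standard three-share AND gate often used to illustrate the idea (a textbook example, not the paper's AES circuit): each output share is computed without ever touching the same-indexed shares of the inputs, yet the shares still XOR to the correct product.

```python
import secrets

def share3(bit):
    """Split a bit into three XOR shares."""
    r1, r2 = secrets.randbelow(2), secrets.randbelow(2)
    return (r1, r2, bit ^ r1 ^ r2)

def unshare(shares):
    """Recombine XOR shares."""
    s = 0
    for b in shares:
        s ^= b
    return s

def ti_and(x, y):
    """Three-share threshold AND: output share i never uses input share i
    (non-completeness), so any single probe sees at most two shares of
    each secret and leaks nothing about the unshared values."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of x1, y1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of x2, y2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of x3, y3
    return (z1, z2, z3)
```

Summing the three output shares covers every cross term $x_i y_j$ exactly once, which is why correctness holds while no single share function is complete.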

21:17 [Pub][ePrint]

In 2012, Alagheband and Aref presented a dynamic and secure key management model for hierarchical heterogeneous sensor networks. They proposed a signcryption algorithm, which is the main building block in their key management model, and proved that the algorithm is as strong as the elliptic curve discrete logarithm problem. In this work, we study the security of their signcryption algorithm. Unfortunately, we found that their algorithm is insecure: the adversary can impersonate the base station by sending forged messages to the cluster leaders after capturing the signcrypted messages. Hence, the key management model proposed by them is insecure. We then propose an improved signcryption algorithm to fix this weakness.

21:17 [Pub][ePrint]

We show that it is possible to upgrade an obfuscator for a weak complexity class $\mathcal{WEAK}$ into an obfuscator for arbitrary polynomial-size circuits, assuming that the class $\mathcal{WEAK}$ can compute pseudorandom functions. Specifically, under standard intractability assumptions (e.g., hardness of factoring, Decisional Diffie--Hellman, or Learning with Errors), the existence of obfuscators for $\mathsf{NC}^1$ or even $\mathsf{TC}^0$ implies the existence of general-purpose obfuscators for $\mathsf{P}$. Previously, such a bootstrapping procedure was known to exist under the assumption that there exists a fully homomorphic encryption scheme whose decryption algorithm can be computed in $\mathcal{WEAK}$. Our reduction works with respect to virtual black-box obfuscators and relativizes to ideal models.

21:17 [Pub][ePrint]

We describe a new algorithm for masking look-up tables of block ciphers at any order, as a countermeasure against side-channel attacks. Our technique is a generalization of the classical randomized-table countermeasure against first-order attacks. We prove the security of our new algorithm against $t$-th order attacks in the usual Ishai-Sahai-Wagner model from Crypto 2003; we also improve the bound on the number of shares from $n \geq 4t+1$ to $n \geq 2t+1$ for an adversary who can adaptively move its probes between successive executions.

Our algorithm has the same time complexity $O(n^2)$ as the Rivain-Prouff algorithm for AES and its extension by Carlet et al. to any look-up table. In practice, for AES our algorithm is less efficient than Rivain-Prouff, which can take advantage of the special algebraic structure of the AES S-box; however, for DES our algorithm performs slightly better.
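For context, the classical first-order randomized-table countermeasure that the paper generalizes can be sketched as follows (a textbook version with a toy table, not the paper's higher-order algorithm): the table is recomputed under an input mask and an output mask, so the lookup only ever handles masked values.

```python
import secrets

# Toy 8-bit table standing in for a block-cipher S-box (illustrative only;
# a real S-box would be nonlinear).
TABLE = [(x * 7 + 3) % 256 for x in range(256)]

def masked_lookup(x_masked, r_in, r_out, table=TABLE):
    """First-order randomized table: build T'[i] = T[i ^ r_in] ^ r_out,
    then look up the masked input x ^ r_in. The result is T[x] ^ r_out,
    so neither the unmasked input x nor the unmasked output T[x] ever
    appears during the computation."""
    t_masked = [table[i ^ r_in] ^ r_out for i in range(256)]
    return t_masked[x_masked]

# Usage: mask the input, look up, unmask the output.
x = 0x5A
r_in, r_out = secrets.randbelow(256), secrets.randbelow(256)
y_masked = masked_lookup(x ^ r_in, r_in, r_out)
assert y_masked ^ r_out == TABLE[x]
```

The first-order weakness of this scheme, and the cost of fixing it at order $t$, is exactly what motivates share-count bounds like the $n \geq 2t+1$ result above.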

21:17 [Pub][ePrint]

We show that if there exist indistinguishability obfuscators for a certain class $C$ of circuits, then there do not exist independent-auxiliary-input virtual-black-box (VBB) obfuscators for any family of circuits that compute a pseudo-entropic function. A function $f_k$ is pseudo-entropic if it is hard, given oracle access to $f_k$ but without explicitly querying a value $x$, to distinguish $f_k(x)$ from a random variable with some real entropy.

This strengthens the result of Goldwasser and Kalai [FOCS 05, ePrint 13], which rules out dependent-auxiliary-input VBB obfuscation for the same set of circuit families, assuming indistinguishability obfuscators for another class, $C'$, of circuits. That is, while they only rule out the case where the adversary and the simulator obtain auxiliary information that depends on the actual (secret) obfuscated function, we rule out even the case where the auxiliary input depends only on the (public) family of programs.

21:17 [Pub][ePrint]

Non-malleable codes, defined by Dziembowski, Pietrzak and Wichs (ICS '10), provide roughly the following guarantee: if a codeword $c$ encoding some message $x$ is tampered to $c' = f(c)$ such that $c' \neq c$, then the tampered message $x'$ contained in $c'$ reveals no information about $x$. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks.

One cannot have an efficient non-malleable code that protects against all efficient tampering functions $f$. However, in this work we show "the next best thing": for any polynomial bound $s$ given a priori, there is an efficient non-malleable code that protects against all tampering functions $f$ computable by a circuit of size $s$. More generally, for any family of tampering functions $\mathcal{F}$ of size $|\mathcal{F}| \leq 2^{s}$, there is an efficient non-malleable code that protects against all $f \in \mathcal{F}$. The rate of our codes, defined as the ratio of message to codeword size, approaches $1$. Our results are information-theoretic, and our main proof technique relies on a careful probabilistic-method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the "common reference string" (CRS) model.

We also introduce a new notion of non-malleable key derivation, which uses randomness $x$ to derive a secret key $y = h(x)$ in such a way that, even if $x$ is tampered to a different value $x' = f(x)$, the derived key $y' = h(x')$ does not reveal any information about $y$. Our results for non-malleable key derivation are analogous to those for non-malleable codes.

As a useful tool in our analysis, we rely on the notion of "leakage-resilient storage" of Davi, Dziembowski and Venturi (SCN '10) and, as a result of independent interest, we also significantly improve on the parameters of such schemes.

2013-10-24
18:17 [Pub][ePrint]

Bideniable encryption allows both the sender and the receiver in a public-key setting to simultaneously claim that a different message of their choice was transmitted, and to support this claim with plausible-looking encryption and key-generation randomness, respectively. A weaker version with two variants of the algorithms is called flexible or multi-distributional deniability; a stronger one-algorithm version is called full deniability. Bendlin et al. (ASIACRYPT 2011) showed that certain kinds of fully-deniable schemes are constructible from corresponding flexible schemes. In this paper, we show that their construction has a flaw in the bideniable case, and we provide a fixed scheme. Our construction relies on a deterministic subset-matching algorithm that assigns to each nonempty subset of a finite set a proper subset smaller by exactly one element. Although this algorithm is crucial in our construction, it may also be of independent interest.

18:17 [Pub][ePrint]

In this paper, we present a heuristic algorithm for solving exact, as well as approximate, SVP and CVP for lattices. This algorithm is based on a new approach which is very different from, and complementary to, the sieving technique. This new approach frees us from the kissing-number bound and allows us to solve SVP and CVP in lattices of dimension $n$ in time $2^{0.377n}$ using memory $2^{0.292n}$. The key idea is to no longer work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in a dense overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice of our tower. Repeating this, we climb all the way to the top of the tower and finally obtain solution vector(s) in the initial lattice as a sum of vectors of the overlattice just below it. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case.
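For reference, the Gaussian heuristic that the complexity analysis relies on predicts the length of the shortest vector of a "random" $n$-dimensional lattice from its volume. The formula below is the standard textbook estimate; its role in the paper's specific analysis is only sketched here:

```python
import math

def gaussian_heuristic(n, volume=1.0):
    """Standard Gaussian-heuristic estimate for the first minimum of an
    n-dimensional lattice L:
        lambda_1(L) ~ sqrt(n / (2*pi*e)) * vol(L)**(1/n)
    i.e. the radius of a ball whose volume equals vol(L)."""
    return math.sqrt(n / (2 * math.pi * math.e)) * volume ** (1.0 / n)

# e.g. a unit-volume lattice in dimension 100:
print(round(gaussian_heuristic(100), 3))
```

In experiments such as those mentioned above, one checks that the shortest vectors actually found have norms close to this estimate.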