Basic tools in lattice-based cryptography, Vinod Vaikuntanathan (IBM)

We will (a) describe the hard average-case problems on lattices (Ajtai's subset-sum problem and Regev's learning-with-errors problem) and show how to achieve Minicrypt from these problems (in particular, OWFs, PRGs and collision-resistant hashing); and (b) describe a trapdoor structure underlying these problems (the trapdoor sampling procedure of Ajtai '99, in as much detail as time permits) and show how to achieve Cryptomania using it (in particular, trapdoor functions, efficient encryption schemes, identity-based encryption, etc.).
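
To fix ideas, here is a minimal sketch of Ajtai's function, which already gives a collision-resistant hash. The parameters below are toy values of my choosing, not ones from the talk:

    import secrets

    # Toy instance of Ajtai's function f_A(x) = A*x mod Q on short (here: binary)
    # inputs. On binary x this is a subset-sum of the columns of A, and any
    # collision yields a short vector in the lattice {z : A*z = 0 mod Q}.
    n, m, Q = 4, 32, 97     # illustrative sizes; real parameters are far larger
    A = [[secrets.randbelow(Q) for _ in range(m)] for _ in range(n)]

    def f_A(x):             # compressing, since m > n * log2(Q)
        return tuple(sum(a * xj for a, xj in zip(row, x)) % Q for row in A)

    x = [secrets.randbelow(2) for _ in range(m)]
    print(f_A(x))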

Public-Key Cryptosystems Resilient to Key Leakage, Moni Naor (Weizmann)

Most of the work on the formal analysis of cryptographic schemes has traditionally concentrated on abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. In light of the prevalence of such attacks, there have been several attempts to model them and to suggest schemes that resist them. Inspired by recent side-channel attacks, especially the "cold boot attacks", Akavia, Goldwasser and Vaikuntanathan (TCC '09) suggested a framework for modeling the security of encryption schemes against a wide class of side-channel attacks, in which adversarially chosen functions of the secret key are leaked to the attacker. The functions may be chosen after the public key is known, but their total output length is limited. We revisit this framework, and our main results are as follows:

* We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any universal hash proof system (a toy example of such a proof system appears after this list). The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying proof system. Existing constructions of such proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker d-Linear variants), the quadratic residuosity assumption, and Paillier's composite residuosity assumption.

* We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its d-Linear variants), and show that the resulting scheme is resilient to any leakage of L(1-o(1)) bits, where L is the length of the secret key. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO '08), constructed to be a "circular-secure" encryption scheme, is resilient to any leakage of L(1-o(1)) bits. These two proposed schemes complement each other in terms of efficiency.

* We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of L(1-o(1)) bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of L/4 bits, and CCA2-secure with any leakage of L/6 bits.
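
As a point of reference for the first item above, here is a minimal sketch of the standard DDH-based hash proof system (toy group parameters of my choosing; the leakage-resilient encryption built on top of it is not shown):

    import secrets

    # Toy order-q subgroup of Z_p^* with p = 2q + 1 (illustrative sizes only).
    p, q = 2039, 1019
    g1, g2 = 4, 9              # two generators of the order-q subgroup

    # Secret hashing key and public projection key.
    x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
    pk = (pow(g1, x1, p) * pow(g2, x2, p)) % p

    # A valid instance (g1^r, g2^r) together with its witness r.
    r = secrets.randbelow(q)
    u1, u2 = pow(g1, r, p), pow(g2, r, p)

    # Private evaluation (hashing key) agrees with public evaluation (pk plus
    # witness); on *invalid* instances the hash value stays unpredictable,
    # which is what the leakage-resilient construction exploits.
    assert (pow(u1, x1, p) * pow(u2, x2, p)) % p == pow(pk, r, p)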

Joint work with Gil Segev.

Extracting Correlations, Amit Sahai (UCLA)

Randomness extractors convert a dirty source of randomness into a clean source of randomness. We consider a natural extension of randomness extraction, and of the related notion of privacy amplification, to the case of two correlated sources. Our main result is an efficient, interactive two-party protocol which extracts $m$ clean independent instances of a given joint distribution $(X,Y)$ from $n=O(m)$ dirty (or "leaky") instances of the same distribution. The classical case corresponds to $X$ and $Y$ being identical random bits.

This talk will present several applications of correlation extractors to cryptography with leakage and elsewhere.

(To appear, FOCS 2009; Joint work with Yuval Ishai, Eyal Kushilevitz, and Rafail Ostrovsky.)

Protecting Circuits from Computationally-Bounded Leakage, Eran Tromer (MIT)

Building on the approach of Ishai, Sahai, and Wagner (CRYPTO'03), we present a general transformation that compiles any circuit into one that provably resists well-defined classes of side-channel leakage. Our transformation works as long as the side-channel leakage is computationally bounded (i.e., cannot compute a certain function), and we rely only on a minimal leakage-proof device: one that draws random elements from a simple distribution. We thus reduce the problem of shielding arbitrary complex circuits to the problem of shielding a single simple device.

In contrast to previous works, we allow the side-channel leakage to depend on the whole state and all the wires of the circuit, and to grow unbounded over time. Even though the side-channel leakage function is assumed to be computationally bounded, the observer receiving the leakage may be powerful and adaptive.

We prove a general composition theorem that shows how to build a provably secure circuit of arbitrary complexity out of components that satisfy a simple condition. Using circuit lower bounds, we then describe a specific transformation that is unconditionally secure against all leakage functions in the complexity class AC0.
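
A minimal sketch of the encoding idea behind that AC0 instantiation (my toy rendering of the wire encoding only; the gadgets that compute on encoded wires are omitted): each wire value is shared as the parity of many random bits, and parity is exactly what AC0 circuits cannot compute.

    import secrets

    def encode(bit, t=16):
        """Share one wire value as t bits whose XOR equals the bit."""
        shares = [secrets.randbelow(2) for _ in range(t - 1)]
        shares.append(bit ^ (sum(shares) % 2))
        return shares

    def decode(shares):
        return sum(shares) % 2    # parity: hard for AC0 when t grows

    assert decode(encode(1, t=32)) == 1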

Joint work with Sebastian Faust and Leo Reyzin.

Architectural Side Channels in Cloud Computing, Eran Tromer (MIT)

Today's computers typically run numerous processes of varying sensitivity and trustworthiness. The platform purports to separate these by means of mechanisms such as memory protection and virtualization. However, side channels arise from lower architectural layers: contention for shared resources, such as memory caches, creates inadvertent cross-talk between processes. These leakages can be exploited for stealing cryptographic keys and other sensitive information.

Two trends magnify the power and impact of these attacks: increasing parallelism (e.g., multicore and multithreaded CPUs) and diversification of workloads. The situation is especially worrisome where the two trends converge: cloud computing.

In this talk we will discuss architectural side-channel attacks, and cache attacks in particular. We will then present an empirical case study on cloud computing. We show that it is possible to map the internal cloud infrastructure, identify where a particular target virtual machine is likely to reside, and instantiate new virtual machines that are co-resident with the target on the same physical machine. We then show that such co-residence allows attackers to exfiltrate information across virtual-machine boundaries by use of side channels and covert channels.

Includes joint work with Dag Arne Osvik, Thomas Ristenpart, Stefan Savage, Adi Shamir and Hovav Shacham.

Position Based Cryptography, Nishanth Chandran (UCLA)

We consider what constitutes identities in cryptography. Typical examples include your name and your social-security number, or your fingerprint/iris-scan, or your address, or your (non-revoked) public key coming from some trusted public-key infrastructure. In many situations, however, where you are defines your identity. For example, we know the role of a bank teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party is derived from its geographic location.

We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Retrieval Model (a variant of the Bounded Storage Model) and formalize and construct information theoretically secure protocols for two fundamental tasks: secure positioning and position based key-exchange.

We then show that these tasks are in fact universal in this setting -- we show how we can use them to realize Secure Multi-Party Computation.

Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-retrieval framework is, in fact, one of the "right" frameworks (there may be others) to study the foundations of position-based cryptography.

This is joint work with Vipul Goyal, Ryan Moriarty and Rafail Ostrovsky, and will appear at CRYPTO '09.

On leakage-resilient pseudorandom functions, Krzysztof Pietrzak (CWI)

Recently, Standaert et al. (eprint 2009/341) proposed and motivated a restricted model of leakage resilience where the leakage functions must be "non-adaptive". They show that the GGM construction of a PRF from a PRG gives a PRF which is "non-adaptively" leakage-resilient if the PRG is modelled as a random oracle (RO) and, moreover, the leakage functions are not allowed to query the RO.
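
For reference, here is a minimal sketch of the GGM tree construction the result is about (SHA-256 stands in for the length-doubling PRG purely for illustration; it is not the instantiation discussed in the talk):

    import hashlib

    def prg(seed: bytes) -> bytes:
        """Toy length-doubling PRG: 32 bytes in, 64 bytes out."""
        return (hashlib.sha256(seed + b'L').digest()
                + hashlib.sha256(seed + b'R').digest())

    def ggm_prf(key: bytes, x: str) -> bytes:
        """GGM: walk the bits of input x down a binary tree of PRG calls."""
        s = key
        for bit in x:
            out = prg(s)
            s = out[32:] if bit == '1' else out[:32]
        return s

    print(ggm_prf(b'\x00' * 32, '0110').hex())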

In this talk I will show several constructions of PRFs which are "non-adaptively" leakage-resilient in the standard model.

Then I will briefly discuss the possibility of constructing a fully leakage-resilient PRF. Standaert et al. argue that this is impossible, but this seems too pessimistic.

Leakage-Resilient Public-Key Cryptography in the Bounded Retrieval Model, Yevgeniy Dodis (NYU)

We study the design of cryptographic primitives resilient to key-leakage attacks, where an attacker can repeatedly and adaptively learn information about the secret key, subject only to the constraint that the overall amount of such information is bounded by some parameter $\ell$. We construct a variety of leakage-resilient public-key systems, including the first known identification schemes (ID), signature schemes and authenticated key agreement protocols (AKA). Our main result is an efficient three-round leakage-resilient AKA in the Random Oracle model. This protocol ensures that session keys are private and authentic even if (1) the adversary leaks a large fraction of the long-term secret keys of both users prior to the protocol execution and (2) the adversary completely learns the long-term secret keys after the protocol execution. In particular, our AKA protocol provides qualitatively stronger privacy guarantees than leakage-resilient public-key encryption schemes (constructed in prior and concurrent works), since such schemes necessarily become insecure if the adversary can perform leakage attacks after seeing a ciphertext.

Moreover, our schemes can be flexibly extended to the Bounded-Retrieval Model, allowing us to tolerate a very large absolute amount of adversarial leakage $\ell$ (potentially many gigabytes of information), by increasing only the size of the secret key and without any other loss of efficiency in communication or computation. Concretely, given any leakage parameter $\ell$, security parameter $\lambda$, and any desired fraction $0 < \delta \leq 1$, our schemes have the following properties (a worked numeric example follows the list):

(1) Secret key size is $\ell(1+\delta)+O(\lambda)$. In particular, the attacker can learn approximately a $(1-\delta)$ fraction of the secret key.

(2) Public key size is $O(\lambda)$, and independent of $\ell$.

(3) Communication complexity is $O(\lambda/\delta)$, and independent of $\ell$.

(4) All computation reads at most $O(\lambda/\delta^2)$ locations of the secret key, independently of $\ell$.
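
A hedged back-of-the-envelope instantiation (my numbers; all constants hidden in the $O(\cdot)$ notation are taken to be 1):

    ell   = 8 * 10**9   # leakage bound: roughly 1 GB of leaked bits
    delta = 0.1
    lam   = 128         # security parameter

    print(ell * (1 + delta) + lam)   # secret key: ~8.8e9 bits (~1.1 GB)
    print(lam)                       # public key: 128 bits, independent of ell
    print(lam / delta)               # communication: 1280 bits
    print(lam / delta ** 2)          # key locations read: 12800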

Lastly, we show that our schemes allow for repeated ``invisible updates'' of the secret key, allowing us to tolerate up to $\ell$ bits of leakage in between any two updates, and an unlimited amount of leakage overall. These updates require that the parties can securely store a short ``master update key'' (e.g. on a separate secure device protected against leakage), which is only used for updates and not during protocol execution. The updates are invisible in the sense that a party can update its secret key at any point in time, without modifying the public key or notifying the other users.

Non-Interactive Verifiable Computation, Bryan Parno (CMU)

We consider the following problem. A "computationally weak" client C wants to compute a function F on various inputs x_1,...,x_k, and delegates the computation to a "computationally powerful" server S. The client requests that S provide a proof that the value y_i=F(x_i) is correct, where the verification of such a proof should take substantially less computational effort than computing F(x_i) from scratch. We present a non-interactive argument (i.e., secure against a polynomially bounded S) that y_i is indeed F(x_i), which can be verified in constant time. Our scheme also provides input privacy, i.e., C can provide S with an encrypted form of the input x_i over which to compute F. The scheme is based on Gentry's fully homomorphic encryption and does not use PCP techniques.
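
To make the moving parts concrete, here is the shape of the interface such a scheme exposes (a sketch under my own naming assumptions, not the paper's exact syntax); the essential asymmetry is that verify() must be far cheaper than computing F directly:

    def keygen(F):
        """One-time, possibly expensive setup for the function F."""
        return 'pk_F', 'sk_F'                       # placeholders, not real keys

    def problem_gen(pk_F, x):
        """Client encodes (and may encrypt) the input x for the server."""
        return ('sigma_' + repr(x), 'tau_' + repr(x))

    def compute(pk_F, sigma_x):
        """Server's work: roughly the cost of evaluating F, plus a proof."""
        return 'pi_y'

    def verify(sk_F, tau_x, pi_y):
        """Client's check: constant time, returns y or rejects."""
        return 'y or REJECT'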

HAIL: A High-Availability and Integrity Layer for Cloud Storage, Alina Oprea (RSA)

HAIL (High-Availability and Integrity Layer) is a distributed cryptographic system that allows a set of servers to prove to a client that a file dispersed across servers is intact and available for retrieval. Proofs in HAIL are efficiently computable by servers and highly compact---typically tens or hundreds of bytes, irrespective of file size. HAIL cryptographically verifies and reactively reallocates file shares when corruption is detected. It is robust against an active, mobile adversary that may progressively corrupt the full set of servers. HAIL improves on the security and efficiency of existing tools, like Proofs of Retrievability (PORs) deployed on individual servers.

Lossy Encryption from General Assumptions, Brett Hemenway (UCLA)

We give general constructions of Lossy Encryption from rerandomizable encryption and from statistical OT. Applying the recent results of Bellare, Hofheinz and Yilek, this gives efficient, general constructions of Selective Opening secure encryption from a wide variety of primitives, including rerandomizable encryption, homomorphic encryption, PIR and statistical OT.

We extend the definition of Selective Opening Security to Selective Opening Security against a Chosen Ciphertext Attack, in both the indistinguishability-based and the simulation-based models. In the indistinguishability-based model we give a construction that satisfies this definition based on Lossy Trapdoor Functions; in the simulation-based model we give a construction based on statistical NIZKs.

Circular-Secure Encryption from Decision Diffie-Hellman, Shai Halevi (IBM)

We describe a public-key encryption system that remains secure even when encrypting messages that depend on the secret keys in use. In particular, it remains secure under "key cycle" usage, where we have a cycle of public/secret key pairs $(PK_i, SK_i)$ for $i=1,\ldots,n$, and we encrypt each $SK_i$ under $PK_{(i \bmod n)+1}$. Such usage scenarios sometimes arise in key-management systems and in the context of anonymous credential systems. Also, security against key cycles plays a role when relating the "axiomatic security" of protocols that use encryption to the "computational security" of concrete instantiations of these protocols.
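
The key-cycle pattern itself is easy to write down; the sketch below shows only the structure, with a placeholder scheme standing in for real encryption (everything here is illustrative):

    import secrets

    def keygen():
        sk = secrets.token_hex(8)
        return 'pk:' + sk, sk          # placeholder key pair, not a real scheme

    def enc(pk, msg):
        return (pk, msg)               # placeholder "encryption", hides nothing

    n = 3
    keys = [keygen() for _ in range(n)]   # (PK_1,SK_1), ..., (PK_n,SK_n)

    # The cycle: SK_i encrypted under PK_{(i mod n)+1}, for i = 1..n.
    cycle = [enc(keys[i % n][0], keys[i - 1][1]) for i in range(1, n + 1)]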

The existence of encryption systems that are secure in the presence of key cycles was wide open until now: on the one hand we had no constructions that provably meet this notion of security (except by relying on the random-oracle heuristic); on the other hand we had no examples of secure encryption systems that become demonstrably insecure in the presence of key-cycles of length greater than one.

Here we construct an encryption system that is circular-secure against chosen-plaintext attacks under the Decision Diffie-Hellman assumption (without relying on random oracles). Our proof of security holds even if the adversary obtains an encryption clique, that is, encryptions of $SK_i$ under $PK_j$ for all $1 \leq i,j \leq n$. We also construct a circular counterexample: a one-way secure encryption scheme that breaks completely if an encryption cycle (of any size) is published.

Joint work with Dan Boneh, Mike Hamburg and Rafail Ostrovsky.

Patient Privacy in Electronic Medical Records, Melissa Chase (MSR)

Electronic medical records are one of the most visible, and arguably one of the most valuable applications of cloud storage, but also one in which privacy and security are extremely important. Here we will look at the problem of guaranteeing privacy and security in this setting, while still allowing for the necessary functionality. We propose an approach in which each patient encrypts his/her own health record before it is stored in the cloud. We will describe a straightforward solution based on standard crypto primitives and searchable encryption that allows for delegation of access rights and simple searches over the records. But there are many other issues that are not as easy to address, e.g. emergency response, key loss and key management, revocation of access rights, complying with privacy and auditing laws, hiding access patterns, efficiency concerns, etc. We note that many of the same issues also occur in other cloud storage scenarios, but here they are exacerbated by the stronger privacy requirements inherent in the health care setting and the stronger usability requirements necessary to make such a system accessible to the average patient. This talk will describe the setting and the basic solution, and then discuss some of the issues that arise. Joint work with Josh Benaloh, Kristin Lauter, and Eric Horvitz.

Private Data Structures, Seny Kamara (MSR)

We consider privacy-preserving data structures, focusing on the design of cryptographic schemes to encrypt, obfuscate and obliviously query arbitrary data types. We generalize previous work on searchable encryption, obfuscation and oblivious keyword protocols in several directions. First, we introduce formal security definitions for encrypted data structures, obfuscated data structures and oblivious query protocols, and we study the relationships between these notions. Unlike previous work, our definitions are general and are not restricted to any particular data type. We show that in many cases encrypted data structures can be used to construct both obfuscated data structures and oblivious query protocols. Finally we show some simple schemes for encrypting several data types including matrices and graphs supporting various queries. Joint work with Melissa Chase.

On Bounded Distance Decoding, Unique Shortest Vectors, and the Minimum Distance Problem, Vadim Lyubashevsky (UCSD)

We prove the equivalence, up to a small polynomial approximation factor $\sqrt{n/\log n}$, of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well as the BDD problem commonly used in coding theory. The main cryptographic application of our work is the proof that the Ajtai-Dwork (STOC '97) and the Regev (JACM '04) cryptosystems, which were previously only known to be based on the hardness of uSVP, can be equivalently based on the hardness of worst-case GapSVP$_{O(n^{2.5})}$ and GapSVP$_{O(n^{2})}$, respectively. Also, in the case of uSVP and BDD, our connection is very tight, establishing the equivalence (within a small constant approximation factor) between the two most central problems used in lattice-based public-key cryptography and coding theory. This is joint work with Daniele Micciancio.
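
For reference, the three problems in their standard formulations (my paraphrase, with $\lambda_1(L)$ denoting the length of the shortest nonzero vector of the lattice $L$):

* GapSVP$_\gamma$: given a basis $B$ and a distance $d > 0$, decide whether $\lambda_1(L(B)) \leq d$ or $\lambda_1(L(B)) > \gamma d$.

* uSVP$_\gamma$: given a basis of a lattice whose shortest nonzero vector is a factor $\gamma$ shorter than all lattice vectors not parallel to it, find that vector.

* BDD$_\alpha$: given a basis $B$ and a target point $t$ with $dist(t, L(B)) \leq \alpha \cdot \lambda_1(L(B))$, find the lattice vector closest to $t$.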

Fast Cryptographic Primitives and Circular-Secure Encryption Based on Hard Learning Problems, Benny Applebaum (Princeton)

The well-studied task of *learning a linear function with errors* is a seemingly hard problem and the basis for several cryptographic schemes. Here we demonstrate additional applications that enjoy strong security properties and a high level of efficiency. Namely, we construct:

1. Public-key and symmetric-key cryptosystems that provide security for key-dependent messages and enjoy circular security. Our schemes are highly efficient: in both cases the ciphertext is only a constant factor larger than the plaintext, and the cost of encryption and decryption is only n*polylog(n) bit operations per message symbol in the public-key case, and polylog(n) bit operations in the symmetric case.

2. Two efficient pseudorandom objects: a ``weak randomized pseudorandom function'' --- a relaxation of a standard PRF --- that can be computed obliviously via a simple protocol, and a length-doubling pseudorandom generator that can be computed by a circuit of n*polylog(n) size. The complexity of our pseudorandom generator almost matches the complexity of the fastest known construction (Applebaum, Ishai and Kushilevitz, RANDOM 2006), which runs in linear time at the expense of relying on a nonstandard intractability assumption.

Our constructions and security proofs are simple and natural, and involve new techniques that may be of independent interest. In addition, by combining our constructions with prior ones, we get fast implementations of several other primitives and protocols. This is joint work with David Cash, Chris Peikert and Amit Sahai. To appear in CRYPTO 2009.
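
A minimal sketch of the kind of symmetric encryption that lives in this family (toy parameters and error distribution of my choosing; the actual schemes in the talk differ in details that matter for key-dependent-message security):

    import secrets

    n, Q = 8, 97                                    # toy dimension and modulus
    s = [secrets.randbelow(Q) for _ in range(n)]    # secret key

    def enc_bit(m):
        """Encrypt one bit as a noisy inner product shifted by m * (Q // 2)."""
        a = [secrets.randbelow(Q) for _ in range(n)]
        e = secrets.randbelow(3) - 1                # toy error in {-1, 0, 1}
        b = (sum(x * y for x, y in zip(a, s)) + e + m * (Q // 2)) % Q
        return a, b

    def dec_bit(ct):
        a, b = ct
        d = (b - sum(x * y for x, y in zip(a, s))) % Q
        return 1 if Q // 4 < d < 3 * Q // 4 else 0  # round to 0 or Q/2

    assert dec_bit(enc_bit(1)) == 1 and dec_bit(enc_bit(0)) == 0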

Functional Encryption: An Overview and Survey, Brent Waters (UT Austin)

Functional encryption is a way of "rethinking" encryption, where a user will learn a function of the encrypted data and their private key. In a functional encryption system, a user is assigned (by an authority) a secret key SK_y for a string 'y'. An encryption algorithm encrypts data x and outputs a ciphertext CT. A user holding SK_y who receives the ciphertext can apply the "decryption" algorithm to learn f(x,y).
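
A toy rendering of that syntax (no actual cryptography: the "ciphertext" below stores x in the clear; only the functionality contract is shown, with identity-based encryption as the example functionality):

    class ToyFE:
        """Interface sketch: a key for y plus a ciphertext for x yields f(x, y)."""
        def __init__(self, f):
            self.f = f
        def keygen(self, y):            # authority issues SK_y
            return {'y': y}
        def encrypt(self, x):           # a real scheme would hide x here
            return {'x': x}
        def decrypt(self, sk, ct):
            return self.f(ct['x'], sk['y'])

    # IBE as functional encryption: x = (identity, message), y = an identity.
    ibe = ToyFE(lambda x, y: x[1] if x[0] == y else None)
    ct = ibe.encrypt(('alice@example.com', 'hello'))
    print(ibe.decrypt(ibe.keygen('alice@example.com'), ct))   # -> hello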

In this talk, I will overview the concept of functional encryption and give a survey of recent results. I begin by showing how several recent encryption systems (e.g., IBE, Attribute-Based Encryption, Searching on Encrypted Data) can be unified as cases of functional encryption. In addition, I will discuss the main ideas and techniques behind existing constructions. Finally, I will describe four open research directions in the area and attempt to give insights into the main challenges in each direction.

Short and Stateless Signatures from the RSA Assumption

We present the first signature scheme which is "short", stateless and secure under the RSA assumption in the standard model. Prior short, standard-model signatures in the RSA setting required either a strong complexity assumption such as Strong RSA or (recently) that the signer maintain state. A signature in our scheme consists of one element in $Z_N^*$ and one integer. As a design strategy, we employ the known generic construction of fully-secure signatures from weakly-secure signatures and a chameleon hash. We then introduce a new proof technique for reasoning about weakly-secure signatures which enables the simulator to predict a prefix of the message on which the adversary will forge, and to use knowledge of this prefix to embed the challenge. This technique has wider applications beyond RSA, which we will demonstrate and discuss.
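
Since the design strategy leans on a chameleon hash, here is a minimal sketch of one (the classic discrete-log-based hash, chosen for brevity; the paper itself works in the RSA setting). Whoever holds the trapdoor can open the hash to any message:

    import secrets

    # Toy group: order-q subgroup of Z_p^*, p = 2q + 1 (illustrative sizes only).
    p, q, g = 2039, 1019, 4
    x = secrets.randbelow(q - 1) + 1         # trapdoor
    h = pow(g, x, p)

    def ch(m, r):                            # chameleon hash CH(m, r) = g^m * h^r
        return (pow(g, m, p) * pow(h, r, p)) % p

    m, r = 123, secrets.randbelow(q)
    m2 = 456
    r2 = ((m - m2) * pow(x, -1, q) + r) % q  # trapdoor collision (Python 3.8+)
    assert ch(m, r) == ch(m2, r2)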

On Recycling Encryption Schemes or Achieving Resistance to Cache Attacks via Low Bandwidth Encryption, Moni Naor (Weizmann)

Some of the most severe side-channel attacks proposed are those based on the sharing of resources, such as memory caches, by processes with various privileges. For instance, Osvik, Shamir and Tromer showed how to mount a "cache attack" that breaks implementations of AES by creating high-bandwidth cross-talk between an AES process and an unprivileged process.

In this talk I will show how to defend against cache attacks by using an encryption scheme of Blaze, Feigenbaum and Naor, constructed for Remotely Keyed Encryption.

Solving Hidden Number Problem with One Bit Oracle and Advice, Adi Akavia (IAS)

In the Hidden Number Problem (HNP), the goal is to find a hidden number s when given p, g and access to an oracle that, on query "a", returns the k most significant bits of $s \cdot g^a \bmod p$.
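
Concretely, the oracle looks like this (toy parameters of my choosing):

    import secrets

    p, g = 2039, 7                       # toy prime and base (illustrative only)
    s = secrets.randbelow(p - 1) + 1     # the hidden number to be recovered
    k = 4                                # number of most-significant bits leaked

    def hnp_oracle(a):
        """On query a, return the k most significant bits of s * g^a mod p."""
        v = (s * pow(g, a, p)) % p
        return v >> (p.bit_length() - k)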

We present an algorithm solving HNP when given advice depending only on p and g; the running time and advice length are polynomial in log p. This algorithm improves over prior HNP algorithms in achieving: (1) an optimal number of bits $k \ge 1$ (compared with $k \ge \Omega(\log\log p)$); (2) robustness to random noise; and (3) handling a wide family of predicates in addition to the most significant bit.

As a central tool we present an algorithm that, given oracle access to a function f over $Z_N$, outputs all the significant Fourier coefficients of f (i.e., those occupying, say, at least 1% of the energy). This algorithm improves over prior works in being:

1. Local: its running time is polynomial in log N and $L_1(\hat f)$ (where $L_1(\hat f)$ is the sum of the absolute values of f's Fourier coefficients).

2. Universal: for any N and t, the *same* oracle queries are asked for all functions f over $Z_N$ such that $L_1(\hat f) \le t$.

3. Robust: the algorithm succeeds with high probability even if the oracle to f is corrupted by random noise.
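
To pin down what "significant Fourier coefficients" means, here is a naive, non-local reference computation (a brute-force DFT over all of $Z_N$, which is exactly what the talk's local algorithm avoids):

    import numpy as np

    def significant_coeffs(f_vals, tau=0.01):
        """Return all a with |f^(a)|^2 >= tau * (total energy), by brute force."""
        N = len(f_vals)
        fhat = np.fft.fft(f_vals) / N          # normalized Fourier coefficients
        energy = np.sum(np.abs(fhat) ** 2)
        return [a for a in range(N) if abs(fhat[a]) ** 2 >= tau * energy]

    # A pure frequency-3 cosine has its energy at coefficients 3 and N-3.
    print(significant_coeffs(np.cos(2 * np.pi * 3 * np.arange(64) / 64)))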

