No abstract available.
Monday, July 6th, 2015
No abstract available.
In 1996, Ajtai proved a remarkable connection between the average-case and worst-case complexity of lattice problems, which opened the door to the use of lattices in the design of secure cryptographic functions. That's also the year I started learning about cryptography, and made lattice cryptography my main research focus for the following 20 years. Today lattice cryptography is a thriving research area, providing one of the most attractive "post-quantum" alternatives to mainstream number theoretic cryptography, and offering powerful tools to attack the most challenging cryptographic problems like fully homomorphic encryption and program obfuscation. In this talk, I will go over the evolution of lattice cryptography, from an area of theoretical cryptography at the intersection with computational complexity, to the mature research area it is today.
Tuesday, July 7th, 2015
We give $2^{n+o(n)}$-time algorithms for the two most central computational lattice problems, SVP and CVP, improving on the $4^{n+o(n)}$-time algorithm of Micciancio and Voulgaris. Our primary technical tool is an algorithm for sampling from the discrete Gaussian distribution, based on the elementary yet powerful observation that vectors from the discrete Gaussian distribution can be carefully combined to obtain exact samples from a narrower discrete Gaussian. The SVP algorithm is nearly immediate after this observation. The CVP algorithm requires more work, including analysis of the "structure" of approximate closest vectors.
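The combining observation has a simple one-dimensional toy version. The sketch below (all parameters, the rejection sampler, and the sample counts are illustrative, not taken from the paper) pairs up samples from a discrete Gaussian over Z that agree mod 2 and averages each pair; up to a small statistical distance the averages follow the narrower discrete Gaussian with parameter smaller by a factor of sqrt(2).

```python
import math, random

random.seed(0)

def sample_dgauss(s, tail=10):
    """Rejection-sample the discrete Gaussian D_{Z,s}, density
    proportional to exp(-pi * x^2 / s^2)."""
    while True:
        x = random.randint(-tail * int(s), tail * int(s))
        if random.random() < math.exp(-math.pi * x * x / (s * s)):
            return x

def combine(samples):
    """Pair samples that agree mod 2 and average each pair; up to a
    small statistical distance the averages follow the narrower
    discrete Gaussian D_{Z, s/sqrt(2)}."""
    out = []
    for parity in (0, 1):
        bucket = [x for x in samples if x % 2 == parity]
        for a, b in zip(bucket[::2], bucket[1::2]):
            out.append((a + b) // 2)  # same parity, so the average is an integer
    return out

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

s = 20.0
wide = [sample_dgauss(s) for _ in range(40000)]
narrow = combine(wide)
ratio = std(wide) / std(narrow)   # expect roughly sqrt(2) ~ 1.414
```

In the actual algorithm the combination is done over cosets of a sublattice rather than parities of integers, which is what makes the samples provably exact.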
No abstract available.
No abstract available.
We investigate the security of Diffie-Hellman key exchange as used in popular Internet protocols and find it to be less secure than widely believed. With the number field sieve algorithms, computing a single discrete log in prime fields is more difficult than factoring an RSA modulus of the same size. However, an adversary who performs a large precomputation for a prime p can then quickly calculate arbitrary discrete logs in groups modulo that prime, amortizing the cost over all targets that share this parameter. The algorithm can be tuned to reduce individual log cost even further. Although this fact is well known among mathematical cryptographers, it seems to have been lost among practitioners deploying cryptosystems. Using these observations, we implement a new attack on TLS in which a man-in-the-middle can downgrade a connection to 512-bit export-grade cryptography. In the 1024-bit case, we estimate that discrete log computations are plausible given nation-state resources, and a close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.
(Joint work with David Adrian, Karthikeyan Bhargavan, J. Alex Halderman, Benjamin VanderSloot, Eric Wustrow, Zakir Durumeric, Pierrick Gaudry, Matthew Green, Drew Springall, Emmanuel Thomé, Luke Valenta, Santiago Zanella-Béguelin, and Paul Zimmermann.)
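The amortization point can be illustrated at toy scale with baby-step giant-step, whose baby-step table plays the role of the (vastly larger) number field sieve precomputation: it depends only on the group parameters, not on the target. The prime, generator, and table layout below are illustrative.

```python
import math

def precompute(p, g):
    """One-time work depending only on the group parameters (p, g): a
    baby-step table of size ~sqrt(p). The number field sieve
    precomputation likewise depends only on the prime p."""
    m = math.isqrt(p - 1) + 1
    table, e = {}, 1
    for j in range(m):
        table[e] = j          # table maps g^j -> j
        e = (e * g) % p
    return m, table

def dlog(p, g, h, pre):
    """Per-target work after the precomputation: giant steps only."""
    m, table = pre
    giant = pow(g, p - 1 - m, p)   # g^(-m) mod p, by Fermat's little theorem
    y = h
    for i in range(m):
        if y in table:
            return i * m + table[y]
        y = (y * giant) % p
    return None

p, g = 1000003, 2   # toy prime; real-world parameters are 512+ bits
pre = precompute(p, g)                 # amortized over all targets mod p
x1 = dlog(p, g, pow(g, 1234, p), pre)
x2 = dlog(p, g, pow(g, 98765, p), pre)
```

After the one-time table build, each additional discrete log modulo the same prime costs only the giant-step loop, mirroring how a single NFS precomputation for a widely shared prime compromises every connection that uses it.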
Wednesday, July 8th, 2015
Time-lock puzzles, introduced by May, Rivest, Shamir and Wagner, are a mechanism for sending messages “to the future”. A sender can quickly generate a puzzle with a solution s that remains hidden until a moderately large amount of time t has elapsed. The solution s should be hidden from any adversary that runs in time significantly less than t, including resourceful parallel adversaries with polynomially many processors. While the notion of time-lock puzzles has been around for 22 years, only a single candidate has been proposed. Fifteen years ago, Rivest, Shamir and Wagner suggested a beautiful candidate time-lock puzzle based on the assumption that exponentiation modulo an RSA integer is an “inherently sequential” computation.
Based on joint work with Shafi Goldwasser, Abhishek Jain, Omer Paneth, Vinod Vaikuntanathan, and Brent Waters.
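The Rivest-Shamir-Wagner candidate can be sketched at toy scale as follows (the primes, base, and delay parameter are illustrative; a real puzzle uses large secret primes and a t chosen to force the desired wall-clock delay).

```python
# Toy version of the Rivest-Shamir-Wagner time-lock puzzle: the
# solution is s = x^(2^t) mod N. Anyone else must perform t sequential
# squarings; the puzzle creator, who knows phi(N), shortcuts the
# exponent. Parameters are toy-sized for illustration.
p, q = 1000003, 1000033          # would be large secret primes in practice
N, phi = p * q, (p - 1) * (q - 1)
t, x = 10000, 5                  # t controls the forced delay

# Creator's shortcut: reduce the exponent 2^t modulo phi(N).
s_fast = pow(x, pow(2, t, phi), N)

# Solver's route: t squarings, one after another (believed inherently
# sequential, even for massively parallel adversaries).
y = x % N
for _ in range(t):
    y = (y * y) % N
s_slow = y
```

The gap between the two routes is the whole point: the creator's cost is essentially independent of t, while the solver's cost grows linearly in t and, conjecturally, cannot be parallelized away.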
We consider randomized encodings (RE) that enable encoding a Turing machine P and input x into its “randomized encoding” \hat{P(x)} in time sublinear, or even polylogarithmic, in the running time of P(x), independent of its output length. We refer to the former as sublinear RE and the latter as compact RE. For such efficient RE, the standard simulation-based notion of security is impossible, and we thus consider a weaker (distributional) indistinguishability-based notion of security: roughly speaking, we require indistinguishability of \hat{P_0(x_0)} and \hat{P_1(x_1)} as long as (P_0, x_0) and (P_1, x_1) are sampled from distributions such that (P_0(x_0), Time(P_0(x_0))) and (P_1(x_1), Time(P_1(x_1))) are indistinguishable.
We first observe that compact RE is equivalent to a variant of the notion of indistinguishability obfuscation (iO)—which we refer to as puncturable iO—for the class of Turing machines without inputs. For the case of circuits, puncturable iO and iO are equivalent (and this fact is implicitly used in the powerful “punctured program” paradigm by Sahai and Waters [SW14]).
We next show the following:
1. Impossibility in the Plain Model: Assuming the existence of one-way functions, subexponentially-secure sublinear RE does not exist. (Additionally assuming subexponentially-secure iO for circuits, we can also rule out polynomially-secure sublinear RE.) As a consequence, we also rule out puncturable iO for Turing machines (even those without inputs).
2. Feasibility in the CRS model and Applications to iO for circuits: Subexponentially-
3. Applications to iO for Unbounded-input Turing machines: Subexponentially-
We propose a new efficient RKA-secure KEM scheme based on the DBDH assumption. Our scheme is secure against a function class that simultaneously contains polynomial functions of (bounded) polynomial degree and XOR functions.
In general, lattice-based cryptographic primitives offer very good performance and allow for strong security reductions. However, today’s most efficient lattice-based signature schemes sacrifice security to achieve good performance. Firstly, security is based on ideal lattice problems, which potentially are weaker than standard lattice problems. Secondly, the respective security reductions are loose; hence, their choices of parameters offer security merely heuristically. We bridge this gap by proving the lattice-based signature scheme TESLA to be tightly secure based on the learning with errors problem over standard lattices in the random oracle model. As such, we improve the original proposal by Bai and Galbraith (CT-RSA’14) twofold; we enhance the security by both tightening the reduction proof and minimizing the underlying security assumptions. Remarkably, we even greatly improve TESLA’s performance while providing stronger security.
We show how the Blum-Kalai-Wasserman algorithm for Learning With Errors can be modified to require subexponential time when the secret is small. This gives the first subexponential algorithms for several other problems, such as subset-sum, NTRU, and new lattice problems.
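A single BKW-style reduction step can be sketched as follows (the secret distribution, modulus, dimension, and block width below are toy choices for illustration): samples whose first block of coordinates collide are subtracted, zeroing that block while roughly doubling the error.

```python
import random

random.seed(1)
q, n, k = 97, 8, 4          # modulus, dimension, block width (toy sizes)
s = [random.randrange(q) for _ in range(n)]

def lwe_sample():
    """An LWE sample (a, <a,s> + e mod q) with a small error e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

def bkw_step(samples, k):
    """One BKW reduction step: bucket samples by their first k
    coordinates and subtract colliding pairs, producing samples whose
    first k coordinates are zero, at the price of summing two errors."""
    buckets, out = {}, []
    for a, b in samples:
        key = tuple(a[:k])
        if key in buckets:
            a2, b2 = buckets.pop(key)
            out.append(([(x - y) % q for x, y in zip(a, a2)],
                        (b - b2) % q))
        else:
            buckets[key] = (a, b)
    return out

reduced = bkw_step([lwe_sample() for _ in range(200000)], k)
```

Iterating such steps eventually eliminates all coordinates; the talk's modification concerns how much cheaper this becomes when the secret itself is small.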
Enumeration algorithms are the fastest known SVP algorithms in the class of polynomial-space algorithms and the go-to algorithms for solving SVP in practice. For this reason it is important to study their practical and asymptotic complexity for cryptanalytic purposes. Both the practical and the asymptotic complexity depend heavily on the preprocessing performed before the main enumeration step, and this is where most algorithms differ. In this talk I will discuss recent results on the trade-off between preprocessing and enumeration.
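In its simplest form, the idea behind enumeration can be sketched as an exhaustive search over small coefficient vectors (the basis and coefficient bound below are illustrative; practical enumerators walk this search tree coordinate by coordinate and prune it using the Gram-Schmidt lengths of a preprocessed basis).

```python
import itertools, math

def shortest_vector(B, bound=5):
    """Naive enumeration: try every small integer combination of the
    basis rows and keep the shortest nonzero vector. Practical
    enumeration prunes this search tree using the Gram-Schmidt
    orthogonalization of a preprocessed (e.g. LLL- or BKZ-reduced)
    basis, which is exactly where the preprocessing trade-off enters."""
    best, best_norm = None, float("inf")
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=len(B)):
        if not any(coeffs):
            continue
        v = [sum(c * B[i][j] for i, c in enumerate(coeffs))
             for j in range(len(B[0]))]
        norm = math.sqrt(sum(x * x for x in v))
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]   # tiny example basis
v, length = shortest_vector(B)           # finds +-(0, 0, 1)
```

The better the preprocessing (a more reduced basis), the tighter the pruning bounds and the smaller the enumeration tree, which is the trade-off the talk examines.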
Thursday, July 9th, 2015
Non-malleable (NM) codes provide a useful and meaningful security guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of tampering functions F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so-called split-state model. Here the message m is encoded into two shares, and the attacker is allowed to arbitrarily tamper with each share individually.
We give several techniques for building NM codes, focusing on the notion of a *non-malleable reduction*, which allows one to gradually reduce the complexity of the tampering family F until the desired NM code can be built directly. We illustrate this technique by giving several constructions of efficient, multi-bit, information-theoretically-secure NM codes in the split-state model.
No abstract available.
No abstract available.
Friday, July 10th, 2015
Graded multilinear encodings have found extensive applications in cryptography ranging from non-interactive key exchange protocols, to broadcast and attribute-based encryption, and even to software obfuscation. Despite seemingly unlimited applicability, essentially only two candidate constructions were known before this work (GGH13 and CLT13). In this work, we describe a new "graph-induced" multilinear encoding scheme from lattices.
In a graph-induced multilinear encoding scheme the arithmetic operations that are allowed are restricted through an explicitly defined directed graph (somewhat similar to the "asymmetric variant" of previous schemes). Our construction encodes instances of Learning With Errors (LWE) using short square matrices of higher dimensions. Addition and multiplication of the encodings correspond naturally to addition and multiplication of the LWE secrets. Security of the new scheme is not known to follow from LWE hardness (or any other "nice" assumption); at present it requires making new hardness assumptions.
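A noise-free toy version of the graph-induced structure can make the correspondence concrete. The sketch below (my own illustration, not the paper's construction: scalar secrets, 2x2 matrices over a small prime, and a three-node path graph; the real scheme uses trapdoor-sampled short matrices and includes error terms) assigns a matrix to each graph node and encodes a secret s on edge u -> v so that left-multiplying by the source matrix telescopes along a path.

```python
import random

random.seed(7)
p = 10007  # prime modulus (toy size)

def rand_invertible():
    """Random invertible 2x2 matrix mod p."""
    while True:
        M = [[random.randrange(p) for _ in range(2)] for _ in range(2)]
        if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p:
            return M

def inv2(M):
    """Inverse of a 2x2 matrix mod p via the adjugate formula."""
    d = pow((M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p, -1, p)
    return [[ M[1][1] * d % p, -M[0][1] * d % p],
            [-M[1][0] * d % p,  M[0][0] * d % p]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

def scale(c, M):
    return [[c * M[i][j] % p for j in range(2)] for i in range(2)]

# One matrix per graph node; an encoding of a secret s on edge u -> v
# is D = A_u^{-1} * s * A_v, so that A_u * D = s * A_v.
A0, A1, A2 = rand_invertible(), rand_invertible(), rand_invertible()

def encode(sec, Au, Av):
    return matmul(inv2(Au), scale(sec, Av))

s1, s2 = 3, 5
D1 = encode(s1, A0, A1)              # edge 0 -> 1
D2 = encode(s2, A1, A2)              # edge 1 -> 2
prod = matmul(A0, matmul(D1, D2))    # telescopes to (s1 * s2) * A2
```

Multiplying encodings on consecutive edges multiplies the secrets, and adding encodings on the same edge adds them, which is the graph-restricted arithmetic the abstract describes.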
No abstract available.