Principles of Software Security School
|Time||Room||Speaker and Title||Materials|
|Wednesday (May 30th)|
|8:15am - 9am||Willard 367||Breakfast|
|9am - 10:30am||Willard 367||Andrew Myers: Keynote|
|11am - 12:30pm||Willard 367||Hovav Shacham: Return-oriented Programming|
|2pm - 3:30pm||Willard 367||Hovav Shacham: Return-oriented Programming|
|4pm - 5:30pm||IST 220||Hovav Shacham: Lab|
|Thursday (May 31st)|
|8:15am - 9am||Willard 367||Breakfast|
|9am - 10:30am||Willard 367||Vinod Ganapathy: Retrofitting Legacy Code for Security||Slides|
|11am - 12:30pm||IST 220||Vinod Ganapathy: Lab||Results|
|2pm - 3:30pm||Willard 367||Nikhil Swamy: Certified Software Security using Dependent Types||Slides|
|4pm - 5:30pm||IST 220||Nikhil Swamy: Lab||F-Star|
|5:30pm||IST West Atrium (2nd Floor)||Poster Session||List of posters|
|Friday (Jun 1st)|
|8:15am - 9am||Willard 367||Breakfast|
|9am - 10:30am||Willard 367||Gang Tan: Binary-Level Software Security||Slides|
|11am - 12:30pm||IST 220||Gang Tan: Lab||Instructions. Contact Dr. Tan for VM image.|
|2pm - 3:30pm||Willard 162||Phillip Rogaway (joint session): Provably Secure Shared-Key Encryption||Slides|
A fully homomorphic encryption scheme enables computation of arbitrary functions on encrypted data. Long regarded as cryptography's prized "holy grail", such a scheme eluded concrete construction for over three decades. The last three years have witnessed a number of constructions of fully homomorphic encryption involving novel mathematical techniques based on integer lattices, as well as a number of exciting applications. I will give an overview of these developments and provide a glimpse of the research directions that lie ahead.
Vinod Vaikuntanathan is an assistant professor of Computer Science at the University of Toronto. In previous avatars, he was a Researcher at Microsoft Redmond and a Josef Raviv postdoctoral fellow at IBM T.J. Watson. He received his Ph.D. from MIT, where he was a recipient of a George M. Sprowls doctoral dissertation award.
One-way functions are the most basic, unstructured form of cryptographic hardness. Yet in a sequence of celebrated results (mostly in the eighties and early nineties), one-way functions were shown to imply a rich collection of cryptographic schemes and protocols (such as digital signatures and secret-key encryption). At the basis of this beautiful mathematical structure are a few constructions of basic primitives: pseudorandom generators [Hastad-Impagliazzo-Levin-Luby '91], universal one-way hash functions [Naor-Yung '89, Rompel '90], and more recently statistically hiding commitments and statistical zero-knowledge arguments [Haitner-Nguyen-Ong-Reingold-Vadhan '06 & '07]. In all three cases, turning raw hardness into a much more exploitable cryptographic object requires some very elaborate constructions and proofs.
In this talk we will try to hint at how one-way functions naturally contain a basic form of each of these objects. The talk will be influenced by a recent line of results, simplifying and improving all of these constructions. The crux of each new construction is defining the "right" notion of computational entropy and recovering this form of entropy from one-way functions.
Based on several works with (subsets of) Iftach Haitner, Thomas Holenstein, Salil Vadhan and Hoeteck Wee.
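The full constructions from arbitrary one-way functions discussed above are elaborate, but the basic idea of turning hardness into pseudorandomness can be glimpsed in miniature in the much simpler textbook case of a one-way permutation together with a Goldreich-Levin hardcore bit: G(x, r) = f(x) || r || &lt;x, r&gt; stretches its seed by one bit. The sketch below (not from the talk; the modulus, base, and names are illustrative choices) uses modular exponentiation as the candidate one-way permutation.

```python
# Sketch: a one-bit-stretch pseudorandom generator from a candidate
# one-way permutation plus the Goldreich-Levin hardcore bit.
# Illustrative parameters only; a real instantiation needs G_ to
# generate the multiplicative group mod P and P to be large enough.

P = 2**61 - 1   # a Mersenne prime (hypothetical choice)
G_ = 3          # hypothetical base

def f(x: int) -> int:
    # Candidate one-way permutation: discrete exponentiation.
    # Inverting f is the discrete-logarithm problem.
    return pow(G_, x, P)

def inner_product_bit(x: int, r: int) -> int:
    # Goldreich-Levin hardcore bit: <x, r> over GF(2),
    # i.e. the parity of the bitwise AND of x and r.
    return bin(x & r).count("1") % 2

def prg(x: int, r: int) -> tuple:
    # Seed (x, r) of 2n bits -> 2n + 1 output bits:
    # f(x) hides x, r is passed through, and the hardcore bit
    # <x, r> is the single bit of stretch.
    return (f(x), r, inner_product_bit(x, r))
```

The point of the construction is that distinguishing the extra bit from random would let an adversary invert f, so all the pseudorandomness is squeezed out of the one-wayness of f.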
Omer Reingold is a Principal Researcher at Microsoft Research Silicon Valley. His research is in Foundations of Computer Science, mainly in Computational Complexity and the Foundations of Cryptography. Much of his research deals with Randomness, Derandomization and Explicit Combinatorial Constructions.
In 2004 he assumed a faculty position in the computer science department of The Weizmann Institute of Science. During the years 1999-2004, he was a member of AT&T Labs, Florham Park, NJ, and a visiting member of the School of Mathematics, Institute for Advanced Study, Princeton, NJ. Earlier, he completed his PhD and a short period of postdoctoral studies at the Weizmann Institute. His PhD advisor was Moni Naor. During the years 1991-1994, he completed his undergraduate studies at Tel-Aviv University.
The Internet enables concurrent executions of cryptographic protocols. This concurrent setting, however, also brings forth new types of coordinated attacks in which an adversary controls many parties, interleaving the executions of the various protocol instances, and attempts to "maul" messages from one execution to use in another.
In this talk, I will survey some recent methods for achieving concurrent security without relying on any trusted set-up (such as a Common Reference String). A central component in these constructions is the notion of a non-malleable commitment, and the even stronger notion of a CCA-secure commitment.
I will also illustrate how methods developed in the context of concurrent security may be used to demonstrate strong (black-box) separations in cryptography.
Rafael Pass is an Assistant Professor in the Department of Computer Science at Cornell University. He obtained his Ph.D. in 2006 in the Theory of Computation group at MIT, with Silvio Micali as advisor. Previously, he completed his Licentiate Thesis (M.S.) under the supervision of Johan Hastad. His research focuses on cryptography and its interplay with computational complexity and game theory.
When accessing an online service provider, a user must present evidence that she is authorized to do so. For example, she may be authorized to participate in an online game once a day if she has a license to play. On the other hand, users are entitled to privacy, and should we require them to disclose their identities and show their credentials in the clear, their privacy is jeopardized: if a service provider's records are somehow leaked, or aggregated with other service providers' records, a record of the user's activities will be exposed to the world, violating her privacy. It turns out that the two requirements --- the service provider's need to verify that the user is authorized and the user's need to protect her privacy --- do not contradict each other! What is needed here is an anonymous credential system that would allow a user to prove that she is authorized without revealing her identity, and, further, to obtain additional credentials without revealing additional information. It is also desirable to be able to impose a limit on how many times a credential may be shown, and in what context, and to control what happens should a user attempt to use an anonymous credential more times than authorized. In this talk, I will explain the cryptographic tools that allow us to construct such anonymous credential systems.
Anna Lysyanskaya is an Associate Professor of Computer Science at Brown University. She received her Ph.D. in Computer Science and Electrical Engineering from MIT and joined the faculty of Brown in 2002. She is a recipient of an NSF CAREER award and a Sloan Foundation fellowship; she is also one of MIT Technology Review Magazine's "35 under 35" honorees. Her research interests are in cryptography, theoretical computer science, and computer security. A theme of her research is on balancing anonymity with accountability.
Once a tool only for making quite theoretical claims in cryptography, reductions have become the central tool for the design and analysis of practical cryptographic schemes for intermediate-level primitives. This tutorial will explore the reduction-based treatment for blockcipher modes of operation, especially schemes for symmetric (that is, shared-key) encryption. We will explore a variety of formalizations for symmetric encryption, as well as means for provably achieving them, always starting from the assumed existence of a secure blockcipher. While much of what I describe will be fairly classical, I will also include some newer topics, like the FFX scheme for format-preserving encryption and the authenticated-encryption mode OCB3.
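As background for the tutorial's starting point (building symmetric encryption from an assumed-secure blockcipher), here is a minimal sketch of CTR mode, one of the classic modes of operation. The blockcipher is a stand-in: HMAC-SHA256 truncated to one block, a PRF rather than AES; all names and parameters are illustrative, not from the talk.

```python
import hmac
import hashlib

BLOCK = 16  # 128-bit blocks

def toy_blockcipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real blockcipher such as AES: HMAC-SHA256
    # truncated to one block. (A PRF rather than a permutation,
    # which is fine for CTR mode.)
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # CTR mode: keystream block i = E_K(nonce || counter_i); the data
    # is XORed with the keystream, so encryption and decryption are
    # the same operation. The nonce must never repeat under one key.
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        counter = nonce + i.to_bytes(8, "big")   # 8-byte nonce + 8-byte counter
        keystream = toy_blockcipher(key, counter)
        chunk = data[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(chunk, keystream))
    return bytes(out)

key, nonce = b"k" * 16, b"n" * 8
ct = ctr_xcrypt(key, nonce, b"attack at dawn")
assert ctr_xcrypt(key, nonce, ct) == b"attack at dawn"
```

The reduction-based viewpoint of the tutorial would phrase the security of this mode as: any adversary breaking the CPA security of CTR yields an adversary distinguishing the blockcipher from a random function.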
Phil Rogaway is a cryptographer at the University of California, Davis. He did his undergraduate work at UC Berkeley and his Ph.D. at MIT. Rogaway next worked at IBM as a Security Architect, where he became interested in the problem of developing foundations for cryptography that would be useful, and used, for cryptographic practice. In a body of work done in large part with Mihir Bellare, Rogaway co-developed what has been called "practice-oriented provable security." Standardized cryptographic schemes that Rogaway co-invented include CMAC, DHIES, EME, FFX, OAEP, OCB, PSS, UMAC, and XTS. Rogaway is also interested in social and ethical issues surrounding technology, and regularly teaches a class on the subject.
Steve Myers and abhi shelat
Goldwasser and Micali developed the notion of semantic security, and in the process developed the first definition of public-key encryption security that was viewed as sufficient for some applications of modern cryptography, in particular security against passive adversaries. However, it was quickly understood that the definition was insufficient for active adversaries in a number of settings. The notion of Chosen Ciphertext Attack security was developed by Naor and Yung, and extended by Rackoff and Simon to address some issues with adaptive adversaries. Independently, Dolev, Dwork and Naor considered the notion of malleability as it applied to all cryptographic primitives. Bellare and Rogaway introduced the notion of Plaintext Awareness, developing an even stronger security guarantee than Chosen Ciphertext Attack security.
In the last twenty years it has become evident that an even larger variety of security properties are required in different situations, from the key-dependent or circular security necessary for Gentry's FHE schemes, to deterministic security used for fast searching on encrypted data.
Despite the fundamental nature of public-key encryption, there is a shocking lack of understanding of how many of these basic notions of security relate to each other, with perhaps the largest open question being whether or not semantic security implies strong forms of chosen-ciphertext attack security. In our presentation and discussion we will cover known basic relationships between different security notions, approaches used to achieve these security guarantees, and known black-box separations.
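To make the gap between passive and active security concrete, here is a toy illustration (not from the talk; all names and values are made up) of why security against a passive adversary does not rule out malleability: in a stream-cipher-style encryption, an active adversary who can guess part of the plaintext can edit it through the ciphertext without ever knowing the key.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (a stand-in for CTR-mode output):
    # SHA-256(key || nonce || counter), concatenated.
    out = bytearray()
    i = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(out[:length])

def xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream: encryption and decryption coincide.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"secret key", b"nonce"
ct = xcrypt(key, nonce, b"pay $100 to alice")

# An active adversary who guesses the plaintext prefix can XOR a
# difference into the ciphertext -- no key required:
delta = bytes(a ^ b for a, b in zip(b"pay $100", b"pay $999"))
forged = bytes(c ^ d for c, d in zip(ct, delta)) + ct[len(delta):]

assert xcrypt(key, nonce, forged) == b"pay $999 to alice"
```

A decryption oracle (the chosen-ciphertext setting) turns exactly this kind of ciphertext editing into a full break, which is why the stronger definitions surveyed in the talk are needed.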
Steven Myers is an Associate Professor in the School of Informatics and Computing at Indiana University, where he is also a member of the Center for Applied Cybersecurity. His research interests are in all areas of cryptography, and some areas of computer and systems security. He has written papers, led panels, and given invited talks in fields ranging from Cryptography and Computer Security to Distributed Systems and Probabilistic Combinatorics. Recently he co-edited the book 'Phishing & Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft' with Markus Jakobsson (Wiley Press, 2007).
abhi shelat is an assistant professor of Computer Science at the University of Virginia. He was a researcher at the IBM Zurich Research Lab after receiving his PhD from MIT. His research interests include encryption and two-party secure computation.
This lecture focuses on inlined reference monitors (IRM) at the binary-code level. The IRM approach rewrites untrusted binary code so that a reference monitor is inlined into the code to enforce a security property. We will review typical IRMs, including Software-based Fault Isolation (SFI) and Control-Flow Integrity (CFI). We will then learn optimizations and implementation strategies that make IRMs efficient and portable. Finally, we will discuss recent efforts in making IRMs trustworthy by formally proving the correctness of IRM verifiers. During the lab session, students will implement an IRM inside the LLVM compiler.
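The core SFI trick can be shown in a toy model (illustrative only; the constants are hypothetical and Python stands in for rewritten machine code): the rewriter inlines a masking operation before every memory access, so that whatever address the untrusted code computed, the effective address falls inside the sandbox region.

```python
# Toy model of SFI address sandboxing. Before each store, the binary
# rewriter inserts a mask-and-or sequence that forces the effective
# address into the sandbox, with no conditional branch to mispredict.

SANDBOX_BASE = 0x10000   # hypothetical sandbox placement (aligned)
SANDBOX_MASK = 0xFFFF    # sandbox size 2^16, so offsets keep 16 bits

memory = {}              # stand-in for the process address space

def sfi_store(addr: int, value: int) -> None:
    # Inlined sandboxing: keep only the low offset bits, then force
    # the sandbox's high bits. The result is always in
    # [SANDBOX_BASE, SANDBOX_BASE + 0x10000).
    safe = SANDBOX_BASE | (addr & SANDBOX_MASK)
    memory[safe] = value

# Untrusted code tries to write far outside the sandbox...
sfi_store(0xDEADBEEF, 42)

# ...but the store lands inside the sandbox region instead.
assert all(SANDBOX_BASE <= a < SANDBOX_BASE + 0x10000 for a in memory)
```

A separate verifier can then check that every store in the rewritten binary is preceded by this mask, which is the part the lecture discusses proving correct formally.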
Dr. Gang Tan is an assistant professor of Computer Science and Engineering and leads the Security of Software (SOS) Lab at Lehigh University, PA. He received his bachelor's degree from Tsinghua University in 1999 and his Ph.D. degree from Princeton University in 2005. Honors include an NSF CAREER award and a Francis Upton Graduate Fellowship. His research is at the interface between computer security, programming languages, and software engineering. He is especially interested in software security, investigating methodologies that help create reliable and secure software systems.
F* is a verification-oriented programming language developed at Microsoft Research. It follows in the tradition of the ML family of languages in that it is a typed, strict, higher-order programming language. However, its type system is significantly richer than ML's, allowing functional correctness specifications to be stated and checked (semi-)automatically using an automated theorem prover. You can try out programming in F* at http://research.microsoft.com/fstar.
I will present an overview of the F* project, covering the following topics: (1) a brief tutorial on dependent type systems; (2) the core language design of F*; (3) using F* to verify the security of web browser extensions; (4) F* applied to the certification of the F* typechecker itself. I'll also teach a hands-on lab that will involve writing and verifying several small programs using F*.
I am a Researcher in the RiSE group at Microsoft Research, Redmond, having previously obtained my Ph.D. in 2008 from the University of Maryland, College Park. My work covers various topics related to programming languages, including type systems, program logics, functional programming, type-preserving compilation, and interactive theorem proving. I often think about how to use these techniques to build provably secure programs, including compilers, web applications, web browsers, crypto protocol implementations, and low-level systems code. You can find out more about my work from http://research.microsoft.com/~nswamy.
Recent work has reinforced the danger of timing channels, by showing that the timing of network packets can be used to learn private information including private keys, and that coresident programs on the same cloud computing node can quickly learn cryptographic keys by timing cache probes. It is hard to show that a sufficiently clever adversary cannot analyze timing measurements to learn about any information that influences timing.
Our research group has been making progress on this long-standing problem. The key idea is to delay the actions observable to the adversary so that the times at which they occur conform to predictions generated by an algorithm. If this algorithm does not use sensitive information, adversaries learn very little -- they could have made the predictions themselves. Using such a mechanism, we can derive bounds on timing leakage, achieving even asymptotically sublinear leakage under reasonable assumptions. When applied to web applications, predictive mitigation appears to be effective in practice.
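The predict-and-penalize idea can be sketched as follows. This is a heavily simplified model with assumed parameters (initial quantum, doubling penalty), not the actual system: events are released only at predicted slot times, and a missed prediction doubles the slot size, so the observable schedule depends only weakly on the secret-dependent timings.

```python
# A minimal sketch of predictive timing mitigation: observable events
# are delayed to predicted release times. An event that is ready early
# waits for its slot; a miss doubles the prediction interval, so the
# number of distinct observable schedules grows slowly with misses.

def mitigate(event_times):
    """event_times: increasing times (seconds) at which each event
    actually becomes ready inside the system."""
    quantum = 1.0          # initial prediction interval (assumed)
    next_release = quantum
    released = []
    for ready in event_times:
        if ready > next_release:       # prediction missed:
            quantum *= 2.0             # penalize by doubling
            while next_release < ready:
                next_release += quantum
        released.append(next_release)  # delay the output to the slot
        next_release += quantum
    return released

# Secret-dependent internal timings; the adversary observes only the
# predicted slot boundaries.
schedule = mitigate([0.3, 0.4, 2.5])
```

Because the release times are computable without the secret, an adversary observing them learns little beyond how many times the predictor had to be penalized.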
A harder problem is preventing timing leakage to a coresident attacker. How can programmers know when their programs are leaking information through timing, given that language abstractions intentionally hide the low-level features that create timing channels? To enable static reasoning about timing channels, we formalize some assumptions about the underlying language implementation. These assumptions guide the design of both secure hardware and secure programs. Though these assumptions can be satisfied using stock hardware (at a performance penalty), we model custom hardware whose design was guided by these assumptions, and show that it can control timing channels in real programs.
Joint work with Danfeng Zhang and Aslan Askarov.
Andrew Myers is a Professor in the Computer Science Department at Cornell University in Ithaca, NY. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1999.
His research interests include computer security, programming languages, and distributed and persistent objects. His work on computer security has focused on practical, sound, expressive languages and systems for enforcing information security. The Jif programming language makes it possible to write programs which the compiler ensures are secure. The Polyglot extensible compiler framework is now widely used for programming language research.
Myers is the recipient of an Alfred P. Sloan Research Fellowship, a George M. Sprowls award for outstanding MIT Ph.D. thesis, the Cornell Provost's Award for Distinguished Scholarship, the SIGPLAN 2009 Most Influential POPL Paper award for a paper in POPL 1999, and the Best Paper Award for papers in SOSP 2001 and SOSP 2007.
Myers is currently on the editorial boards of ACM Transactions on Computer Systems and the Journal of Computer Security. He has served on the editorial board of ACM Transactions on Information and System Security. He has served on more than 30 other conference program committees, and has chaired those of the IEEE Security and Privacy Symposium and the IEEE Symposium on Computer Security Foundations.
Research in computer security has historically advocated Design for Security, the principle that security must be proactively integrated into the design of a system. While examples exist in the research literature of systems that have been designed for security, there are few examples of such systems deployed in the real world. Economic and practical considerations force developers to abandon security and focus instead on functionality and performance, which are more tangible than security. As a result, large bodies of legacy code often have inadequate security mechanisms. Security mechanisms are added to legacy code on-demand using ad hoc and manual techniques, and the resulting systems are often insecure.
In this talk, I will present my past research and recent advances in program analysis techniques to retrofit software with security mechanisms. I will discuss work on retrofitting legacy software with authorization policy enforcement mechanisms (i.e., reference monitors) as well as work on weaving decentralized information flow control (DIFC) checks in legacy code.
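In miniature, retrofitting a reference monitor means interposing an authorization check at each security-sensitive operation of existing code. The toy sketch below (all names hypothetical; the real work discussed in the talk uses program analysis to locate the sensitive operations and placement points) guards a legacy function without rewriting its body.

```python
# Toy sketch: retrofit an authorization check (a reference monitor)
# onto legacy code by wrapping its security-sensitive entry point.

ALLOWED = {("alice", "read_record")}   # hypothetical policy store

class AuthorizationError(Exception):
    pass

def monitored(operation):
    # Decorator playing the role of the inlined reference monitor:
    # consult the policy before letting the legacy code run.
    def wrap(fn):
        def guarded(user, *args, **kwargs):
            if (user, operation) not in ALLOWED:
                raise AuthorizationError(f"{user} may not {operation}")
            return fn(user, *args, **kwargs)
        return guarded
    return wrap

# Legacy code, now guarded at its security-sensitive entry point:
@monitored("read_record")
def read_record(user, record_id):
    return f"record {record_id}"

assert read_record("alice", 7) == "record 7"
```

The hard part the talk addresses is doing this placement automatically and completely over a large legacy codebase, rather than by hand as above.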
Vinod Ganapathy is an Assistant Professor of Computer Science at Rutgers University. He obtained his Ph.D. in computer science in 2007 from the University of Wisconsin-Madison. His research focuses on software and operating system security, security and privacy of mobile devices, and Web application and browser security.
We describe return-oriented programming, a generalization of return-into-libc that allows an attacker to undertake arbitrary, Turing-complete computation without injecting code.
New computations are constructed by linking together code snippets that end with a "ret" instruction. The ret instructions allow an attacker who controls the stack to chain instruction sequences together. Because the executed code is stored in memory marked executable, W^X and DEP will not prevent it from running.
W^X and DEP, along with many other security systems, make the assumption that preventing the introduction of malicious code is sufficient to prevent the introduction of malicious computation. With the return-oriented computing approach, this assumption is false: subverting control flow on the stack is sufficient to construct arbitrary computation from "known-good" code.
On the x86 one can obtain useful instruction sequences by jumping into the middle of intended instructions, but return-oriented programming is possible even on RISC platforms that are very different from the x86.
We describe how instruction sequence finding, gadget finding, and exploit payload compilation can be automated. We argue that return-oriented programming is a usable, powerful (Turing-complete), generally applicable threat to systems assumed to be protected by code-injection defenses.
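The chaining mechanism can be illustrated with a toy model (illustrative only, not from the lecture; addresses and gadget semantics are invented): gadgets are pre-existing snippets ending in ret, the attacker supplies only a stack of addresses and data, and each ret pops the next gadget address, so the stack contents program the machine.

```python
# Toy model of return-oriented programming. The attacker injects no
# code: only a stack of gadget addresses (plus data operands). The
# ret at the end of each gadget pops the next address, chaining the
# "known-good" snippets into a new computation.

# Hypothetical gadgets already present in the program's code memory:
def gadget_pop_acc(state):      # models: pop rax; ret
    state["acc"] = state["stack"].pop()

def gadget_add5(state):         # models: add rax, 5; ret
    state["acc"] += 5

def gadget_double(state):       # models: shl rax, 1; ret
    state["acc"] *= 2

code_memory = {0x400100: gadget_pop_acc,
               0x400200: gadget_add5,
               0x400300: gadget_double}

def run(state):
    # The ret loop: pop an address from the attacker-controlled
    # stack and "return" into that gadget.
    while state["stack"]:
        addr = state["stack"].pop()
        code_memory[addr](state)

# Attacker-controlled stack; entries are popped from the end of the
# list. The chain computes (7 + 5) * 2 from existing code only.
state = {"acc": 0,
         "stack": [0x400300, 0x400200, 7, 0x400100]}
run(state)
```

Real chains work the same way, except the "gadgets" are byte sequences found in libc and the "interpreter" is the CPU's own ret instruction.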
Hovav Shacham joined UC San Diego's Department of Computer Science and Engineering in Fall 2007.
Shacham received his Ph.D. in computer science in 2005 from Stanford University, where he had also earned, in 2000, an A.B. in English. His Ph.D. advisor was Dan Boneh. In 2006 and 2007, he was a Koshland Scholars Program postdoctoral fellow at the Weizmann Institute of Science, hosted by Moni Naor.
Shacham's research interests are in applied cryptography, systems security, and tech policy.
In 2007, Shacham participated in California Secretary of State Debra Bowen's "Top-to-Bottom" review of the voting machines certified for use in California. He was a member of the team reviewing Hart InterCivic source code; the report he co-authored was cited by the Secretary in her decision to withdraw approval from Hart voting machines.