Last week, Dan Boneh and I hosted a security workshop with a mix of thought leaders from both academia and industry. Dan is a well-known Stanford professor of Computer Science who specializes in security and cryptography. At this workshop we brought together researchers and practitioners working on web application security. The discussions were about recent trends in secure web application design, common vulnerabilities in existing systems, and upcoming security architectures for the web.
One of the recurring problems discussed during the workshop was the overwhelming number of false positives, which results in triage fatigue among security operators.
For example, suppose you have a traditional web application firewall (WAF), which tries to distinguish a “good” request from a “bad” request based solely on the traffic received by the web server. Given two nearly identical requests, determining which is legitimate and which may be an attack is close to impossible. In some cases you can spot a tell: for instance, a request whose “Referer” header is not misspelled (that is, it is correctly spelled “Referrer”) was not generated by a legitimate browser and is indicative of an attack.
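To make the tell concrete, here is a minimal sketch of that kind of heuristic. It is a toy illustration, not a real WAF rule: legitimate browsers send the historically misspelled “Referer” header, so a correctly spelled “Referrer” header suggests the request was crafted by something other than a browser. The function name and the sample requests are hypothetical.

```python
def looks_suspicious(headers):
    """Flag requests that carry a correctly spelled 'Referrer' header,
    which real browsers never send (they use the misspelled 'Referer')."""
    names = {name.lower() for name in headers}
    return "referrer" in names and "referer" not in names

# A request as a typical browser would send it.
browser_request = {"Host": "example.com", "Referer": "https://example.com/"}
# A request hand-crafted by an attacker's tool that spells the header correctly.
crafted_request = {"Host": "example.com", "Referrer": "https://example.com/"}

print(looks_suspicious(browser_request))  # False
print(looks_suspicious(crafted_request))  # True
```

A single signal like this is exactly the kind of brittle, server-side-only heuristic the rest of this post argues against: it catches one class of crafted request while saying nothing about the many attacks that mimic browser headers perfectly.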
The problem I focused on during my talk was the false positives from security tools that look only at the web server, only at the database, or only at the client. Since they lack the complete context to classify a request, they are forced to rely on heuristics. No matter how good those heuristics are, they are either overly conservative, missing malicious requests, or overly aggressive, generating the false positives that developers and ops hate so much. The approach we have been experimenting with leverages DOM virtualization to mitigate this problem: it augments the server-side WAF with client-side information, making detection more precise and less prone to false positives.
Kunal Anand, Co-Founder & CTO of Prevoty, tackled the context problem from a different direction and introduced the participants to language-based security. His approach builds parsers for different types of data to enforce security at the application level rather than the browser level, increasing the amount of available context and, as a result, reducing false positives.
Dan Boneh focused on Stickler, a system he and his team have built that lets end users verify the end-to-end authenticity of web content while still reaping the caching benefits that CDNs provide. The research explores what kind of integrity guarantees can be made without modifying the browser.
Deian Stefan, Assistant Professor of CSE at UC San Diego, on the other hand, explored the security properties that modifying or augmenting browsers may give us. He described the security extensions to the web specification that the W3C is working on: new standards like HTTP Strict Transport Security for enforcing HTTPS, Content Security Policy for controlling content on the page, and a label-based confinement system for the web. These proposals are at different stages of standardization and promise to make capturing browser state and enforcing security easier and more expressive.
Parisa Tabriz, Security Princess at Google Chrome, made an extended metaphor comparing human health with Chrome health. One of the things I loved about her analogy was the opportunity to reflect on the many different proxies we use to evaluate the quality of a software project – from hard metrics to symptoms more akin to "aches, pains and a general feeling of malaise."
There were three more reflective talks on how security threats have evolved and how companies have evolved in response:
Michael Stoppelman, SVP of Engineering at Yelp, spoke about the many challenges Yelp took on as it grew – moving from reactive to proactive in tackling everything from XSS to denial of service. A recurring theme of the day, which Michael was the first to raise, was the challenge of building a security team that balances “builders” and “breakers.”
Upendra Mardikar, VP of Security Strategy Architecture and Engineering at American Express, gave a deep look into how web applications have changed in the financial industry and the impact and value that compliance programs have on securing financial systems.
Neil Daswani, CISO of Lifelock, took a step back from compliance and gave a great overview of how metrics can help give a sense of an organization’s security posture. While there are pitfalls in the metrics that we have today, he argued that an essential part of securing an organization is to track progress and improvement over time – something which metrics are able to provide.
One of the more provocative talks at the workshop was given by my colleague, Parvez Ahammad, who is Instart Logic's Head of Data Science and Machine Learning. He gave a detailed history of machine learning and security and in particular tackled the skepticism that many security experts have about it (shared by several members of the panel discussions). There are many different types of machine learning systems which come into and out of vogue. Overall, though, machine learning algorithms follow the “No Free Lunch” theorem, which states that there is no one model that works best for every problem. Parvez argued that many of the failures that machine learning algorithms have suffered in security are rooted in the idea that machine learning can be treated as a black box into which data is directed, only to have security or anomalies magically emerge.
There were also two very lively panel discussions. One was led by Michael Abbott, General Partner at Kleiner Perkins Caufield & Byers, with Ganesh Krishnan, Head of Security and Identity at Atlassian; Gene Golovinsky, Director of Security R&D at Intuit; and Diogo Mónica, Security Lead at Docker. The other featured Collin Greene, Senior Security Engineering Manager at Uber; Hemant Raju, Director of Engineering, Application Security & Security Architecture at Walmart; and Bryan Payne, Engineering Manager of Product and Application Security at Netflix. The panelists covered everything from choosing good security vendors, to the shortage of good security developers, to the biggest mistakes and the most exciting future developments in the field. I cannot do justice to the information nuggets, one-liners and repartee that were shared – which is part of the reason we will be sharing videos of the panels and all the other talks over the next few weeks!
Overall, it was fantastic to have such a diverse group of security enthusiasts as participants, on the panels and in the audience. I’m especially thankful to Stanford and to Dan Boneh for giving us the chance to host such a thought-provoking and stimulating workshop. If you missed it, you can view the on-demand workshop sessions now.