How the Twitter product security team does automation and where we're going. All tools in the presentation were built on open source technology and will be open sourced over time.
28. Put your robots to work!
Code committed → Run static analysis tools / Run dynamic tools → Gather reports → Issue notifications
#appsecusa #sadb
@nilematotle | @alsmola | @presidentbeef
29. Put your robots to work!
Code committed → Run static analysis tools / Run dynamic tools → Gather reports → Issue notifications
Automate dumb work
30. After automation
31. Jenkins CI
37. Brakeman can run anytime
Write Code → Run Tests → Commit Code → Push to CI → Code Review → QA → Deploy Code
44. Brakeman can run anytime
Write Code → Save Code → Run Tests → Commit Code → Push to CI → Code Review → QA → Deploy Code
45. Brakeman can run anytime
Find bugs as quickly as possible
Write Code → Save Code → Run Tests → Commit Code → Push to CI → Code Review → QA → Deploy Code
51. Mesos + Brakeman
Code Repository → SADB → Send Email → Developer
Get the right information to the right people
67. What does it look for?
71. What does it look for?
Mixed-content
Sensitive forms posting over HTTP
Old, vulnerable versions of jQuery
Forms without authenticity tokens
95. Our journey thus far
Manual tasks → Automated tasks
Low visibility → Trends and reports
Late problem discovery → Automatic notifications
96. Tools in this presentation
Editor's notes
Hello AppSec USA. My name is Alex Smolen, this is Neil Matatall and this is Justin Collins. We're on Twitter's Product Security team and today we're going to talk to you about security automation at Twitter.\n
We want to talk about the future, and the direction that we're taking as a team to solve tomorrow's application security challenges. We're going to show some cool tech we've been working on, and talk about what we do, what we don't do, and why.\n
But before we do that, I want to walk through a little bit of Twitter's seven year history.\n
Twitter isn't a particularly old company, but it is a company that's changed a lot. This is our first logo from 2006.\n
Twitter grew up very quickly and publicly, and had a lot of infrastructure challenges. [Fail Whale?]\n
One of those challenges was security. Of the several high profile account compromises at Twitter, this is perhaps the most notorious, where an attacker was able to compromise the president's Twitter account through an exposed administrative interface. This one got some very serious recognition.\n
From our pals at the FTC, who formally ordered an effective information security program at Twitter for the next 20 years.\n
We all joined the team after the FTC order. Our first challenge was dealing with a large and rapidly changing code base that was under constant attack. A simple XSS attack could lead to an XSS worm, which was a big problem from the FTC's perspective, and from ours.\n
With the help of whitehats, we tracked down and fixed a lot of these bugs.\n
Whereas working security at Twitter used to involve a lot of emergencies, we've reached a point where deployed vulnerabilities are much rarer, and that's given us an opportunity to think about what we should be doing. As part of a growing engineering team and a proliferating code base, we've started to think more strategically about how to be more efficient. So during our last hack week, our team built the 1.0 of our security automation framework.\n
Before we started coding, we wanted to describe the worldview we have, as a team, around security tools and automation, and use that to drive what we built. As this audience probably knows, there are a lot of tools, methodologies, and activities related to application security. Our philosophy, and the tools we've built or integrated to support it, can really be distilled to a few principles.\n
The first is that we believe writing secure code is not just a technical challenge, but also a social one, and tools should be built to support and enhance existing social processes. Unless it's one person writing, analyzing, and shipping code, communicating about vulnerabilities is just as important as finding them. And effective communication is really hard. We're not talking about emailing a huge report of maybe-bugs to a project manager. We're talking about delivering all of the necessary information to diagnose and fix a vulnerability in a simple and user-centered view.\n
The next principle is about finding and fixing things as quickly as possible. It's not a new idea, but as a guiding principle it leads you to be ruthless about bottlenecks, latencies, and root causes.\n
For a while, we were dealing with the same types of bugs over and over and over. Once, while on call, I had a group of people decide to get themselves on the whitehat page by finding XSS in all of the sites of companies we had acquired. Let's just say I didn't get a lot of sleep that weekend. We've now introduced much more comprehensive security reviews for acquired companies. In our experience, the best predictor of the next bug is the last bug. So that's where we focus our effort.\n
There are a lot of ways to find security problems, and you get diminishing returns from each. We have tools that live on our servers, tools that live outside our servers, and tools that live in our users' browsers, all meant to catch different types of issues.\n
Security automation results aren't entirely accurate. We want the fantastic engineers we work with to trust us, and so we want to make sure that they have a voice in the process.\n
Most people want to do the right thing. We want to make it easy for them.\n
We shouldn't be doing anything that doesn't require creativity or judgment.\n
While we've had some success with third party analysis and management tools, we've found that it's typically better to build our own. We know what we need to look for, and we know how our organization works. By doing only the things that are applicable to our technology, culture, and workflow, we waste less time overall.\n
So we try to follow these philosophies when we approach using and implementing tools. Automating security does not just mean using automated tools for specific tasks.\n
We have these manual tasks we need to perform as part of our security program. We need to review code as it is developed. We need to do penetration testing by poking around on our websites. And then we rely on whitehats to find problems and hopefully report them to us, rather than letting the world know.\nMany of our security tasks can be partially replaced with automated tools.\n
For example, we can use static analysis to check for common coding problems, dynamic analysis for obvious problems on websites, and maybe CSP to get XSS reports to us sooner\n
But the workflow is still manual! Someone from the security team runs the tools, waits for results, then needs to determine the validity of reports, and then work to get fixes in place. Like Alex said, we need to replace the dumb work with automation.\n
And we have to do it over and over for new code and new projects. Even using tools, we are still operating in a manual workflow.\n
We need to put our robots to work! Replace the manual workflow with one that runs the tools for you, then only requires your attention when a problem is found. For static analysis, we want tools to be run automatically when code is committed. For dynamic tools, we want them to always be crawling our sites and looking for problems. The reports from these tools should go to a central location, which only alerts us when potential problems are found.\n
Once we have an automated workflow, we are happier and more relaxed. Fewer repetitive tasks means we can focus more attention on jobs that require creativity and deeper investigation.\n
Our original approach to solving the automation problem centered around Jenkins CI, an open source continuous integration server. This worked okay at first for running static analysis tools, but we needed a solution that would work with dynamic tools and we found the notification system did not fit our workflow.\n
So we have been working on our automation solution called SADB, a central service to handle all of our automated tools and reports.\nThis serves not only as a dashboard for the security team, but also handles notifying and informing developers.\n\n[old notes below]\n\nOriginally a static analysis dashboard (S A D B)\nIncorporated more results; loved calling it "SADB", so the name stuck\n\nRails.\n\nMost people use Brakeman with Jenkins. This has a few issues. One blocker for us: a line change would trigger a new-warning and fixed-warning alert every time. Some alert only on deltas to help reduce the noise, but that potentially hides vulnerabilities.\n\nCame out of the need to manage all of the various data points we have around code security. Similar to ThreadFix. Also gave us a high-level overview that Jenkins couldn't give us.\n\nRelied heavily on Jenkins\nScraped images\nPosted results to SADB\nAlso received data from scans during deploys\nCompletely informational\nFailed to have any meaning to developers\n\nWasn't meant to be user facing. Really just to help the team manage issues.\n\nWe wanted to manage phantom-gang findings as well, so we started posting results to SADB and used ActiveAdmin to give us a simple GUI to create Jira issues and a more digestible format for all of our findings. While developers would see the tickets that resulted from SADB management, we had made no progress in making it useful for someone outside our team.\n\nThen came #hackweek. We thought about it from the developer's standpoint. Came up with a few stories. Built a wicked awesome Mesos-based continuous-integration-like system.\n
SADB is our central database of reports, and can handle input from a variety of sources, which we will describe a little later. This includes static analysis reports from Brakeman, dynamic analysis reports from Phantom Gang, CSP reports directly from browsers, and our internal code review tracking.\nSADB can then send out notifications as needed to developers and the security team.\nBecause this is a custom tool, we can more easily adapt it to take input from anywhere, and make sure the logic matches what we need for our workflow.\n
Brakeman is an open source, static analysis security tool for Ruby on Rails applications. "Zero configuration"\nDetects the usual suspects: SQLi, XSS, command injection, open redirects. Also Rails-specific issues: mass assignment, model validation, default routes, CVEs. And more.\n
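To make the SQL injection class concrete, here is the pattern behind such warnings, sketched as invented snippets (not from any real app, and no database needed): interpolation lets user input alter the query's structure, while parameterization keeps it as data.

```ruby
# Why string-interpolated SQL gets flagged (illustrative only):

def unsafe_where(name)
  # User input is spliced into the SQL text itself, so an attacker
  # controls the structure of the query.
  "SELECT * FROM users WHERE name = '#{name}'"
end

def safe_where(name)
  # Parameterized form: the value travels separately and stays data.
  ["SELECT * FROM users WHERE name = ?", name]
end
```

A payload like `x' OR '1'='1` survives intact in the first form but is just an ordinary string value in the second.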
Brakeman was presented at last year's AppSecUSA. It has since gone from version 0.8 to 1.8.2.\nLots of improvements over the past year. Some of these improvements have come from specific use cases discovered at Twitter, either from false positives or false negatives discovered in our own apps.\n
Brakeman can be run anytime!\nAfter deploys (but why? too late.)\nAs part of QA or part of code reviews\nIntegrate into CI (Jenkins or custom)\nAs a commit hook?\nAs part of tests - rake brakeman:run\nWhat about as code is saved - with file system monitoring\n
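One of those hook points, sketched: a hypothetical git pre-commit hook in Ruby that shells out to the `brakeman` executable and blocks the commit on a non-zero exit (Brakeman exits non-zero when warnings are found; exact flags vary by version, so check `brakeman --help`).

```ruby
# Hypothetical .git/hooks/pre-commit: run Brakeman, abort the commit if
# the scan fails. Assumes the `brakeman` executable is on PATH.

def scan_passed?(cmd)
  system(*cmd)  # true when the command exits with status 0, false otherwise
end

BRAKEMAN_CMD = %w[brakeman -q]  # -q: suppress informational output

# In the actual hook file you would finish with:
#   exit(scan_passed?(BRAKEMAN_CMD) ? 0 : 1)
```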
Developer pushes code to the central repo\nA Mesos job pulls the latest code and scans each commit\nEach scan is reported to SADB\nNotifications sent on new/fixed warnings\n
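The "new/fixed warnings" step can be sketched as a diff by fingerprint. These names are illustrative, not the real SADB code: each warning is reduced to a hash of its stable parts, and two consecutive scans are compared.

```ruby
require 'digest'

# Reduce a warning to a stable fingerprint so line-number churn alone
# doesn't register as a brand-new finding.
def fingerprint(warning)
  Digest::SHA1.hexdigest([warning[:type], warning[:file], warning[:message]].join("|"))
end

# Compare two consecutive scans of the same app.
def diff_scans(previous, current)
  old_fps = previous.map { |w| fingerprint(w) }
  new_fps = current.map  { |w| fingerprint(w) }
  {
    new_warnings:   current.reject  { |w| old_fps.include?(fingerprint(w)) },
    fixed_warnings: previous.reject { |w| new_fps.include?(fingerprint(w)) }
  }
end
```

Notifications then go out only when `new_warnings` or `fixed_warnings` is non-empty, which avoids the re-alert-on-every-commit problem described earlier.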
Because SADB collects reports per commit, we are able to track detailed history and trends for each application.\n
The initial large drops in warnings date from when Brakeman started getting used more heavily at Twitter.\n
Spikes in the graph tend to indicate Brakeman releases, not developers introducing new problems\n
SADB allows developers or security people to drill down into reports\n
The warning details are designed first for developers following an email link, then for the security team\n
Because we scan each commit, we are able to pinpoint just how long a warning has been around.\n
We provide a link directly to the file and line number.\n
The warning also includes a snippet of the code that raised the warning, as Brakeman interprets it: Brakeman resolves variables to their assignments, and even condenses constant expressions (e.g., 1 + 1 + 1 is shown as 3).\n
We included inline documentation about the potential vulnerability as it relates to Rails.\n
Each warning has a button that allows developers to directly tell the security team that the warning is bogus.\n
We want to integrate this into our deployment more tightly so that people can't ship code without fixing warnings.\nWe're also working on static analysis tools to cover JS and Scala, especially for our internal web frameworks.\nPerhaps SADB could also trigger Decider changes to disable a given feature.\n
Phantom-gang is a tool that complements our static analysis and manual efforts by scanning live web pages.\n\nIt was created to detect a few issues we were seeing over and over again.\n
These are often issues that might go undetected unless an attentive person reports them, but they are very easy to detect on a live web page. In order to hunt them down with some tenacity, we needed to create a tool to look for them.\n\nMixed content can cause a variety of issues, the main one being that you lose the guarantee that the content is coming from who you expect it to come from. Attackers can inject content, sniff cookies, etc.\n\n
In addition to traditional dynamic scanning (XSS, SQLi), we wanted to employ a tool tailored to the problems we are seeing rather than what the industry is focusing on.\n\nOur sites make heavy use of AJAX, which most "curl on 'roids"-based scanners cannot handle. We experience the site (almost) exactly as every user does.\n\nPhantom-gang is a dynamic analysis tool for finding common issues that can be detected easily in a browser environment. This is an "always on" tool that is constantly crawling our properties.\n\nFor common classes of mistakes, we create phantom-gang rules that might eventually turn into a regression framework.\n
Phantom-gang is a collection of Node processes that spin up PhantomJS instances (hence the name). PhantomJS is a headless WebKit browser that is driven by JavaScript. This allows us to simulate what the user would experience, with full JavaScript support. Given a browser environment, it's really easy to test for the problems previously listed.\n\nPhantom-gang sends reports of what it finds to SADB. The management of these issues is not automatic like Brakeman warnings; I'll get into that more in a bit.\n\nFrom SADB, we can create a Jira (our issue-tracking software) ticket for the owners to fix.\n
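To make the checks concrete: the real tool drives PhantomJS from Node, but the same class of check can be sketched in a few lines of Ruby over the rendered HTML of an HTTPS page. This is a simplification; the point of the headless browser is that the real checks run after JavaScript has executed.

```ruby
# Simplified versions of two phantom-gang-style checks, applied to the
# HTML of a page served over HTTPS (illustrative, not the real rules).

def mixed_content?(html)
  # Any sub-resource loaded via a plain-HTTP src on an HTTPS page.
  html.scan(/\bsrc\s*=\s*["']http:\/\//i).any?
end

def insecure_form_posts(html)
  # Forms whose action is an explicit http:// URL submit data in the clear.
  html.scan(/<form[^>]+action\s*=\s*["'](http:\/\/[^"']+)["']/i).flatten
end
```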
Because ownership and causality of phantom-gang reports are a bit difficult, we don't always know who to deliver the email to.\n\nA given host could have X number of applications with Y number of routing schemes. This makes it hard to notify the right person, as well as to determine duplicate reports.\n\nAn issue could easily disappear and reappear as code is deployed. Live sites have much greater entropy than static code :P\n\nWe don't want to spam the wrong person over an issue, so at the moment we are manually managing such issues. So we built a management screen (based on ActiveAdmin, for anyone familiar) to help triage issues. In the future, we hope to be alerted to trends that indicate a problem, and to come up with a system that automatically creates alerts based on trends.\n\nNoisy; tough automated dedup logic\nProblem may disappear and reappear frequently\nOwnership is tough (hostnames don't map 1-1 to projects)\n
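One narrow piece of the dedup problem can still be sketched: collapsing volatile URL parts into a stable finding key so repeat reports of the same issue on the same route fold together. The names here are invented for illustration; the genuinely hard part, routing-aware ownership mapping, is not shown.

```ruby
require 'uri'

# Illustrative dedup key: drop the query string and replace numeric path
# segments with a placeholder, so /users/123 and /users/456 dedupe together.
def finding_key(issue_type, url)
  uri  = URI.parse(url)
  path = uri.path.split("/").map { |seg| seg =~ /\A\d+\z/ ? ":id" : seg }.join("/")
  [issue_type, uri.host, path].join("|")
end
```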
- Open sourcing\n- Might be a springboard for JS analysis; it's useful because we have resolved JS dependencies and the full JavaScript as assembled by asset packaging.\n- Could incorporate Etsy-style XSS testing\n- Integrate with a CSP policy "extractor"\n- Servicify: allow people to request a scan for a given site/page\n
Content Security Policy defines what can "run" on a page, and any deviation creates an alert. Twitter was an early adopter.\n\nWe saw that this could not only potentially protect our users, but also give us a large number of data points about what the user is experiencing. We have used CSP to help detect XSS and mixed content by leveraging the reports sent to us by users' browsers. This complements the static and dynamic analysis provided by Brakeman and phantom-gang in a unique way, as we are receiving information from the user.\n\nWe send the CSP reports to a central Scribe host (Scribe: a massively scalable endpoint to collect and aggregate large amounts of data), which writes to the Hadoop file system, which we can run "big data" reports against using Pig/Scalding. We send this information to SADB, where we can search and sort more easily.\n
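A minimal sketch of the triage step, assuming the standard CSP violation report JSON that browsers POST to a `report-uri` endpoint (the field names follow the CSP report format; the classification heuristics are the ones described in these notes, and the function name is invented):

```ruby
require 'json'

# Classify a single CSP violation report into a rough triage bucket.
def classify_csp_report(json)
  report    = JSON.parse(json).fetch("csp-report")
  directive = report["violated-directive"].to_s
  blocked   = report["blocked-uri"].to_s

  if directive.start_with?("img-src") && blocked.start_with?("http:")
    :likely_mixed_content     # plain-HTTP image on an HTTPS page
  elsif directive.start_with?("script-src")
    :possible_xss             # unexpected script source: investigate
  else
    :needs_review
  end
end
```

In practice a filtering pass comes first, since browser extensions and compromised clients generate plenty of noise.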
\nTake your CSP reports and turn them into something actionable, but tune down the noise. Initially we were getting all kinds of false positives from Chrome extensions, compromised systems, etc.\nSee a lot of img-src violations with "http"?\nYou likely have a mixed content problem.\nSee a lot of script-src violations?\nYou could be under attack. Users on other browsers don't get protection.\n\n
A report from one of our wonderful whitehat reporters gave us a drop of happiness when he said that a successful XSS attempt had been thwarted by CSP.\nTRANSITION: we took stock of what headers were implemented on our properties, and we were not satisfied. They were applied inconsistently, by a variety of one-off methods, and they are often difficult to tune even if you are very familiar with them.\n
While this doesn't exactly fit the theme of a central place to see information, the application of a consistent CSP header led to the creation of a library to apply the rest of the headers.\n\nHSTS ensures that a given page will only be loaded over SSL, which is handled by the browser. HSTS is unusual among headers in that performance and security are on the same side: you save a round trip/redirect.\n\nHSTS basically tells a browser to only load a site over SSL once the header is set (usually for a long period of time). This helps mitigate SSLStrip and Firesheep attacks.\n\nThis not only protects our users, but gives us justification to enforce SSL on previously non-SSL'd things.\n\nBecause we created a library, it's easy to get teams to use the headers.\n
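As a sketch of what such a library applies on every response (values here are illustrative defaults, not Twitter's production policy), the header set might look like:

```ruby
# Illustrative security headers a Rails before_filter or Rack middleware
# could merge into every response. Values are examples only.
def default_security_headers
  {
    "Strict-Transport-Security" => "max-age=631138519",  # ~20 years; HSTS
    "X-Frame-Options"           => "SAMEORIGIN",         # clickjacking mitigation
    "X-XSS-Protection"          => "1; mode=block",      # legacy IE/WebKit XSS filter
    "X-Content-Type-Options"    => "nosniff",            # disable IE MIME sniffing
    "Content-Security-Policy-Report-Only" =>
      "default-src 'self'; report-uri /csp_report"       # report, don't block yet
  }
end
```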
Twitter has had clickjacking problems in the past. While X-Frame-Options does not solve all clickjacking issues, it does solve a very common case and is generally a very quick win that is easy to integrate.\n
Yeah, there are some IE specific headers too. I assume they are useful.\n
\nGiven that browsers give us some baked-in security, and these headers take a relatively small amount of effort to implement, why aren't they more common? It's a non-intrusive, easily configured way of ensuring that all requests get the necessary headers applied.\nWe created a gem for Rails applications, and we intend to apply the same logic to our other frameworks as well.\n
A couple more small things we built. First, there's Threatdeck.\n
One of our teammates had built out a set of TweetDeck columns with terms like "Twitter XSS", "Twitter SSL", and "script alert". In the past, people had tweeted about vulnerabilities using these terms, which is not exactly responsible disclosure, but using these columns, he would find out about it quickly. We liked the idea so much we built out "ThreatDeck", which anyone in the company can monitor, and which has a cool radar animated gif and ASCII art.\n
Finally, there's Roshambo, and this one's kind of funny.\n
In the past, people were constantly shipping code, and we simply didn't have the visibility we needed to review the important stuff. So we started using a mechanism to alert us if changes happen to critical code paths, which automatically adds us to a code review. The problem then became that we had a bunch of code reviews lined up, but sometimes they wouldn't get reviewed. Someone would have to manually collect them and review them... but who?\n
Our team staged a roshambo (rock-paper-scissors) tournament every week, and the loser would have to collect and review all of the leftover code changes. And while this was great for team morale, we realized we could use automation to collect the unreviewed changes and report them to SADB. We still have a roshambo tournament to determine who reviews them.\n