Open source reduces development costs, frees internal developers to work on higher-order tasks, and accelerates time to market. Quite simply, open source is the way applications are developed today. Mike Pittenger addresses security in the age of open source in this presentation.
1. SECURITY IN THE AGE
OF OPEN SOURCE
Mike Pittenger
VP, Security Strategy
2. Open Source Changed the Way Applications are Built
1998: 10% open source · 2005: 20% open source · 2010: 50% open source · Today: up to 90% open source
[Chart: growing share of open source vs. custom & commercial code in applications]
Open source is the modern architecture
3. Why Use Open Source?
Open source adds tremendous value:
• Needed functionality without acquisition costs
• Faster time to market
• Lower development costs
• Support from broad communities
[Chart: composition of software tested by Black Duck On-Demand, open source vs. custom code]
4. Consequences Can Be Costly When
You Can’t Control What You Can’t See
• Heartbleed (OpenSSL): introduced 2011, discovered 2014
• Ghost (GNU C Library): introduced 2000, discovered 2015
• Venom (QEMU): introduced 2004, discovered 2015
• Shellshock (Bash): introduced 1989, discovered 2014
• FREAK (OpenSSL): introduced 1990s, discovered 2015
5. Why Aren’t We Finding These in Testing?
• Static analysis
• Testing of source code or binaries for unknown security vulnerabilities in custom code
• Advantages in buffer overflow, some types of SQL injection
• Provides results in source code
• Dynamic analysis
• Testing of compiled application in a staging environment to detect unknown security
vulnerabilities in custom code
• Advantages in injection errors, XSS
• Provides results by URL, must be traced to source
What’s Missing?
6. There Are No Silver Bullets
• Automated testing finds common vulnerabilities in the code you write
• They are good, not perfect
• Different tools work better on different classes of bugs
• Many types of bugs are undetectable except by trained security researchers
[Venn diagram: all possible security vulnerabilities, with subsets identifiable by static analysis and by dynamic analysis; FREAK lies outside both]
7. What Do Security Testing Tools Miss?
• Static Analysis Tools and Dynamic Analysis Tools can be very effective in finding
bugs in the code written by internal developers.
• HOWEVER…
• They are ineffective in finding known vulnerabilities in Open Source components
• They provide a point-in-time snapshot of security
What happens when the threat landscape changes?
8. The Threat Landscape Constantly Changes
• VulnDB (Open Source Vulnerability Database)
• In 2015, over 3,000 new vulnerabilities in open source
• Since 2004, over 74,000 vulnerabilities have been disclosed in the NVD
• Only 63 reference automated tools: 50 are vulnerabilities in the testing tools themselves, and 13 could be identified by a fuzzer
[Chart: open source vulnerabilities reported per year, 2000-2015, by source (NVD vs. VulnDB-exclusive), rising to over 3,000 per year]
9. Black Duck Open Source Security Audit Report
Highlights Security & Management Challenges
10. We Have Little Control Over How Open
Source Enters The Code Base
OPEN SOURCE
CODE
INTERNAL CODE
OUTSOURCED CODE
LEGACY CODE
REUSED CODE
SUPPLY CHAIN CODE
THIRD PARTY CODE
DELIVERED CODE
11. Open Source is an Attractive Target
OPEN SOURCE IS USED EVERYWHERE
VULNERABILITIES ARE PUBLICIZED
EASY ACCESS TO SOURCE CODE
STEPS TO EXPLOIT READILY AVAILABLE
12. Who’s Responsible For Security?
Commercial Code:
• Dedicated security researchers
• Alerting and notification infrastructure
• Regular patch updates
• Dedicated support team with SLA
Open Source Code:
• "Community"-based code analysis
• Monitor newsfeeds yourself
• No standard patching mechanism
• Ultimately, you are responsible
13. How are Companies Managing Open Source Today? Not Well.
TRACKING VULNERABILITIES
• No single responsible entity
• Manual effort and labor intensive
• Unmanageable (11/day)
• Match applications, versions, and components
SPREADSHEET INVENTORY
• Depends on developer best effort or
memory
• Difficult maintenance
• Not source of truth
MANUAL TABULATION
• Architectural Review Board
• Occurs at end of SDLC
• High effort and low accuracy
• No controls
VULNERABILITY DETECTION
Run monthly/quarterly vulnerability
assessment tools (e.g., Nessus, Nexpose)
against all applications to identify exploitable
instances
14. Automating Five Critical Tasks and Having a
Bill of Materials Provide Distinct Advantage
1. INVENTORY open source software
2. MAP known security vulnerabilities
3. IDENTIFY license compliance risks
4. TRACK remediation priorities & progress
5. ALERT on new vulnerabilities affecting you
Visibility AND Control
15. Best Practices For Open Source
• Build and automatically enforce OSS policies
• Identify OSS components early in the SDLC
• Automatically create and maintain bills of material
• Continuously monitor threat environment for new vulnerabilities
Reqs
• OSS policies
• Application criticality ranking
• OSS risk parameters: license risk, security risk, operational risk
Design
• OSS selection
• Design review: license risk, security risk, operational risk
Code
• OSS detection: automatically detect and alert on non-conforming components
• Correlation with bills of material
Test
• OSS enforcement: detect and alert on non-conforming components
• Correlation with bills of material
Release
• OSS monitoring: timely OSS vulnerability identification & reporting
• Bug severity
• Remediation advice
16. Key Takeaways
• Security testing is a good thing
• It identifies common vulnerabilities in the code
companies write
• Different testing methodologies are better suited for
different bug types
• Open Source Security isn’t covered by traditional tools
• Monitor for open source with known vulnerabilities, early
in the SDL
• Monitor production code for new vulnerabilities
• Security testing is a point-in-time snapshot
• New vulnerabilities may result from:
• Changes to the code, which can change its security posture
• Changes in the threat environment, even if the code hasn't changed
17. What Can You Do Tomorrow?
Speak with your head of application development
and find out:
• What policies exist?
• Is there a list of components?
• How are they creating the list?
• What controls do they have to ensure nothing gets
through?
• How are they tracking vulnerabilities for all components
over time?
18. About Black Duck
• 7 of the top 10 software companies, and 44 of the top 100
• 6 of the top 8 mobile handset vendors
• 6 of the top 10 investment banks
• 27 of the Fortune 100
• 1,600 customers, 240+ employees, 24 countries
• Recognition: Gartner Group "Cool Vendor"; four years in the "Software 500" largest software companies; "Top Place to Work," The Boston Globe; award for innovation six years in a row (2014)
The broader use of open source has been great for businesses. Open source components relieve development teams from writing many features from scratch, lowering development costs and speeding time to market. Many popular open source libraries are used by thousands of organizations and have proven their effectiveness in large, enterprise applications.
Our audits find open source in over 98% of the applications we test, and on average about one-third of each code base is open source.
We’ve seen a trend recently in “named vulnerabilities”, and Heartbleed, Shellshock, Freak and the others are likely familiar to you. What do these all have in common?
Each is a vulnerability in a widely used open source component
Each existed for years without being detected by automated analysis tools and penetration testing methods.
Each was ultimately identified and disclosed by security researchers conducting manual code reviews.
If automated security analysis tools and penetration testing tools were effective at finding vulnerabilities in open source, these vulnerabilities would have been found long ago.
The two most common automated security testing methodologies are static analysis and dynamic analysis. In both cases, these tools look for common security vulnerabilities, unknown to the developer, in the custom code written by development engineers.
Static analysis works by scanning source code or binaries and building a model of the application's data flow and control flow. Once the model is built, the tools run predetermined rules against it. For example, a rule may look for an instance of a string copy, then traverse the model to determine whether the source buffer can hold a value larger than the destination buffer. If so, a buffer overflow could be possible. Potential issues are mapped to the source code, making it simpler for developers to examine each issue, determine whether it is a true or false positive, and remediate.
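To make the idea concrete, here is a deliberately toy sketch of a static-analysis rule. Real tools build data-flow and control-flow models as described above; this fragment only pattern-matches a known-dangerous call (`strcpy`, which copies with no bounds check) and reports the line number, showing how findings map back to source code. The function name and C snippet are mine, purely for illustration.

```python
import re

# A small C snippet with an unbounded string copy into a fixed buffer.
C_SOURCE = """\
#include <string.h>
void greet(const char *name) {
    char buf[8];
    strcpy(buf, name);   /* unbounded copy */
}
"""

def find_unbounded_copies(source):
    """Flag calls to strcpy and return (line number, code) pairs."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r"\bstrcpy\s*\(", line):
            findings.append((lineno, line.strip()))
    return findings

for lineno, code in find_unbounded_copies(C_SOURCE):
    print("possible buffer overflow at line %d: %s" % (lineno, code))
```

A real analyzer would go further and check whether the source of the copy can actually exceed the destination buffer; the point here is only that results come back as source locations a developer can act on.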
Dynamic analysis works on running applications in a test environment, which by definition is very late in the development lifecycle. It also looks for common bugs resulting from coding errors, often by feeding the application unexpected input. An example is entering SQL commands in a password field to check for input validation. The results are mapped to the URL tested (which page), the input used, and the observed response. Developers must then trace each issue from the web application back to the source code for verification and remediation.
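The SQL-injection probe mentioned above can be sketched in miniature. This is not a real DAST tool: the vulnerable `login` function and the payload are illustrative, and the "application" is just an in-memory database. The point is that a dynamic scanner never reads the source; it sends unexpected input and observes behavior.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(name, password):
    # Vulnerable on purpose: user input is spliced directly into the query.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

# The classic payload turns the WHERE clause into a tautology,
# so the login check passes without knowing the password.
payload = "' OR '1'='1"
print(login("alice", "wrong-password"))   # False: ordinary failed login
print(login("alice", payload))            # True: injection succeeded
```

A dynamic tool would report this finding against the login URL and the payload used; someone still has to trace it back to the string-built query in the source.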
These tools are very helpful in preventing common security issues in applications – but what’s missing?
Organizations should use static and dynamic analysis to find bugs in the code they write, but…
Open source vulnerabilities are too complex and too nuanced to be found by automated tools
If the tools were effective at finding vulnerabilities in open source, the vulnerabilities would have been found long ago
Heartbleed was present in OpenSSL for more than two years, despite constant testing with automated tools.
50+ vulnerabilities in OpenSSL since Heartbleed have all been found by researchers.
Vulnerabilities in open source are almost exclusively found by researchers manually inspecting the code and conducting experiments
Of the 4,000 vulnerabilities identified last year, fewer than 10 were identified by automated tools.
• Very useful in identifying common security bugs in custom code
• Typically the responsibility of the security team
• Some can integrate into the build
• Provide a snapshot of security vulnerabilities that each tool can identify
• Exploitability of an issue can easily change
• Results require review and scrubbing
• #1 complaint: too many useless issues
• Typically used late in the SDLC
• Often require a compiled application and/or test environment
They provide a snapshot of the perceived security of the code base – at a single point in time.
When a product is released or deployed, security testing usually stops. After all, the code base isn’t changing, so the automated tools would just return the same results over and over again.
And while the code base may not change – the threat environment changes constantly as new vulnerabilities are discovered and disclosed. In 2014, over 7,900 new vulnerabilities were disclosed by NIST, a little over half of which were in open source components. These were often not obvious bugs, and very few were identified by automated tools. Instead, individual security researchers discovered and disclosed the issues.
Managing open source can be a challenge, since it can enter the code base in several ways. You may have policies, and even review and approve open source in design reviews, but developers may reuse internal code that includes older open source components, pull unapproved code from web-based repositories, or integrate code from supply chain partners.
The end result is deployed code that contains open source, often without the knowledge or review of development managers and security teams.
MIKE: Open source is not necessarily less secure, or more secure, than commercial software. There are, however, some characteristics of open source that make it particularly attractive to attackers.
Open source is widely used by enterprises in commercial applications
Therefore, a new vulnerability in a popular project provides a target-rich environment for attackers.
Attackers have access to the code for analysis
Vulnerabilities in commercial code are exploitable, but attackers don’t have easy access to the source for analysis. That’s not the case in open source, where everyone has access. Like researchers, attackers can also identify new vulnerabilities
When new vulnerabilities are disclosed, we publish them to the world
NIST maintains the National Vulnerability database as a publicly available reference for vulnerabilities identified in software, and other sources – most notably OSVDB – focus on all identified vulnerabilities in open source.
Proof of the vulnerability (in the form of an exploit) is often included
When a vulnerability is discovered, the researcher will typically provide proof of the vulnerability in the form of exploit code, making the attackers’ job even easier
Attackers can use these as well – but if they are confused, there are typically YouTube videos available to provide step-by-step instructions
What’s the implication of using open source code? Something many organizations haven’t considered is that the support model is entirely different.
With commercial code, there are often dedicated security researchers, whose findings are put out via a robust alerting infrastructure to all their customers. Regular patches means their customers need not worry too much about remediation, as long as their patch management process is fairly robust. And most importantly, dedicated support teams are able to respond to your issues should anything happen.
With open source code, security research is often done by “white hat” hackers, academics, and the general open source community. There isn’t necessarily a clear process for making sure all code commits do not introduce new vulnerabilities.
Security issues are usually announced on newsfeeds, email lists which you need to subscribe to. There is no proactive alerting for customers since there are no “customers” in the traditional sense of the word.
When bug fixes go out, patching usually just means downloading the latest version, which may break the application. There is no one standard way of distributing patches to open source code.
And finally, the biggest challenge of all is that your engineering and security teams are ultimately responsible for the open source code you use. In case of a security incident, when it comes to open source there is no vendor you can point a finger at. That means the imperative is on you to be extra-vigilant when it comes to open source vulnerabilities.
MIKE: In short, many companies are not addressing this. The best practices we have seen in large, multi-national organizations, with mature SDLC practices, would be similar to the 3 activities listed here – question development teams about what they are using, tally the results in a spreadsheet, and react to vulnerabilities that they hear about.
Manual tabulation
Manual tabulation occurs either at design review (and is therefore dependent on developers adhering to version requirements and not adding additional functionality) or at the end of the development cycle (therefore dependent on the dev teams' memory and best efforts). In both cases, accuracy is dependent on static requirements or managers’ memories.
Accuracy at the beginning of the SDLC ignores any changes in requirements, especially in an Agile environment. It is also dependent on developers selecting the approved version of a component
Accuracy at the end of the SDLC is subject to recollection and level of effort
Maintain results in a spreadsheet
Updates to code that include new open source may not be captured
Tracking of new vulnerabilities in the components used is decentralized, at best
Manual tracking quickly becomes unmanageable
On average, 11 new vulnerabilities per day
What do you do if you have 100 internal applications, and each uses 10 open source components?
A best-practices solution would combine elements of TRUST, VERIFICATION, and MONITORING:
1 – Starting with TRUST, this is providing developers and architects a way to choose open source components that are free of known vulnerabilities, and have active community support. This is a proactive step that reduces risk downstream in the software development process, and is the most cost-effective means of risk reduction.
2 – VERIFICATION means two things: having an accurate inventory of open source, and being able to map it against all known vulnerabilities, in any and all applications, at any point in the SDL
3 – MONITOR means being able to monitor the released code for newly discovered vulnerabilities and alert the right people for remediation.
Many organizations end security testing when applications are released. After all, the code base isn’t changing, nor are the security rules in the tools, so why test simply to see the same results again? However, this ignores the fact that while the code base isn’t changing, the threat environment changes constantly. With over 4,000 new vulnerabilities each year, a comprehensive solution should be continuously monitoring this constant stream of new vulnerabilities, and automatically notify you of any new vulnerabilities in the open source you used in deployed applications, including:
Which applications use the code
How critical the vulnerability is, and
How to remediate it
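The monitoring step above can be sketched as an event handler: when a new disclosure arrives, check it against the deployed bill of materials and produce an alert per affected application. The component/application names here are hypothetical; the CVE used in the example is the real identifier for the glibc Ghost vulnerability mentioned earlier.

```python
# Deployed bill of materials: (component, version) -> applications using it.
bill_of_materials = {
    ("glibc", "2.17"): ["payments-api", "auth-service"],
}

def on_new_disclosure(component, version, cve, severity):
    """Build one alert per deployed application that uses the
    newly disclosed vulnerable component version."""
    alerts = []
    for app in bill_of_materials.get((component, version), []):
        alerts.append({
            "application": app,
            "cve": cve,
            "severity": severity,
            "action": "upgrade %s beyond %s" % (component, version),
        })
    return alerts

for alert in on_new_disclosure("glibc", "2.17", "CVE-2015-0235", "critical"):
    print(alert["application"], alert["cve"], alert["action"])
```

Note that nothing in the code base changed here; the alert exists only because the threat environment did, which is exactly why monitoring cannot stop at release.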
In summary, we've discussed:
• OSS is a pervasive and important part of app development
• OSS has unique security and support challenges
• Existing tools don't fill the gap
• Manual processes aren't sufficient
• Therefore, the level of risk warrants action
If you agree this is a priority for you, the next steps are critical. Most CISOs we speak with want to find out more about the current situation at their organization. The best person to ask is often the head of application development.
What you want to know are the answers to the following questions:
What policies exist?
Is there a list of components?
How are they creating the list?
Are they tracking vulnerabilities?
How do they ensure nothing gets through?
These questions will shed light on the current state of how open source is used and managed at your organization and give you a good starting point for further discussions. What would you propose the next steps should be?