2. Security = Protecting Assets’ CIA
• Confidentiality = Keeping the Asset from those who should not have access to it
• Integrity = Protecting the Asset from unauthorized alteration, damage, or destruction
• Availability = Ensuring that the Asset can be accessed by those who should get to it
Asset Examples:
• Data: Customer Lists, Client PII (Personally Identifiable Information), Financial records
• Systems: Web site, Order processing system, Financial system
• Secrets: Proprietary methods or processes, Formulas
• Capabilities: People, processes, tools, data
Threat Examples:
• Hackers, Corporate espionage, System failures, Disgruntled employees, Ex-employees
Vulnerability Examples:
• Threat: Hackers: Firewalls misconfigured, Software defects, Inadequate monitoring
• Threat: Corporate Espionage: Firewalls misconfigured, Software defects, Doors not locked
• Threat: System failures: Inadequate monitoring, Lack of redundancy or back-ups
• Threat: Disgruntled employees: Lack of access control, Inadequate monitoring
• Threat: Ex-employees: Doors not locked, Credentials not revoked, Lack of access control
Exploit Example:
• A hacker using SQL Injection inserts code into your system that exports Credit Card numbers.
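The exploit above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite database; the table, column, and function names are invented for the example, not taken from the source. It shows both the vulnerable string-concatenation query and the parameterized-query mitigation.

```python
import sqlite3

# Toy data store standing in for the attacked system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (customer TEXT, number TEXT)")
conn.execute("INSERT INTO cards VALUES ('alice', '4111-1111-1111-1111')")
conn.execute("INSERT INTO cards VALUES ('bob', '5500-0000-0000-0004')")

def search_vulnerable(name):
    # Vulnerable: untrusted input is concatenated into the query text.
    query = "SELECT number FROM cards WHERE customer = '" + name + "'"
    return conn.execute(query).fetchall()

def search_safe(name):
    # Mitigated: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT number FROM cards WHERE customer = ?", (name,)
    ).fetchall()

attack = "nobody' OR '1'='1"            # classic injection payload
print(search_vulnerable(attack))         # exports every card number
print(search_safe(attack))               # matches no customer, returns nothing
```

The payload turns the vulnerable query's WHERE clause into a tautology, so every row is exported; the parameterized version never interprets it as SQL.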
4. Risk Management is a well-defined discipline. “Risk” is defined as a potential (but not certain) future
event that would have a material impact if it occurs.
Security Risk Analysis is merely Risk Management applied to Security Risks – Risks to the Confidentiality,
Integrity or Availability of Assets.
• Identifying and analyzing Assets allows us to understand the potential business impacts if those
assets are violated.
• Identifying and evaluating Threats allows us to identify the potential future events that could
threaten the Assets.
• Identifying and understanding Vulnerabilities allows us to forecast the probability that we will
actually experience those Threats.
The purpose of Risk Management is to identify mitigations (Actions that can be taken before the Risk is
realized that will reduce or eliminate it).
Mitigations for Security Risks focus on reducing or eliminating Vulnerabilities.
Security Testing focuses on finding Vulnerabilities so they can be mitigated.
5. “Defense In Depth” refers to the necessity of a multi-pronged approach to security. Network security
and operational policies are necessary, but not sufficient in and of themselves. The defenses that are
needed against the Security Threats we face must address all of the ways in which we are vulnerable to
attack. On this page, we list several of the layers that are necessary in a Defense In Depth strategy.
This session focuses on one of those layers: Ensuring that the software we create is as free of security
vulnerabilities as we can possibly make it. Security vulnerabilities in software are always a side-effect of
certain kinds of defects that are injected during the development process. No software can ever be
guaranteed to be free of defects, so the concept of Defense In Depth is applied within software
development by addressing every part of the software development life cycle.
6. Even Security Testing must be done “in depth”
Each of the 5 layers of security testing is likely to find different types of Vulnerabilities;
omitting any of the layers could allow some Security Risks to remain.
7. 8 Design Principles, proposed by Saltzer and Schroeder in 1975, still apply today.
1. Economy of mechanism. Complexity leads to errors, defects and Vulnerabilities.
2. Fail-safe defaults. Base access decisions on permission rather than exclusion.
3. Complete mediation. Every access to every object must be checked for authority.
4. Open design. Secret designs will be reviewed only by hackers.
5. Separation of privilege. e.g. Require two keys to unlock a capability.
6. Least privilege. Every program and every user has only necessary privileges.
7. Least common mechanism. Minimize mechanisms common to multiple users.
8. Psychological acceptability (Usability). So users routinely do the secure thing.
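Two of the principles above, fail-safe defaults and complete mediation, can be sketched as an allow-list check where anything not expressly granted is denied. The role and resource names are invented for illustration.

```python
# Fail-safe defaults: access decisions are based on permission rather than
# exclusion. Complete mediation: every access goes through is_allowed().
PERMISSIONS = {
    ("clerk", "orders"): {"read"},
    ("manager", "orders"): {"read", "update"},
}

def is_allowed(role, resource, action):
    # Deny by default: absence from the allow-list means no access.
    return action in PERMISSIONS.get((role, resource), set())

assert is_allowed("manager", "orders", "update")
assert not is_allowed("clerk", "orders", "update")   # not granted
assert not is_allowed("intern", "orders", "read")    # unknown role: denied
```

The key design choice is that an unknown role or resource falls through to an empty permission set, so new code paths are secure until someone explicitly grants access.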
Secure Wrappers intercept calls to a flawed component and ensure that its known
Vulnerabilities cannot be exploited, by validating inputs and system conditions.
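A Secure Wrapper can be sketched as follows. The `legacy_lookup` function is a hypothetical stand-in for a flawed component with known input-handling vulnerabilities; the wrapper validates every call before delegating.

```python
def legacy_lookup(record_id):
    # Stand-in for a flawed component: imagine it misbehaves on
    # non-numeric or out-of-range ids.
    return {"id": record_id}

def wrapped_lookup(record_id):
    # The Secure Wrapper: validate inputs before the flawed code runs.
    if not isinstance(record_id, int):
        raise ValueError("record id must be an integer")
    if not 0 < record_id <= 999_999:
        raise ValueError("record id out of range")
    return legacy_lookup(record_id)

print(wrapped_lookup(42))    # safe call passes through
```

Callers are pointed at `wrapped_lookup` instead of the flawed routine, so the known Vulnerabilities become unreachable without modifying the component itself.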
Input Validation Principles
1. All input sources must be identified. Not just the User interface.
2. Specify and validate data. Data must be validated against detailed specifications.
3. The specifications must address limits, minimum and maximum values, minimum and
maximum lengths, valid content, initialization and re-initialization requirements, and
encryption requirements for storage and transmission.
8. 4. Ensure that all input meets its specification as soon as possible after it is received.
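The validation principles above can be sketched as a small specification-driven validator. The field names and rules here are invented examples; the point is that limits, lengths, and valid content all live in one explicit specification that is checked as soon as data arrives.

```python
import re

# Detailed per-field specifications: types, limits, and valid content.
SPEC = {
    "quantity": {"type": int, "min": 1, "max": 100},
    "zip_code": {"type": str, "pattern": r"^\d{5}$"},
}

def validate(field, value):
    # Check the value against every rule in its specification.
    spec = SPEC[field]
    if not isinstance(value, spec["type"]):
        return False
    if "min" in spec and value < spec["min"]:
        return False
    if "max" in spec and value > spec["max"]:
        return False
    if "pattern" in spec and not re.match(spec["pattern"], value):
        return False
    return True

assert validate("quantity", 5)
assert not validate("quantity", 0)          # below minimum
assert not validate("zip_code", "1234a")    # invalid content
```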
9. Security Requirement Types:
• User Authentication requirements specify the strength (vs. ease of use) of authentication. e.g.
• All users shall be authenticated using a method not less rigorous than {example type}.
• {Function} shall require the user to re-authenticate and provide a session-specific pass key.
• Access Control requirements define different levels of access allowed by type of user. e.g.
• Read-only access to the {ABC} function shall be available to users in Classes B & C only.
• Update access to the {ABC} function shall be available to users in Classes D & E only.
• {XYZ} shall be initiated by users in Classes C or E only, and confirmed by a different user in
Class E only.
• Data Confidentiality requirements specify any data that shall receive special protection. e.g.
• Credit Card numbers, security codes and expiration dates shall not be stored in any form.
• {XYZ transaction} data shall be protected from interception while it is being transmitted to
{ABC System} using a method not less rigorous than 128-bit encryption.
• Data Integrity requirements specify system actions to protect against data corruption. e.g.
• Data received from {ABC} shall be checked for corruption (e.g. by a Check-sum)
• All updates and deletions of data shall be reversible by the system administrator.
• Input Validation requirements specify how inputs from untrusted sources shall be handled. e.g.
• User search inputs shall be checked for embedded SQL and not processed if any is found.
• Data received from {ABC system} shall be validated as follows …
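The Data Integrity requirement above (checking received data for corruption, e.g. by a check-sum) can be sketched like this; the payload format and function names are invented for the example.

```python
import hashlib

# The sender transmits a payload together with its SHA-256 digest.
def with_checksum(payload: bytes):
    return payload, hashlib.sha256(payload).hexdigest()

# The receiver recomputes the digest and rejects data that does not match.
def verify(payload: bytes, checksum: str) -> bool:
    return hashlib.sha256(payload).hexdigest() == checksum

data, digest = with_checksum(b"order=1234;amount=99.95")
assert verify(data, digest)                            # intact
assert not verify(b"order=1234;amount=0.01", digest)   # corrupted in transit
```

A cryptographic hash detects both accidental corruption and tampering, whereas a simple check-sum detects only the former; either satisfies the requirement as written.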
10. • Monitoring and Logging requirements specify mechanisms by which the system
itself or system administrators shall be able to identify nefarious activity. e.g.
• The system shall log all login attempts (successful or not) in the event log.
• The system shall raise a security incident when the event log size changes
unexpectedly.
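The logging requirement above (log all login attempts, successful or not) can be sketched with the standard logging module. The credential store and user names are toy examples, not a recommended authentication design.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("security")

VALID = {"alice": "correct-horse"}   # toy credential store for illustration

def login(user, password):
    ok = VALID.get(user) == password
    # Every attempt is logged, whether or not it succeeds.
    log.info("login attempt user=%s success=%s", user, ok)
    return ok

login("alice", "correct-horse")   # logged as a success
login("mallory", "guess")         # logged as a failure
```

In a real system the log records would feed the monitoring mechanism that raises security incidents, e.g. on a burst of failed attempts.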
Risk-Based System Security Testing: See the prior page.
11. Fuzz Testing (also called “fuzzing”) is automated negative testing of interfaces. Any interface
between a system and its environment is a potential attack vector, so all of the system’s
interfaces should be subjected to fuzzing. This includes the User Interface, files, network
connections, configuration settings (e.g. the Windows registry), etc.
White-box fuzzing uses inputs that are designed to stress the interfaces based upon their
specifications and designs. Developers are well-advised to include white-box fuzzing in their
automated testing plans to test the resiliency of their interfaces against unexpected inputs.
Black-box fuzzing is the most common form of Fuzz Testing – throwing random inputs at the
system to see how it handles them. Although Black-Box Fuzzing can be done with most
automated testing tools, there are also special-purpose tools available that are specifically
designed for Fuzzing.
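A black-box fuzzer of the kind described can be sketched in a few lines. The target function `parse_age` is invented for the example; the fuzzer counts any exception other than a clean validation rejection as a potential vulnerability.

```python
import random
import string

def parse_age(text):
    # Invented target: accepts a numeric age, rejects everything else.
    value = int(text)            # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def fuzz(target, runs=1000, seed=1):
    # Throw random printable strings at the interface and watch how it fails.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(runs):
        blob = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 20))
        )
        try:
            target(blob)
        except ValueError:
            pass                 # rejected cleanly: the expected outcome
        except Exception:
            crashes += 1         # anything else is a potential vulnerability
    return crashes

print("unexpected failures:", fuzz(parse_age))
```

Special-purpose fuzzing tools add coverage feedback and input mutation on top of this basic loop, but the principle, random inputs plus observation of failure modes, is the same.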
12. Use Cases are a widely used way to define system requirements. Each Use Case describes an
interaction between actors and the system, specifying what the system shall do in response to
those specific Actor actions. The key to defining system requirements with Use Cases is to
ensure the completeness of the set of Use Cases, so that all cases have been defined.
Misuse Cases (also known as Abuse Cases) use the same approach to define system security
requirements. A Misuse Case may be about legitimate actor(s) trying to subvert the system, or
it may be about an attack from outside the system. Either way, the system actions in the
Misuse Case define how the system should respond to prevent or limit the damage done by
the Misuse. As with Use Cases, the key to successfully using Misuse Cases is to ensure that all
threats or potential threats have been accounted for.
Identifying Misuse and Abuse Cases during the Requirements phase can ensure that appropriate
security controls are built into the system.
Each Misuse or Abuse case must be tested to ensure the system is resilient against it.
13. It is valuable to analyze the product architecture with reference to each of the six STRIDE
Threats.
STRIDE is a system developed by Microsoft for thinking about computer security threats. It
provides a mnemonic for security threats in six categories:
• Spoofing – An attacker pretending to be a person or system that should legitimately interact
with the system being attacked.
• Tampering – Altering a system or its data in order to gain control or advantage.
• Repudiation – Denying having performed an action, in a way the system cannot disprove.
• Information Disclosure – Publishing or taking confidential information.
• Denial of Service – Preventing a system from providing its intended service (usually by over-
consuming a resource).
• Elevation of Privilege – Gaining the ability to access data or system resources that should be
available only to more privileged users.
Any identified threat should be addressed in system design and development.
Testing must confirm that the threat has been mitigated.
15. for your organization-specific coding standards and any new security threats the
tool vendor has not yet addressed.
Peer Reviews – Static Analyzers cannot catch all of the issues in code. For issues that
require human analysis, the most effective approach is Peer Review guided by a
Checklist.
17. Penetration Testing simulates real attacks, probing the system just as hackers are likely to.
• Many good Penetration Testing tools are available.
• Manual Penetration Testing is necessary in addition to the use of tools because:
1. New attacks are being perpetrated almost daily, and no tool will be fully up-to-date.
2. The most critical parts of the system’s Attack Surface should be carefully probed.
Independent Security Review – Necessary in certain organizational contexts, e.g. when
risk appetite is low, the impacts of security incidents could be large,
regulatory oversight requires it, or market dynamics demand it.
• Expert review of key development artifacts (e.g. Architectural document, Design
specifications, source code, test plans and test results) can reveal shortcomings.
• External Penetration Testing will subject the system to a realistic attack.
These reviews and tests can be expensive, but the benefits (security incidents avoided) can
potentially outweigh the high cost.
Release Readiness Security Review should be one component of the overall Release
Readiness Review. At a minimum, it should include the following:
• Ensure that all security-related activities that were planned for the project have been
18. completed and that the outputs they should have produced are available and
complete. e.g.: Security Risk Assessment, Attack Surface Analysis, Static Code
Analysis, Penetration Testing
• Ensure that all security goals have been achieved including:
• All identified Security Risks have been mitigated as planned
• Any security issues have been adequately addressed
• The security impacts of any defects that will be released have been adequately
addressed
20. Microsoft has been focusing on improving the security of its software since the early 2000s, and
has achieved an order-of-magnitude improvement in the security of the software it produces. They have made
information about their Microsoft Security Development Lifecycle generally available for others to use.
They also provide tools to support various activities in this lifecycle.
http://www.microsoft.com/security/sdl/default.aspx
21. The Building Security In Maturity Model (BSIMM) was developed to guide organizations as they build
their ability to develop more secure software. The BSIMM is built around the Software Security
Framework (SSF), which includes the definition of a Secure Software Development Lifecycle (SSDL). The
SSDL Touchpoints are a comprehensive set of practices that should be incorporated into a Secure
Software Development Lifecycle.
http://www.bsimm.com/online/
22. CERT (Computer Emergency Response Team) is a division of the Software Engineering Institute (SEI), a
Federally-Funded Research and Development Center that is overseen by the US Department of Defense,
and is located at, and managed by Carnegie Mellon University (CMU) in Pittsburgh.
Cybersecurity Engineering: http://www.cert.org/cybersecurity-engineering/
Secure Coding: http://www.cert.org/secure-coding/
24. Open Web Application Security Project (OWASP) is an open community dedicated to enabling
organizations to conceive, develop, acquire, operate, and maintain applications that can be trusted.
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
1. Injection – Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent
to an interpreter as part of a command or query.
2. Broken Authentication & Session Management – Application functions related to authentication and
session management are often not implemented correctly.
3. Cross-Site Scripting (XSS) – XSS flaws occur whenever an application takes untrusted data and
sends it to a web browser without proper validation or escaping.
4. Insecure Direct Object References – A direct object reference occurs when a developer exposes a
reference to an internal implementation object, such as a file, directory, or database key.
5. Security Misconfiguration – Good security requires having a secure configuration defined and
deployed for the application, frameworks, application server, web server, database server, and
platform. Secure settings should be defined, implemented, and maintained, as defaults are often
insecure. Additionally, software should be kept up to date.
6. Sensitive Data Exposure – Many web applications do not properly protect sensitive data, such as
credit cards, tax IDs, and authentication credentials. Sensitive data deserves extra protection such as
encryption at rest or in transit, as well as special precautions when exchanged with the browser.
7. Missing Function Level Access Control – Most web applications verify function level access rights
before making that functionality visible in the UI. However, applications need to perform the same
access control checks on the server when each function is accessed.
8. Cross-Site Request Forgery (CSRF) – A CSRF attack forces a logged-on victim’s browser to send a
forged HTTP request, including the victim’s session cookie and any other automatically included
authentication information, to a vulnerable web application.
9. Using Components with Known Vulnerabilities – Components, such as libraries, frameworks, and
other software modules, almost always run with full privileges. Applications using components with
known vulnerabilities may undermine application defenses and enable a range of possible attacks and
impacts.
10. Unvalidated Redirects and Forwards – Web applications frequently redirect and forward users to
other pages and websites, and use untrusted data to determine the destination pages.
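The XSS mitigation implied by item 3, escaping untrusted data before sending it to a web browser, can be sketched with the standard library. The comment template and function name are invented for the example.

```python
import html

def render_comment(untrusted: str) -> str:
    # Escape untrusted data before embedding it in HTML sent to the browser.
    return "<p>" + html.escape(untrusted) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# The markup is rendered inert: angle brackets become &lt; and &gt;, so the
# browser displays the payload as text instead of executing it.
```

Escaping is context-sensitive; `html.escape` covers HTML element content and quoted attributes, while JavaScript, URL, and CSS contexts each need their own encoding.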