The document discusses strategies for monitoring and measuring cloud security. It cites Gartner's prediction that 95% of cloud security failures through 2020 will be the customer's fault, and groups the top OWASP risks into authentication and access control, unvalidated client-side input, and bad housekeeping. The document advocates practices like shifting security left, using tools wisely, denying access by default, and ensuring proper logging.
Thomas Scott's talk from AWS + OWASP event "Strategies for Monitoring and Measuring Cloud Security"
1. CONFIDENTIAL DO NOT DISTRIBUTE
STRATEGIES FOR MONITORING AND
MEASURING CLOUD SECURITY
THOMAS SCOTT Solutions Consultant
Thomas.Scott@Armor.com
@dfwcloudsec
thomas-scott-cloudsec
Wheel of Doom
From A Journey into Microservices by Hailo
HOW DO WE PROTECT SO MANY APPS
AND SO MUCH DATA?
Top Strategic Predictions for 2016 and Beyond – Gartner 2016
95% OF CLOUD SECURITY FAILURES
THROUGH 2020 WILL BE THE
CUSTOMER’S FAULT
😱
http://www.gartner.com/newsroom/id/3143718
THANK YOU
THOMAS SCOTT Solutions Consultant
Thomas.Scott@Armor.com
@dfwcloudsec
thomas-scott-cloudsec
Editor's notes
Microservice, application, and infrastructure ecosystems are exploding in both variety and complexity.
We live in a copycat industry. On multiple occasions, I’ve heard colleagues say “Well Netflix and CapitalOne are utilizing this microservice and they’re great at DevOps…so if I use that microservice I will be great at DevOps too!”.
Unfortunately, these lines of thinking lead to two fundamental problems.
The reality is you own responsibility for securing the entirety of your application stack.
Gartner predicts that, “95% of Cloud security failures through 2020 will be the customer’s fault.” Failure to protect against any of the OWASP top 10 is sadly a contributing factor to this statistic.
Applying the OWASP top 10 is critical to identifying common issues and protecting organizations against them.
The top 10, as I’m sure many of you are aware, can be divided out into three categories.
For Authentication & Access Control, you’ve got Broken Authentication and Broken Access Control
Unvalidated Client-side Input incorporates 4 of the top 10.
Injection
XML External Entities
Cross-site Scripting
Insecure Deserialization
Finally, Bad Housekeeping rounds out the remaining top 10.
Sensitive Data Exposure
Security Misconfiguration
Using Components with Known Vulnerabilities
Insufficient Logging & Monitoring
However, I’m not here to teach you all the OWASP top 10 and why they are important. As I stated before, I am by no means an expert. OWASP has done all of the heavy lifting for us.
These critical security risks are not new. We all know that the bad practices and methodologies that lead to these risks are extremely common. It is up to us to follow through on eliminating these security failures.
A forward looking theme is to Do Less More Often.
Retrofitting a variety of security controls and practices after code/infrastructure is running in the wild can be
Time consuming
Error prone
And most importantly it can be extremely costly
At every opportunity you get, shift security as far left in the development lifecycle as possible.
Security should be baked into development, deployment, and operations, as well as being thoroughly and frequently tested at all levels.
Shifting left encourages automation thereby reducing errors created from manual actions and codifying organizational and departmental security standards.
This slide is where most vendors love to begin pitching how their solution is going to solve all of your problems and if you’d just give them 5 minutes of your time, they’ll show you how!
Unfortunately, the mentality of buying a new tool to solve every new problem has created an unmanageable task for security professionals.
To create an effective application security program, you need to strike a balance between native and third party tools to help you.
Cloud native tools have a tremendous upside in that they often have simple integrations from an architectural perspective. The downside is that they generally require the knowledge and bandwidth of your team to manage and operationalize.
Third party integrations and services can help eliminate the tedious and low-value aspect of SecOps for things like creating and updating rules and policies. This is generally a great benefit from a resource constraint perspective but often comes with a higher upfront cost.
With the increasing popularity of microservices and code based infrastructure, authentication and access control is becoming one of the final frontiers of security in the cloud.
Let’s talk a little about what AWS does for you natively and what things you can do to help round out your portion of the shared responsibility model.
The good news is that access to your infrastructure and AWS resources is deny by default. AWS wants to make things easy for you but doesn’t want you to get in too much trouble right out of the gate.
One thing to keep in mind however is that in the cloud we cannot take a perimeter security approach in regards to access control.
Authorization must be validated at more than just the request initialization level.
The wealth of information provided to you by AWS in regards to Authentication and Access control is invaluable. However, implementing best practices is where AWS stops doing all the work for you.
What are these best practices...
First start with
Enabling MFA for AWS console and application access
Disable the root account
Apply security policies to groups rather than individual users
For those who manage their IAM environment, they’ll know that AWS already has a nice Security Status section with big green check marks showing whether or not you’re doing what you should be doing
Although it shouldn’t have to be said, never open up an S3 bucket to the world unless it absolutely needs to be
Leverage AWS Config to use prebaked rules or build your own in order to evaluate your resource configurations against a set of rules or policies. You can then be alerted anytime a config drifts from your policy or build a Lambda trigger that will roll back the change.
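As a minimal sketch of auditing the MFA practice above (the helper name and sample data are mine, not from the talk), you could flag console users that have no MFA device attached:

```python
def users_missing_mfa(users, mfa_devices_by_user):
    """Return the names of IAM users with no MFA device attached."""
    return [u["UserName"] for u in users
            if not mfa_devices_by_user.get(u["UserName"])]

# With boto3, the inputs would come from calls like (not executed here):
#   iam = boto3.client("iam")
#   users = iam.list_users()["Users"]
#   mfa_devices_by_user = {
#       u["UserName"]: iam.list_mfa_devices(UserName=u["UserName"])["MFADevices"]
#       for u in users}

# Sample data for illustration only:
flagged = users_missing_mfa(
    [{"UserName": "alice"}, {"UserName": "bob"}],
    {"alice": [{"SerialNumber": "arn:aws:iam::123456789012:mfa/alice"}]})
```

Running a check like this on a schedule is one way to keep the Security Status green marks honest.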
Services like Cognito User Pools will help add layers of security to authentication such as
MFA via SMS or time-based OTP (one-time passwords)
Encryption at-rest and in-transit for authentication transactions
It also gives you compromised-credential checking, which protects your users from using credentials that have been exposed in breaches of other websites
Finally, you can utilize API Gateway Usage Plans to rate limit API calls made from clients
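A usage plan boils down to a throttle (steady-state rate plus burst) and an optional quota. A small sketch of such a definition (the plan name and limits are hypothetical, not from the talk):

```python
def build_usage_plan(name, rate_limit, burst_limit, daily_quota):
    """API Gateway usage-plan definition: throttle clients to a
    steady-state request rate with a burst allowance and a daily cap."""
    return {
        "name": name,
        "throttle": {"rateLimit": float(rate_limit),  # requests/second
                     "burstLimit": burst_limit},      # short-term spike allowance
        "quota": {"limit": daily_quota, "period": "DAY"},
    }

# Created via boto3 like (not executed here):
#   boto3.client("apigateway").create_usage_plan(
#       **build_usage_plan("standard-clients", 100, 200, 10000))
```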
When it comes to unvalidated client side input, this is where you will get the most help.
The AWS Marketplace is full of WAFs, and AWS’s own WAF offering can be easily integrated with AWS Application Load Balancers and CloudFront Distributions.
However, in many ways, technology alone is not the key. There is no switch we can simply flip on and be secure. How we use the technology is critical to our success.
AWS has a whitepaper titled “Use AWS WAF to mitigate OWASP’s top 10 Web Application Vulnerabilities”. This will help you define baseline rules. However, remember that these rules are not exhaustive and should be used as a great starting point.
After you read the whitepaper on using the AWS WAF....actually use the AWS WAF! Implement rate-based rules to prevent specific IPs from spamming you too hard
If you can identify stolen tokens, use a token blacklist rule to block further requests with that token.
Use the built in capabilities of WAFs to implement policies to prevent file traversal
Also, consider managed rules. These managed rules will help take the operational burden off your organization from a maintenance perspective.
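The rate-based rule mentioned above can be sketched as a WAFv2 rule definition; everything here (rule name, threshold, priority) is illustrative rather than from the talk:

```python
def rate_limit_rule(name, limit, priority):
    """WAFv2 rate-based rule: block any single source IP that exceeds
    `limit` requests within the rolling 5-minute evaluation window."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {"RateBasedStatement": {"Limit": limit,
                                             "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": name},
    }

# Attached to a web ACL via boto3 like (not executed here):
#   wafv2.create_web_acl(Name="app-acl", Scope="REGIONAL",
#                        DefaultAction={"Allow": {}},
#                        Rules=[rate_limit_rule("ip-flood", 2000, 1)], ...)
```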
Now let’s wrap it up with just general bad housekeeping that is pervasive in our industry.
When it comes to Sensitive Data Exposure, the most common flaw is simply not encrypting sensitive data. That seems unbelievable but it is the reality we live in.
In order to be successful from a security perspective, you HAVE TO KNOW your environment.
Strong detective controls are crucial for security operations and forensics.
Logging is where my world and your worlds collide. Insufficient logging is the bedrock of nearly every major incident. It’s very difficult to know what happened, if there is no record of it.
AWS provides a variety of ways to log and ingest service data and to monitor and respond to log output and security findings.
A shocking revelation I’ve found throughout my conversations with peers in this industry over the last few years is that CloudTrail is not always enabled. This is unbelievable since CloudTrail is free!
A logging standard should also be built to determine what activities and sensitive information your applications do and don’t log. These logs should also have an established guideline for what the output looks like.
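One way to codify such a standard is a small redaction helper run before events leave the application; the sensitive field names below are hypothetical examples, not a prescribed list:

```python
# Fields your logging standard forbids from appearing in log output
# (illustrative set -- define your own per your standard).
SENSITIVE_KEYS = {"password", "authorization", "ssn", "card_number"}

def redact(event):
    """Return a copy of a log event dict with sensitive fields masked,
    so they never reach the central log store."""
    return {k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else v
            for k, v in event.items()}
```

Enforcing the standard in code, rather than by convention, keeps one forgetful log statement from leaking credentials downstream.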
A big point to stress is that logging should not be used only for forensics and post-mortems. All logging should be monitored for suspicious activity and you should know how to respond in real time.
Streaming these logs to a central repository for analysis and correlation is essential. However, please keep in mind data sovereignty.
I know we’re all full and getting sleepy so I’ll wrap it up with just a few final best practices.
Use Amazon Inspector to assess vulnerabilities in your environment such as insecure protocol usage or SSH misconfiguration.
Use segmentation throughout your stack to prevent unauthorized access to a database server from anything other than an application server...and conversely any access to an app server from something other than a web server.
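That tiering can be expressed with security group ingress rules that reference the upstream tier's group rather than a CIDR range. A sketch, with hypothetical group IDs and ports:

```python
def tier_ingress(port, source_sg):
    """EC2 security group ingress rule allowing `port` only from members
    of `source_sg`, not from arbitrary IP ranges."""
    return {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": source_sg}]}

# DB tier accepts 5432 only from the app tier; app tier accepts
# 8080 only from the web tier (group IDs are placeholders):
db_rule = tier_ingress(5432, "sg-app")
app_rule = tier_ingress(8080, "sg-web")

# Applied via boto3 like (not executed here):
#   ec2.authorize_security_group_ingress(GroupId="sg-db",
#                                        IpPermissions=[db_rule])
```

Referencing groups instead of CIDRs means the rule keeps working as instances in each tier come and go.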
Encrypt S3 buckets and use HTTP headers to fail uploads that don’t use encryption.
Build workflows that refuse new unencrypted content or alert you for configurations that aren’t using encryption.
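Refusing unencrypted uploads can be done at the bucket-policy level by denying any PutObject that omits the `x-amz-server-side-encryption` header. A sketch of such a policy statement (the bucket name is a placeholder):

```python
def deny_unencrypted_uploads(bucket):
    """S3 bucket policy statement rejecting PutObject requests that do
    not carry the x-amz-server-side-encryption header."""
    return {
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        # "Null": true matches requests where the header is absent.
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }
```

Attached via `put_bucket_policy`, this makes an unencrypted upload fail at the API rather than relying on every client to remember the header.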
The final note for good housekeeping: please don’t enable or install unnecessary services. This simply expands your vulnerability footprint with no value added to your organization.