2. Disclaimer
• Please note that all opinions shared during today’s presentation
are solely my own and do not reflect those of my employer, past
or future employers, or my clients.
@ARRAGO2
3. Learning Objectives
Understand Best Practices for Defending Against Advanced
Persistent Threats
Identify and Understand Common Trends and Challenges
within Infosec for 2016
Mitigation of APTs
4. Who Am I?
• 10 Years of experience within the Infosec
Industry
Fortune 500s
SMBs
Telecom
Healthcare
5. What is Blue Team
Definition:
The group responsible for
defending an enterprise, and
maintaining its security posture
against red team and actual attacks.
6. Common Challenges in Corporations
• Allocation of Resources
• Allocation of Funding
• Time Management
• Skill Shortage
7. Story Time!
A Tale of Two Clients…
• Client 1: A Ransomware Attack Gone WRONG
• Client 2: A Ransomware Attack PREVENTED
8. What We Do (Defenders)
Look for Executables
Sniff Traffic
Analyze Logs
Identify Patterns
Identify Rogue Processes, Connections, Services, Users, Scheduled Tasks
What They Do (Attackers)
Minimize the amount of recognizable changes
Generate Minimal Traffic
Install Multiple avenues of Persistence
Re-enter a system and regain persistence if discovered
9. The Technical Issues…
Passwords
Securing the Environment
Understanding the Attacker’s Goal
10. Passwords
(aka where most problems stem from)
• Easy to Guess Passwords
• No Real Enforcement
• No Second-Factor Authentication
• Enforced Policies
11. Forget It…We’re Lazy!
(aka Headaches)
• Easy To Remember
• Reuse Old Password
• Based on easily Identifiable information
• Reuse same passwords multiple places
• We Never Learn!
12. Securing the Environment
(The Basics…)
Patching
Hardening
Testing
Logging
Aggregate Data
Build Situational Awareness
13. The Attacker’s Goal
• Persistence
• Data exfiltration
• Find default / weak passwords
• Compromise as many systems as possible
14. Lock down workstations by Group Policies
Limit network traffic
Restrict Remote SAM calls from PCs
Disable Java
Disable Macros
Whitelist good extensions
Monitor for odd patterns or behaviors
Backups
What We Can Do
15. In addition, organizations such as NIST recommend the following to mitigate threats:
Apply Industry Best Practices
Vulnerability Scan
Use EMET
Disable Telnet
Disable HTTP
Ensure no Clear Text Passwords are used
No open WiFi
Use TLS (SSLv3 is broken)
NIST: National Institute of Standards and Technology
16. Option 1: Minimal End User Impact
Option 2: Balanced End User Impact
Option 3: Hardened Environment (This also brings with it overhead and complexity)
Group Policies
17. A Look Back at 2016
• Ransomware attacks primarily targeted Healthcare, Government,
and Educational Institutions
• Ransomware Variants:
Crysis
Locky
Odin
Cerber
18. A Look Back at 2016
(Continued)
• State Sponsored Leaks
• State Sponsored Tools being sold
e.g. the Equation Group tools
19. A Look Back at 2016
(Continued)
• DDoS Attacks
Attackers / Nation-States
The Good Guys
20. Where Do We Go From Here?
• Ignore Everything We’ve Learned
OR
• Use the Knowledge we have in front of us to create change, and
secure our environment
Hello, I’m Angelo Rago, and I’ve worked various jobs throughout the years: Fortune 500s, SMBs, telecom, and even healthcare. Over the years I have honed my skills in many different disciplines, and one thing I have learned is that the same issues persist in every industry. I’m going to try to bring you through my journey of ten-some-odd years.
What is Blue Team, you may ask? It is the group responsible for defending an enterprise and maintaining its security posture against red team exercises and actual attacks. In simpler terms: we are the people who evaluate and preserve the network against attacks. We try to maintain security policies and prevent breaches to the best of our abilities.
A common challenge corporations and enterprises face is allocation of resources. Normally a company assigns resources to tasks that are profitable, like a professional service or product offering. Defending an infrastructure is often viewed as a waste of resources because it reads as a cost to the company. This isn’t always true in our infosec community, since we do offer professional services in offensive testing and prevention, but that’s not what I’m talking about. I’m talking about that comfy job you go to 8-to-4 every day, where Evil Corp has you coding away at their newest product, Evil App. They would rather have a team of developers who create new products than one that creates issues for other employees and limits the CEO’s ability to view SwiftOnSecurity’s Twitter posts.
We are seen as an obstacle, and this needs to change. I firmly believe a proper budget is needed so that we can best prevent viruses, malware, ransomware, breaches, DDoS attacks, and other attack variants. Many companies believe it is far cheaper to let an event happen than to prevent it. That myth is simply not true. We have seen time and time again that companies who have been breached get sued or go belly up. Just two weeks ago, Yahoo was sued by its own employees over the data breach of 2014. How about Target, where a third-party vendor caused the breach? Those are breaches; fast forward to ransomware attacks. These aren’t caused by third-party vendors but by your own employees trying to do what they do best every day: work. So Sally, your favorite receptionist, is entering client data when she suddenly gets an email from Big Boss 123 saying he needs a money transfer to help with his CRA problems. What does she do? Like a good employee, she clicks the link to help her favorite boss, and encrypts all of Evil Corp’s data files on that fancy new Synology they put in just weeks before for backups. Did I mention everyone trusted her, so she had admin access? Oh no, right? This is why we need proper funding allocated to IT, so unpredictable events like this can be prevented. From experience, I have found that a bit of resources put toward prevention over time is far cheaper than being reactive, which proves to be two to three times more expensive over the course of time.
Another issue we have is burning out our people. We give them a million tasks that are usually unrealistic for an eight-hour day. This not only causes burnout, it promotes mistakes and shortcuts. I’m a believer that the work you put in is the work you get out, but I’ve seen some very questionable situations where nobody cared about what you put in, only the quantity that came out. Welcome, folks, to 2010-plus: MSPs are running wild, and all Evil Corp cares about is its financials at the end of the year to impress investors. What we need is a fundamental change in the way we view our employees. They are assets to Evil Corp just as much as the products it sells. We need to invest in their skills, not just technical ones but their ambitions about where they’d like to go, if we are to grow our industry and prevent burnout. One issue we have today is that no one is willing to teach skills to the young members entering our field. We all had the opportunity to learn from a mentor, but someone fresh out of university today? In most cases that isn’t happening. A company would rather hire a veteran who has been through the trenches at a lower salary than train the young generation. I have many issues with this, but I won’t bore you here at BSides Toronto; let me get back to my Blue Team talk against that Evil Corp. Where was I again? Oh yes: we need to help develop our colleagues and junior members into the ideal infosec professionals, so one day they may help develop us dinosaurs out of our ways, and perhaps for once we might actually have a real security posture that can withstand the worst Evil Corp can throw at us.
I have a story for you. Client 1 called us out of the blue, on a sunny Tuesday morning I think. Ring ring, our phone rings: “I can’t seem to read any of our scans.” “What do you mean?” we say. “Once you receive them, they’re PDFs; you have Acrobat Reader; that’s impossible.” After a bit of back and forth, we find out that the whole site is using admin credentials for simple things like SMB mounts. Why? Because when the site was originally set up, the logic was that nothing could go wrong; you remember those XP days too, right? No cares, just get ’er done. Well, it came back to haunt us. Let’s just say a situation that should have been easily prevented went horribly bad. Anyway, we check their server; it’s perfect, we have shadow copy backups. Great, let’s restore them. So we do. The server came back clean, or so we thought. Fast forward a few weeks: they are hit again. The client says, “I think we are infected with ransomware again.” So we check, and yes, they were. This time when we checked their shadow copies we found none. What happened? The crypto infection had actually turned them off; even Shadow Explorer was useless at that point. We checked local backups: infected. Luckily, we were able to pull uninfected files out of memory because the system had not been rebooted. Restoring from our tertiary backup source would have meant data loss and prolonged downtime. For this client, the infection had casualties: it hit files we had never deemed important enough to back up, documents outside the scope of day-to-day work but of sentimental value to the client. Why? Because no one had ever bothered to ask him, “If you were to lose your 2,000-plus documents, would that cause you any issues?” He was always under the assumption that IT does everything.
Client 2:
We received a call that one of their machines showed funny characters when viewing documents. We investigated and found a variant of Locky on it. This site was unique in its setup: we had just audited them, and while users were allowed to write and modify files, we were keeping file history. We were able to recover 100% of their files and restore everything down to the minute; however, we could never prove where the infection came from, as the ransomware overwrote the history tied to the account that wrote to it. I had a fun time explaining to the client why we were able to recover all of their files but couldn’t tell them how they got infected, beyond “I know it came from email, I just don’t know which message or who clicked it.” This did not go over well with the client. In their minds, if you can recover the files, why can’t you know how it came in? After all, “we had all of the tools.”
So this brings me to my talk. As defenders on the Blue Team, one odd pattern we look for is executables masquerading as system processes, with names such as abc.exe or sysproc.exe. Seems legit, right? We look for odd behaviors by sniffing traffic with tools such as Wireshark, Snort, and so on; the list goes on. This is a great way to know, at a high level, what is going on.
Of course, none of this is useful unless we enable logging. I prefer logging from multiple sources so we know where the data is coming from. Aggregating logs has another advantage: we can graph patterns and see trends that wouldn’t be distinguishable from just one source.
This allows us to identify rogue processes, connections, services, users, and scheduled tasks that may be running on the infected system or systems.
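The masquerading check described above can be sketched in a few lines. Everything here is illustrative: the (name, path) record format and the known-good paths are my assumptions, not the output of any particular tool.

```python
# Minimal sketch: flag processes whose name matches a well-known Windows
# binary but whose on-disk path is not the expected system directory --
# a common masquerading trick (e.g. svchost.exe running out of AppData).
KNOWN_SYSTEM = {
    "svchost.exe": r"c:\windows\system32",
    "lsass.exe": r"c:\windows\system32",
}

def flag_rogue(processes):
    """Return the (name, path) pairs that look like masquerading attempts."""
    flagged = []
    for name, path in processes:
        expected = KNOWN_SYSTEM.get(name.lower())
        if expected and not path.lower().startswith(expected):
            flagged.append((name, path))
    return flagged
```

In practice you would feed this from a process inventory (tasklist, Sysmon, an EDR export) and a much larger baseline of known binaries.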
On the other end of the spectrum, our attackers are using the same tools we use, but they are sneakier. They will try to make as few recognizable changes as possible. After all, when was the last time you went through all of the files when you audited a system? I know I don’t; I look at the logs first and move on if nothing is triggered. These attackers are so sophisticated that they have learned over the years where to hide files from our automated systems.
Network traffic is another key to their success. They generate as little network traffic as possible; the more they send, the higher the chance they get caught. We can easily detect data being exfiltrated at 1 Gbps, but at 10 Kbps, how many of you look for that? Not many.
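That low-and-slow pattern can be hunted with a simple aggregation over flow records. The record shape and the thresholds below are assumptions for illustration; real NetFlow/IPFIX data carries many more fields and needs baselining per destination.

```python
from collections import defaultdict

def sustained_low_rate(flows, min_seconds=3600, max_bytes_per_sec=50_000):
    """Flag destinations that received data for a long cumulative time at a
    low average rate -- the 'low and slow' exfiltration that volume-based
    alerts miss. flows: iterable of (dst, start, end, nbytes) records.
    Thresholds are illustrative placeholders, not tuned values."""
    totals = defaultdict(lambda: [0, 0.0])  # dst -> [total bytes, total seconds]
    for dst, start, end, nbytes in flows:
        totals[dst][0] += nbytes
        totals[dst][1] += end - start
    return sorted(dst for dst, (b, s) in totals.items()
                  if s >= min_seconds and b / s <= max_bytes_per_sec)
```

A burst at 100 MB/s for a minute passes this filter untouched, which is exactly the point: it is the long quiet trickle this looks for.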
These attackers will also install multiple backdoors on an infected system so they have persistence. If we remove one, they have more than one way to ensure they still have access. I’ve seen everything from very simple scripts in Windows install folders to very sophisticated setups where the files themselves seem legitimate.
The technical issues we face are:
-passwords
-securing the environment
-understanding the attacker’s goal
-passwords are simple
-they are weak and do not conform to strong-password rules, i.e. lower case, upper case, a number, and a special character
-password policies are not being enforced in the typical organization
-there is usually no second-factor authentication
-ensure users are following the policies outlined in your organization
-we rely on simple passwords that are easy to remember
-we reuse old passwords we are used to
-we use passwords made from easily guessable words such as Passw0rd
-we reuse passwords on multiple websites
-history has shown us, through sites like LinkedIn and Yahoo being hacked, that the passwords we use aren’t good enough and we never learn
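The complexity rules above can be expressed as a small check. This is a sketch under stated assumptions: the deny-list is a tiny placeholder for a real breached-password corpus, and the 12-character floor is my choice, not from the talk.

```python
import re

# Illustrative deny-list only; a real deployment would check against a
# large corpus of breached passwords (LinkedIn, Yahoo, etc.).
COMMON_PASSWORDS = {"password", "passw0rd", "123456", "letmein", "qwerty"}

def meets_policy(password, min_length=12):
    """Enforce the rules listed above: lower case, upper case, a number,
    and a special character, plus a length floor and a deny-list that
    catches easily guessable choices like Passw0rd."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    required = (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]")
    return all(re.search(pattern, password) for pattern in required)
```

Note that a check like this only helps if the policy is actually enforced at every credential-setting path, which is exactly what most organizations skip.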
-patching
We need to ensure patches for software installed on the system are kept current. This is critical to stopping APTs. An attacker’s goal is to target a system with a poor security posture. A typical organization takes about six months (180 days) to patch. In that time period a 0-day can easily be identified and used by an outside party.
-hardening
Secure your environment. Do not run more than you need to, and test the settings to ensure every update doesn’t break the stated goals of the organization.
-Testing
Ensure patching and hardening of the system work for end users as expected. Sometimes these two, in tandem or separately, can cause havoc among end users.
-Logging
Log everything; it is the best way to know what is happening within your organization.
-Aggregate Data
Multiple avenues of data give you a better look at your organization’s security posture.
-Build situational awareness
Training end users in the patterns and behaviors of how things should work is the best way to help ensure advanced persistent threats are prevented.
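One minimal way to aggregate data from multiple sources, as described above, is to merge per-source logs into a single chronological timeline. The (timestamp, source, message) record shape here is an assumption for illustration; a SIEM does this at scale, but the idea is the same.

```python
import heapq

def merge_timelines(*sources):
    """Merge several already-time-sorted streams of (timestamp, source,
    message) records into one chronological timeline, so a pattern spread
    across firewall, proxy, and host logs shows up in one place.
    heapq.merge streams lazily, so large logs need not fit in memory."""
    return list(heapq.merge(*sources, key=lambda record: record[0]))
```

With one timeline, a denied outbound connection followed seconds later by a new service install on the same host becomes visible, where either log alone would look unremarkable.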
-attackers want to gain persistence on your system; their goal is to have 24/7 access no matter what
-they want to be able to export data from your databases easily; we have seen this with LinkedIn
-attackers often use default and weak passwords to get into systems; we have seen this with the latest round of ransomware attacks, where systems were infected after RDP connections were compromised
-once an attacker has compromised a system, they will try to gain persistence on as many other systems as possible
-use group policies to lock down the system
-limit the network traffic a specific area or PC can generate; most setups do not need gigabit speeds for day-to-day tasks, and in the event of a compromise this alone can help limit its extent
-restricting system calls can also help, such as remote calls to the Microsoft SAM
-although this is difficult in many enterprises, since applications often rely on older versions of Java, Java is also full of holes that attackers use to compromise systems; disable it wherever possible
-in Office, disabling macros will limit the infections you may get within a given environment
-whitelisting extensions is also recommended, as you do not want to allow everything to run, only applications you trust
-monitor for odd patterns or behaviors that look out of the norm
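The extension-whitelisting idea can be sketched as a simple gate. The allowed set is illustrative; a real policy would come from the business and be enforced by mail and endpoint tooling, not a script.

```python
import os

# Illustrative whitelist -- trust only what the business actually needs.
ALLOWED_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".txt"}

def is_allowed(filename):
    """Whitelist check for the policy above: only explicitly trusted
    extensions pass. Double extensions like invoice.pdf.exe fail because
    only the final suffix is considered."""
    return os.path.splitext(filename)[1].lower() in ALLOWED_EXTENSIONS
```

The whitelist direction matters: denying known-bad extensions always lags behind attackers, while allowing known-good fails safe by default.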
NIST recommends the following
-use industry best practices such as ITIL
-scan for vulnerabilities regularly
-use Microsoft EMET to add exploit mitigations to your applications
-disable Telnet, as it sends your passwords in clear text; use SSH instead
-disable HTTP, as it too sends your passwords in clear text; use HTTPS instead
-ensure your applications do not use any clear-text passwords; this may be hard to verify, but a network sniffer such as Wireshark is useful here
-do not connect to any open WiFi networks
-for HTTPS, use modern TLS rather than SSLv3 (broken since POODLE); this will help ensure communications over your network are secure…ish
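On the last point: SSLv3 has been unsafe since POODLE (2014), so the practical reading of this advice is “enforce modern TLS.” A minimal sketch with Python’s standard ssl module, as one way a client can refuse the broken protocols:

```python
import ssl

# create_default_context() already disables SSLv2/SSLv3 and enables
# certificate verification; we also raise the protocol floor explicitly.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # requires Python 3.7+
```

Wrap any client socket with `ctx.wrap_socket(sock, server_hostname=host)` and the handshake will fail rather than silently downgrade to a legacy protocol.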
Understand your audience
Option 1 minimal user impact
-you want this when risk is an accepted trade-off: the client understands the exposure, and downtime from stricter controls is a bigger concern for them
Option 2 balanced end user impact
-this is where the majority of your clients will end up
-this is your goal
-you want to restrict your end users while still giving them the ability to do their day-to-day tasks, with some inconveniences
-you’ll find this also has the added bonus of end users reporting when policies break, as they will feel like you’re looking out for their best interests
Option 3 the hardened environment
-this is the most difficult to pull off
-end users will fight you on this, applications will break, and many rely on permissions they shouldn’t need, such as administrator access
-for instance, I had a client where any downtime was unacceptable, yet their policy was that every system must be heavily locked down
In the past year we were hit by ransomware variants such as:
Crysis
Locky
Odin
Cerber
Although many of these ransomware variants can now be decrypted for free, they have held organizations for ransom.
Fast forward to August 2016: we have a leak suspected to be orchestrated by Russia, releasing NSA tools.
-this confirms our worst fears: everything on the internet that we rely on is compromised.
RSA appears to be cracked, Cisco VPN is no longer secure, and the group behind the Equation Group leak says there are many more tools where those came from!
Fast forward to September, and we have the biggest DDoS attack in history, launched not by our nation-state friends but by hackers.
They attacked Krebs because of what his website reported.
-we have no real way to prevent this kind of attack; typically the only ways to stop a DDoS are to block it at the backbone, i.e. reroute the traffic, or to absorb it with an even bigger pipe
-we either learn from our mistakes
-or we are doomed to repeat them