This document discusses reducing social engineering risk through a strategic approach. It recommends tracking and rewarding reported incidents rather than punishing failures, using positive rather than negative reinforcement in awareness training, and taking a repeating, multi-phased approach: social engineering testing, penetration testing, incident response, policies/procedures, and education. Specific next steps proposed include implementing email spoofing protection (SPF, DKIM, DMARC), disabling HTML email, sandboxing the browser and email client, using browser plugins, and regularly simulating social engineering attacks to better prepare incident responders.
2. Shower Foo
What if I say I’m not like the others
What if I say I’m not just another one
of your plays
You’re the pretender
What if I say I will
never surrender
6. An exploitation of TRUST
Someone who can leverage the trust of their victim to gain access to sensitive information or resources, or to elicit information about those resources.
43. Strategic Next Steps
1. Alias for reporting incidents
2. Implement anti-email spoofing (SPF, DKIM, DMARC)
3. Disable HTML in SMTP (plaintext emails FTW)
4. Sandbox the browser and the email client
5. Browser plugins
6. Org wide web proxy
7. Alert on org relevant [phishing] domains
8. Customization of authN to mitigate cloning
9. Application whitelisting
10. Encrypt sensitive data (in transit & at rest)
11. Enforce a VPN when not on internal network
12. Perform regular simulated SE for a more prepared IR team
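Item 2 in the list above (SPF, DKIM, DMARC) can be sketched as DNS TXT records. The domain, IP range, selector name, and report address below are placeholders for illustration, not values from the talk:

```
; SPF: only the MX hosts and this IP range may send mail as example.com
example.com.               IN TXT "v=spf1 mx ip4:203.0.113.0/24 -all"

; DKIM: public key used to verify message signatures (selector "s1" is arbitrary)
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: reject mail that fails SPF/DKIM alignment; send aggregate reports
_dmarc.example.com.        IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Organizations typically start with `p=none` (monitor only) and move to `quarantine` or `reject` once the aggregate reports confirm legitimate mail passes.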
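Item 7 (alerting on org-relevant phishing domains) can be approximated with a small edit-distance check against the organization's real domain. This is a minimal sketch, assuming a feed of newly observed domains; `example.com` and the candidate list are invented for illustration:

```python
# Flag newly observed domains that look like our org's domain (typosquats).
# The org domain and the "observed" feed below are placeholder data.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suspicious(candidate: str, org_domain: str = "example.com",
               max_dist: int = 2) -> bool:
    """A small edit distance to the real domain suggests typosquatting."""
    return candidate != org_domain and levenshtein(candidate, org_domain) <= max_dist

# Candidates might come from certificate-transparency logs or new-domain feeds.
observed = ["examp1e.com", "exarnple.com", "totally-unrelated.org", "example.co"]
alerts = [d for d in observed if suspicious(d)]
```

In practice this would feed an alerting pipeline rather than a list comprehension, and real deployments also check homoglyphs and keyboard-adjacency swaps that plain edit distance underweights.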
Who has performed social engineering assessments?
Who has been a victim of social engineering? Are you sure?
Who has a concern that their organization may be or may have already been a victim of a successful social engineering attack?
The inspiration for this talk came from a few experiences during social engineering engagements.
Also from listening to the Foo Fighters in the shower.
Also from the pleasure and opportunity to work with one of the best social engineering talents in the industry.
Why don’t most organizations want to perform regular SE assessments?
Have they given up?
NEVER SURRENDER TO THE PRETENDERS
Christina – Aussie. First time in Pittsburgh. Already saw her first large pine cone, first white-tailed deer, first lightning bug. DEFCON SE CTF CHAMP! Going for as many rings as the Pittsburgh Steelers.
Rob – Pittsburgher. Born and raised here. Lives in ATL now. Started security as a hobby, now his profession, right here at PA2600 in the Pitt Student Union ~12 years ago. Lots of familiar faces here. Come ask me about my IRC handle and fun times at Summercon 2004.
DO IT!
As always, I’ll start with a definition of Social Engineering. Taken from social-engineer.org, it’s defined as “The act of manipulating people into performing actions or divulging confidential information.” It’s a blend of science, psychology and art that taps into basic human emotions and looks at why we react the way we do.
Social engineers study people. Truly committed social engineers will study a lot about body language, voice control, vocal indicators and group dynamics. It’s also a study of individual personality types that come out through body language and vocal cues.
A simpler definition of Social Engineering would be “an exploitation of TRUST”: someone who can leverage the trust of their victim to gain access to sensitive information or resources, or to elicit information about those resources.
The use of social engineering succeeds because it preys not on technology, but on the inherent weaknesses of the human component. This is done by manipulating the victim with messages that exploit your trust, pique your interests and desires, and evoke a range of strong human emotions such as fear, anxiety, and reward.
To make this simple – we are professional liars.
And people are vulnerable and lazy, and we want to be helpful and be noticed, making people an especially enticing target. As a really simple example: we can spend hours, weeks or months trying to brute-force our way to a password, when a phone call with the right pretext and the right questions can get you the same password, or more, in a few minutes.
And that’s exactly it. SE is the path of least resistance.
So, utilising techniques such as planning the right pretext (creating and using a contrived scenario), exploiting trust, and appealing to someone’s emotions often gets you that same piece of information. It is almost unbelievable what you can achieve by simply asking, looking or posing as someone.
And as software vendors get more and more secure and their products get harder to crack, the role of social engineering becomes greater. And so, we need to understand what it is a social engineer will try, how they will try it and what methodology they may use.
But let me tell you why I care. Social engineering is undoubtedly one of the weakest links in the domain of information security, simply because it is beyond technological control and subject to human nature. We can’t necessarily control the way each individual thinks and reacts, which makes it a much more challenging aspect of security to handle.
To keep it simple, people are the root of all evil, and we are the reason for all security issues.
And this is because there is NO patch for human stupidity.
When you combine people with technology, you will encounter problems. People are unreliable.
Technical systems are reviewed, scanned and pentested.
But…. How about people? How do we measure vulnerabilities in people?
We don’t. At least we don’t do it effectively.
We like to make people feel shameful. We like to pass blame. We like to IGNORE the problem while not doing anything to help the situation.
I once had a client tell me that they did not want to do a social engineering test simply because they KNEW that they would be vulnerable.
And when you combine people with technology, you get this big blob of mess.
People are unreliable. People fall victim to basic psychological and emotional needs, and people can be manipulated and persuaded.
To break this down, there are six useful frames, known as Cialdini’s six principles of influence, when it comes to SE and effective persuasion:
Authority- we tend to be influenced by authority positions
Liking – We’re influenced by those we like
Social Proof – We look at others to determine good behaviour
Scarcity – Value is tied to availability
Reciprocation – We feel like we have an obligation to return what others provide (favours)
Commitment and Consistency – We are pressured to remain consistent with prior engagements
On one engagement at my old job, I posed as an internal employee of a law enforcement agency. After doing some recon, I found out that the target I had chosen was on holidays, so I crafted a pretext on the information I knew: I had to come back early from my holiday in Hawaii (which I found posted on her Facebook) to urgently finish a report for my manager. When you create that sense of urgency, people are naturally inclined to help. From there I managed to get ahold of the IT department using the same sense of urgency, and praising them for being so helpful. I said:
“I really need to finish this report but oh no.. I’ve forgotten both my domain and email passwords because I’ve been on holidays – I’m so sorry! – Can you please help me out?”
And what was most surprising is that I got exactly what I wanted: both passwords read to me over the phone with absolutely NO cross-checking that I was who I actually said I was, other than me being the same gender as my target. What was amusing is that the guy, let’s call him Mike, said “I’m not supposed to do this, but….” and gave me both passwords. He also asked me what I wanted to set them to, and I said “something simple..”, and he said “Ok, sure, I’ll set it to ‘Password1’ but put a dollar sign in front just to be a bit more secure.” Thank you, Mike!
But the point of this is: it works because most people trust others by default and respond well to social rewards. Many people, especially customer service agents, help desk receptionists, and business assistants or secretaries who are trained to assist people and not to question the validity of each request, tend to trust others and are naturally helpful.
+ Naomi Wolf story / Robin Sage. There’s this inherent trust we put into social media, and it’s absolutely terrifying what you can pull out of it.
And so, to demonstrate this in a simple attack model: gather the right information, develop a relationship with whoever your target is (be it through small talk or a common interest), exploit that trust, and execute your attack.
Enough about the fluffy stuff. What are we doing wrong? Why is this still such a big issue?
Since everyone loves statistics: by rough estimate, almost 50% of enterprises have been victim to SE attacks, even though most IT and security professionals are aware of the risk, but aren’t doing enough to prevent or defend against it. And regardless, SE still has a high success rate through simple means like phishing phone calls.
But it is also important to point out that due to human factors, “knowing better but not doing better” is one of the key issues that has not been fully addressed, particularly in the IS domain.
The answer to that question is everything. We are doing a lot wrong.
We like to…..
But is this really helping?
Stop using negative reinforcement. Rubbing their nose in it like a dog is demeaning and degrading.
Use positive reinforcement. If a person at the organization reported an incident, track that, reward them, and make them a good example for others to follow.
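The track-and-reward idea above can be sketched in a few lines. This is a hypothetical illustration; the reporting-alias data and names are invented, not from the talk:

```python
# Positive-reinforcement tracking sketch: tally phishing reports per
# employee (e.g. from the reporting alias) and pick a "reporter of the
# month" to reward publicly. All data below is made up for illustration.
from collections import Counter

reports = [  # (reporter, reported_message_id)
    ("alice", "msg-101"),
    ("bob",   "msg-102"),
    ("alice", "msg-103"),
    ("carol", "msg-104"),
    ("alice", "msg-105"),
]

tally = Counter(reporter for reporter, _ in reports)
top_reporter, count = tally.most_common(1)[0]
print(f"Reporter of the month: {top_reporter} ({count} reports)")
```

The mechanism matters less than the incentive: reporting becomes easy, default, and rewarding, which is exactly the framing in the lines above.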
How do you get folks on the defensive?
Make it easy.
Make it default.
Make it rewarding.
An organization comes to us wanting to develop SE defenses for their customer support representatives (CSRs). They have 4000 CSRs: some in the US, some offshore, some in-house, some external third parties. Currently, training and simulated SE scenarios are performed ad hoc, and they have had some incidents recently:
1. Attackers are calling in and coercing CSRs into giving them access to PII and/or access to customer accounts. (Risk #1)
2. The CSRs are regularly receiving emails that are actually phishing scams; an email that prompts them to reset their email password is a regular occurrence. (Risk #2)
3. Occasionally the CSRs get malware infections on their terminals; the source is not always clear, but it causes downtime while the machine is re-imaged. (Risk #3)