2. Bodacea Light Industries, 2019
Yeah. Me.
• Lifetime includes: Big data. Submarines. Robots. Cold
War information systems. Large-scale data systems.
Traffic. Human-robot cooperation. Crisismapping.
Developing-world community-driven data. Consultant to
Fortune 100s. Pyrotechnics. Adtech. Some other stuff.
• Has a ‘big ideas’ habit (the last one was “change the way
NGOs use data”). The current big idea is that misinformation
control is very similar to infosec: it’ll develop in a similar
way, and we’ll need to use similar (ongoing forever)
patterns
3. What I do all day
• URL-based misinformation:
• Global Disinformation Index: data science
• Message-based misinformation:
• Credibility Coalition: Misinfosec WG
• Sofwerx/Arizona: misinfo red team exercise
• Sofwerx: misinfo alerting design
• Misinfosec: community
• MLsec, book etc
4. (my) Misinformation Hacking
• Understand it
• Try to stop it happening
• Know when someone’s trying to do it
• Know when someone’s done it
• Respond to it
• (try to) stop it happening again
9. Nationstates: Qanon campaigns
“Action: continuous barrage of
memes. All SM platforms
Hashtags: #HRCvideo
#releasethevideo #maga #QAnon
Use top trending hashtags along
with your posts. Share and
retweet as much as possible”
23. Individual: report trolls/botnets
“Twitter (reportedly)
suspended over 70 million
accounts”
“Facebook created a human
crisis team after algorithms
failed it”
44. Asking questions
• Is there unusual activity on hashtag x, topic y, platform z?
• What are ‘known’ bots talking about today?
• What’s the chatter on 8chan, 4chan, r/The_Donald,
RussiaToday, etc.?
• What are the misinformation creators trying to do? What
artifacts are they likely to leave when they do it?
• What are the other trackers getting excited about today?
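The first question above ("is there unusual activity on hashtag x?") can be sketched as a simple volume check: compare today's count against the hashtag's recent history. The counts and threshold below are invented for illustration.

```python
# Sketch: flag "unusual activity" on a hashtag by comparing today's tweet
# count against its recent history. All numbers here are made up.
from statistics import mean, stdev

def is_unusual(history, today, threshold=3.0):
    """Return True if today's count is more than `threshold` standard
    deviations above the historical mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

# Baseline of daily counts for a hashtag, then a sudden spike:
baseline = [120, 140, 110, 135, 125, 130, 115]
print(is_unusual(baseline, 118))   # normal day -> False
print(is_unusual(baseline, 900))   # spike -> True
```

Real monitoring would need to account for daily and weekly seasonality, but a z-score over a rolling window is a reasonable first pass.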
45. Getting your own data
Trollbot lists:
• https://botsentinel.com/
Tools:
• https://github.com/IHJpc2V1cCAK/socint
• https://labsblog.f-secure.com/2018/02/16/searching-twitter-with-twarc/
Existing datasets:
• https://github.com/bodacea/misinfolinks
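The twarc route above, sketched in Python. The API call is commented out because it needs Twitter credentials; the invented `sample` list stands in for its output, using the shape of Twitter's v1.1 tweet JSON (`entities` → `hashtags`). `hashtag_counts` is an illustrative helper, not part of twarc.

```python
# Sketch: pull tweets with twarc (per the F-Secure post above), then count
# the hashtags they use. Sample data stands in for a real API call.
from collections import Counter

# from twarc import Twarc
# t = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)
# tweets = t.search("#QAnon")

sample = [
    {"entities": {"hashtags": [{"text": "QAnon"}, {"text": "maga"}]}},
    {"entities": {"hashtags": [{"text": "QAnon"}]}},
    {"entities": {"hashtags": []}},
]

def hashtag_counts(tweets):
    """Count hashtag usage (case-folded) across a batch of tweets."""
    counts = Counter()
    for tweet in tweets:
        for tag in tweet.get("entities", {}).get("hashtags", []):
            counts[tag["text"].lower()] += 1
    return counts

print(hashtag_counts(sample).most_common(2))  # [('qanon', 2), ('maga', 1)]
```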
52. And then the rest of the DS cycle
• Explore: Go play. Pull some troll data, and think about
what you’d like to know about it. Look at the hours the
trolls tweet at, the topics, the hashtags. Do they repeat
each other at all? Are there patterns? Think about
names, dates, followers/following, profiles. Are they on
existing “naughty lists”? Etc.
• Model
• Iterate
• Explain (who to?)
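A minimal version of the "explore" step above, in plain Python: given troll tweets with timestamps and text, look at the hours they post and whether they repeat each other. The rows are invented for illustration.

```python
# Explore a small batch of (invented) troll tweets: posting hours, and
# texts that more than one account repeats.
from collections import Counter
from datetime import datetime

tweets = [
    {"user": "troll_a", "time": "2018-07-04 09:15", "text": "HRC video is out"},
    {"user": "troll_b", "time": "2018-07-04 09:47", "text": "HRC video is out"},
    {"user": "troll_a", "time": "2018-07-04 10:02", "text": "release the video"},
    {"user": "troll_c", "time": "2018-07-05 09:30", "text": "HRC video is out"},
]

# Which hours do they tweet at? (Troll farms often cluster in office hours.)
hours = Counter(
    datetime.strptime(t["time"], "%Y-%m-%d %H:%M").hour for t in tweets
)
print(hours.most_common())  # hour 9 dominates

# Do they repeat each other? Texts posted by more than one account:
by_text = {}
for t in tweets:
    by_text.setdefault(t["text"], set()).add(t["user"])
repeated = {text: users for text, users in by_text.items() if len(users) > 1}
print(repeated)
```

In practice you would do this over thousands of rows with pandas, but the questions are the same.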
56. Include misinfo in infosec definitions?
“Prevention of damage to, protection of, and
restoration of computers, electronic
communications systems, electronic
communications services, wire communication,
and electronic communication, including
information contained therein, to ensure its
availability, integrity, authentication,
confidentiality, and nonrepudiation”
- NSPD-54
57. Mapping Parallels
• As Information Security (Gordon, Grugq)
• Via Information Operations / Influence Operations (Lin etc)
• As a form of conflict
65. Zooming out (aka naming things is hard)
• Campaigns: advanced persistent threats
• e.g. Internet Research Agency, 2016 elections
• Incidents
• e.g. Columbian Chemicals
• Failed attempts
• ?
66. 2014 Columbian Chemicals incident
• Summary: Early Russian (IRA) “fake news” stories. Completely fabricated; very short lifespan.
• Actor: probably IRA (source: recordedfuture)
• Timeframe: Sept 11 2014 (1 day)
• Presumed goals: test deployment
• Artifacts: text messages, images, video
• Method:
• 1. Create messages. e.g. “A powerful explosion heard from miles away happened at a chemical
plant in Centerville, Louisiana #ColumbianChemicals”
• 2. Post messages from fake twitter accounts; include handles of local and global influencers
(journalists, media, politicians, e.g. @senjeffmerkley)
• 3. Amplify, by repeating messages on twitter via fake twitter accounts
• Result: limited traction
• Counters: None seen. Fake stories were debunked very quickly.
• Related attacks: These were all well-produced fake news stories, promoted on Twitter to
influencers through a single dominant hashtag -- #BPoilspilltsunami, #shockingmurderinatlanta,
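The incident writeup above maps naturally onto a structured record, which makes incidents comparable and queryable across campaigns. The field names below are illustrative, not a published schema.

```python
# The Columbian Chemicals incident as a machine-readable record.
# Field names are illustrative; values come from the writeup above.
incident = {
    "name": "Columbian Chemicals",
    "date": "2014-09-11",
    "duration_days": 1,
    "actor": "IRA (probable)",
    "goals": ["test deployment"],
    "artifacts": ["text messages", "images", "video"],
    "methods": ["create messages", "post via fake accounts",
                "amplify via retweets"],
    "result": "limited traction",
    "counters": [],
}

def uses_artifact(inc, artifact):
    """True if the incident record lists this artifact type."""
    return artifact in inc["artifacts"]

print(uses_artifact(incident, "video"))  # True
```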
Misinformation is deliberately false information. One example is the “fake news” sites above, which contain misinformation that’s used to gain advertising money, with clickbait tweets bringing people to them. Some of these currently carry the typical aliens-and-health-cures material, but many are political, trading on strong emotions like fear and on useful divisions in society.
Image: screenshot of http://www.sawthis.one/ 2018-07-08
Misinformation is also moving from online to offline. Several times now, misinformation actors have sent invites to opposing groups to demonstrate at the same time in the same place.
https://twitter.com/JuliaDavisNews/status/994704834577215495
https://twitter.com/donie/status/957246815056908288
Misinformation is information that’s deliberately false (actually that’s disinformation, but “misinformation” as a term won). The smallest form of online misinformation is ‘joke’ viral content, for example in every disaster there’s someone who puts up an image of a shark in the street.
Image: http://www.politifact.com/truth-o-meter/statements/2017/aug/28/blog-posting/there-are-no-sharks-swimming-streets-houston-or-an/ and pretty much any major US disaster
And then, if you look, you can find organising pages for campaigns. Here are two Qanon “meme war” organising pages. Qanon is a major group, but it’s just one of many. Note that these are from March/April, and have a specific date on them, targeting a specific event.
Familiarity backfire effect
Memory traces
Emotions = stronger traces
Here are some common brain vulnerabilities. My favourites are the familiarity backfire effect, where if you repeat a message containing a negative, people remember the message without the negative; and the way that when people read, they take false information in as true before rejecting it, and in that fraction of a second build other assertions off the false information, even if they *know* the original information is false.
Online misinformation is huge. A few hundred trolls and thousands of bots can affect millions of people at a time.
This is the scale that nationstate-run groups and pages, dedicated to creating division and confusion, typically work at.
Here are some of the Russian-owned Facebook groups shown to Congress: these high volumes of shares and interactions might include a lot of botnet activity, but are still not insignificant.
This stuff is everywhere online: the expected places (FB, twitter, reddit, eventbrite, medium etc) but also comment streams, payment and event sites.
Social media buys reach and scale. 100 good bots = long game; 10,000 bad ones = short but effective.
You can also use other advertising techniques, and things like that familiarity backfire. Botnets are very useful for this, and very cheap, at about $150 for a difficult-to-find “aged” set, to a few dollars per thousand for Russian recent bots. Buy the bots, use any of the handy online guides to set them up messaging or retweeting etc, or use some simple pattern matching or AI to make them harder to find.
One big weakness for attackers is that they have to tell you about themselves. They leave a lot of “artefacts” - ways to find them.
botsentinel.com
Here are some of them, including hashtags, URLs, adverts. A simple media search with twitter, tweetdeck etc will find a lot of these. On the right are the artifacts tracked as part of the Canadian elections.
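Those artefacts (hashtags, URLs, @-mentions of influencers) can be pulled out of raw message text with a crude extractor. The regexes below are deliberately simplified sketches, not production-grade parsers; the example message is adapted from the Columbian Chemicals incident above.

```python
# Crude artefact extractor: pull hashtags, URLs, and mentions out of raw
# message text. Regexes are simplified for illustration.
import re

def extract_artifacts(text):
    return {
        "hashtags": re.findall(r"#\w+", text),
        "urls": re.findall(r"https?://\S+", text),
        "mentions": re.findall(r"@\w+", text),
    }

msg = ("A powerful explosion heard from miles away happened at a chemical "
       "plant in Centerville, Louisiana #ColumbianChemicals "
       "http://example.com/story @senjeffmerkley")
print(extract_artifacts(msg))
```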
There’s also a lot of content in fact-check sites (Snopes etc.); if you have the resources, it’s also possible to pay someone to go look at an area being discussed.
Sometimes misinformation propagation is more subtle. These are a good place to look for that too.
You *can* report to platforms. So far this has been pretty underwhelming, but if we did it at scale, it could be interesting.
What would be good in an ideal system includes:
• Realtime botnet removal
• Realtime troll dampening
• Etc.
But that’s not where we are, so here are some other options.
Two things: advertising works by putting adverts into slots on pages. We can track unlabelled political ads, we can see the fake-news pages and the pages associated with them, and we can see botnets going to pages to drive up their ad revenue. For communities: you can report the ads on fake pages to the brands behind them.
And as an individual, there are still things you can do. One of these is to work with other people to block misinformation sources and channels. Many anti-harassment apps can be repurposed for this.
My favourite communities are the Lithuanian elves. Formed as an anonymous online group. They fight back every day against Russian misinformation, using a combination of humour and facts. It seems to be working.
Other cool things to do include overwhelming misinformation hashtags with other content, and hacking search terms to make disambiguation pages appear above misinformation sites.
Another group that’s got some traction is VOST (Virtual Operations Support Team), which supports responders in disasters: VOST Panama also used humour and “fake stamps” to counter misinformation, and helped me run a deployment on this during Hurricane Irma (when people also reported misinformation to FEMA and Buzzfeed).
You can also help in rebuilding damaged communities: this is The Commons Project, which uses a combination of bots, humans, and peace techniques for this.
https://medium.com/@josh_emerson/ira-midterms-part-two-collection-of-russian-troll-factory-instagram-memes-5b3492108aa6
Josh is a good pointer to other people: “Meet the Indiana dad who hunts Russian trolls” (CNN Politics); also https://twitter.com/josh_emerson, medium.com/@josh_emerson, and u/eye_josh on Reddit.
The cyber attack lifecycle, with ATT&CK phases:
• Persistence – Any access, action, or configuration change to a system that gives an adversary a persistent presence on that system. Adversaries will often need to maintain access to systems through interruptions such as system restarts, loss of credentials, or other failures.
• Privilege Escalation – The result of techniques that cause an adversary to obtain a higher level of permissions on a system or network. Certain tools or actions require a higher level of privilege to work and are likely necessary at many points throughout a remote operation.
• Defense Evasion – Techniques an adversary may use for the purpose of evading detection or avoiding other defenses.
• Credential Access – Techniques resulting in the access of, or control over, system, domain, or service credentials that are used within an enterprise environment.
• Discovery – Techniques that allow an adversary to gain knowledge about a system and its internal network.
• Lateral Movement – Techniques that enable an adversary to access and control remote systems on a network. Often the next step for lateral movement is remote execution of tools introduced by an adversary.
• Execution – Techniques that result in execution of adversary-controlled code on a local or remote system.
• Collection – Techniques used to identify and gather information, such as sensitive files, from a target network prior to exfiltration.
• Exfiltration – Techniques and attributes that result or aid in an adversary removing files and information from a target network. This category also covers locations on a system or network where an adversary may look for information to exfiltrate.
• Command and Control – Techniques and attributes of how adversaries communicate with systems under their control within a target network. Examples include using legitimate protocols such as HTTP to carry C2 information.
I’m leading a team working on writing a misinformation equivalent to the ATT&CK TTP framework.
<Add zoomed-in part of ATT&CK>
… and we have to start filling these out…
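Structurally, an ATT&CK-style framework is just ordered stages, each holding techniques, each technique listing observable artefacts. The stage and technique names below are invented for illustration (drawn from the incidents in this talk), not the team's actual framework.

```python
# Sketch of an ATT&CK-style matrix for misinformation: stages -> techniques
# -> observable artefacts. All names here are illustrative, not a published
# framework.
STAGES = {
    "create_content": [
        {"name": "fabricated news story", "artifacts": ["text", "images", "video"]},
        {"name": "meme creation", "artifacts": ["images"]},
    ],
    "seed": [
        {"name": "post from fake accounts", "artifacts": ["new accounts", "hashtags"]},
        {"name": "tag influencers", "artifacts": ["@-mentions"]},
    ],
    "amplify": [
        {"name": "botnet retweets", "artifacts": ["retweet bursts"]},
        {"name": "hashtag flooding", "artifacts": ["trending hashtags"]},
    ],
}

def techniques_for(stage):
    """List technique names available at a given stage."""
    return [t["name"] for t in STAGES.get(stage, [])]

print(techniques_for("amplify"))  # ['botnet retweets', 'hashtag flooding']
```

The value of the structure is the same as in ATT&CK itself: once techniques are named, incidents can be described as paths through the matrix, and defences mapped against specific cells.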
Image: SANS sliding scale of cyber security
https://www.newyorker.com/magazine/2018/05/07/the-digital-vigilantes-who-hack-back
There are still a lot of bots out there, but tactics, techniques and procedures are changing rapidly: we’re starting to see an early-infosec-style split into script-kiddie style crude botnets and more carefully crafted responsive bots.
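The crude end of that split is often catchable with simple heuristics: very high tweet rates, very young accounts, near-identical content. A toy scorer (thresholds are arbitrary illustrations, not tuned values), which the more carefully crafted bots are exactly designed to evade:

```python
# Toy bot-likeness scorer. Thresholds are arbitrary illustrations;
# crafted, responsive bots will sit below all of them.
def bot_score(tweets_per_day, account_age_days, duplicate_ratio):
    """Return a 0-3 score; higher = more bot-like."""
    score = 0
    if tweets_per_day > 100:      # humans rarely sustain this rate
        score += 1
    if account_age_days < 30:     # freshly created account
        score += 1
    if duplicate_ratio > 0.8:     # mostly repeats the same text
        score += 1
    return score

print(bot_score(tweets_per_day=500, account_age_days=5, duplicate_ratio=0.95))  # 3
print(bot_score(tweets_per_day=12, account_age_days=2000, duplicate_ratio=0.1))  # 0
```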
image: https://medium.com/@MediaManipulation/tracking-disinformation-by-reading-metadata-320ece1ae79b