Reflections on the Online Harms White Paper, published in April 2019: https://www.gov.uk/government/consultations/online-harms-white-paper
These slides were presented as part of a panel. Workshop agenda: https://gallery.mailchimp.com/6a29a22efa92c19681485a0ee/files/f3d318a3-978e-4977-be85-971ecb97ca13/Child_Safety_Online_Agenda_v33.pdf
2. Regulating Accounts Removal
• A hostile involuntary-celibate (incel) subreddit with 40K subscribers was finally banned on 7 November 2017.
• Many of its members simply moved to a new forum, Braincels, which was itself banned a year later, in October 2018.
“… steps companies should take to proactively identify accounts showing indicators of CSEA activity and ensure children are protected from them, including disabling accounts and informing law enforcement where appropriate.”
“Steps to prevent banned users creating new accounts in order to continue to make inappropriate content which violates terms of service.”
Online Harms White Paper, April 2019 (7.10, 7.24)
Misogyny on Reddit
Farrell, T., Fernandez, M., Novotny, J., Alani, H. Exploring Misogyny across the Manosphere in Reddit. ACM Web Science Conf. (WebSci’19), 2019.
4. “… we removed 8.7 million pieces of content on Facebook that violated our child nudity or sexual exploitation of children policies, 99% of which was removed before anyone reported it. We also remove accounts that promote this type of content.”
https://newsroom.fb.com/news/2018/10/fighting-child-exploitation/
“… find accounts that engage in potentially inappropriate interactions with children on Facebook so that we can remove them and prevent additional harm.”
Regulating Accounts Removal
• Is there a shared and agreed definition of “inappropriate interactions with children”?
• How are such interactions detected and reported by the different platforms?
• How could platforms cooperate to track cross-platform interactions?
• How many such interactions need to be present for an account to be removed?
• What steps are taken to stop these accounts from resurfacing?
• How many accounts were removed? For what offence?
5. Reporting Harm
• What can be reported? By whom?
• How safe do users, and especially children, feel about reporting content and other users?
• What is best practice in reporting-service design?
• How many reports were made? About what?
• How many reports were issued by children?
• How long did it take to process and act on each type of report?
• What information was given back to the reporters?
Children’s experiences of online harm and what they want to do about it, 27th August 2019
https://www.childrenscommissioner.gov.uk/2019/08/27/childrens-experiences-of-online-harm-and-what-they-want-to-do-about-it/
“The children were particularly keen to discuss the often violent nature of many online gaming platforms, and the potential repercussions of reporting abusive behaviour.”
“The younger students told us that the reporting process could be difficult for them to navigate.”
6. “In November 2018, the Home Secretary co-hosted a ‘hackathon’ event in the US with Microsoft and a range of other tech companies, where they worked to develop a new AI product to detect online grooming of children. Hackathon participants analysed tens of thousands of conversations to understand patterns used by predators. This enabled engineers to develop technology to automatically and accurately detect these patterns.”
Online Harms White Paper, April 2019
AI to Detect Online Grooming
• It is unclear how current detection methods were developed and evaluated, and on what data.
• How explainable are the AI results, and how is the AI-human interaction managed on the platform?
• Such technology requires continuous development and updating.
• Agreed quality benchmarks and gold-standard datasets are needed (see the sketch below).
• Developing grooming-detection AI that is accurate and works across multiple platforms, languages, harmful content types, and scenarios is quite challenging.
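To make the benchmarking point concrete, here is a minimal sketch of what a shared evaluation would look like: scoring a detector's output against a gold-standard label set with standard precision/recall metrics. The labels, the predictions, and the framing as a binary task are all illustrative assumptions, not anything from the White Paper or the hackathon.

```python
# Minimal sketch: evaluating a grooming detector against a gold-standard
# benchmark. Labels and predictions below are invented for illustration.
from sklearn.metrics import precision_recall_fscore_support

gold = [1, 0, 1, 1, 0, 0, 1]  # hypothetical benchmark labels (1 = grooming)
pred = [1, 0, 1, 0, 0, 1, 1]  # hypothetical detector output on the same conversations

p, r, f1, _ = precision_recall_fscore_support(gold, pred, average="binary")
print(f"precision={p:.2f}  recall={r:.2f}  F1={f1:.2f}")
```

Agreed datasets of this kind are what would let a regulator compare detectors across platforms on equal terms.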
7. Grooming Behaviour
Predator: hey whats up?…
Predator: I like your pic, very cute
Predator: so you're in san diego?
13-yr-old-girl: not far
Predator: ok, you like older guys?
13-yr-old-girl: thers nice or bad ppl all ages
Predator: have some pics if you want to see
Predator: do your parents look on your computer?
Predator: so are you by yourself or is someone else there with you?
Predator: so it should just be us, our little secret
Predator: so have you ever snuck out?
13-yr-old-girl: not rlly lol
Predator: yeah, what about tonight?
Predator: think you could sneak out tonight?
Predator: well if the wrong person found out then I'd be screwed
13-yr-old-girl: im not a teller lol
Predator: I know, just wouldn't want your dad to find out
Predator: if you are still up why not sneak out for a few minutes
Predator: but that's the fun of it
13-yr-old-girl: fun to sneak?
Predator: yes
Predator: so your dad doesn't know
Predator: would take a nap but I leave for bible study around 6:30
Predator: I know I'm bad, going to bible study and talking about sex with you
Predator: yeah, there's nothing wrong with us being friends, we have the same lord
remember ;)
Predator: would take me like an hour and a half to get there
Predator: see you in a little while
Extracts from a ~700-message conversation, spanning 5 months, between a groomer and an adult pretending to be a 13-year-old girl.
Cano, A., Fernández, M., Alani, H. Detecting child grooming behaviour patterns on social media. Int. Conf. Social Informatics (SocInfo), 2014, Barcelona, Spain.
• Children often struggle to identify complex grooming behaviour.
• Most detection algorithms look for the appearance of words, not behaviours.
• Most methods check individual messages, not threads (see the sketch after this slide).
[Figure: stages of grooming (after Olson et al.): approach, grooming, trust development, isolation, physical approach]
Olson, L. N., Daggs, J. L., Ellevold, B. L., Rogers, T. K. K. Entrapping the innocent: Toward a theory of child sexual predators’ luring communication. Communication Theory, 17(3):231–251, 2007.
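To make the thread-versus-message point concrete, here is a minimal sketch of a thread-level approach: tag each message with a grooming stage, then flag a conversation only when several distinct stages co-occur. The training snippets, stage labels, and the bag-of-words classifier are all illustrative assumptions; this is not the Cano et al. (2014) pipeline.

```python
# Sketch: thread-level grooming-stage tagging. NOT the Cano et al. (2014)
# method; toy training data and labels are invented for illustration only.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: messages annotated with a grooming stage.
TRAIN = [
    ("hey whats up?",                                       "approach"),
    ("I like your pic, very cute",                          "trust_development"),
    ("so you like older guys?",                             "grooming"),
    ("have some pics if you want to see",                   "grooming"),
    ("do your parents look on your computer?",              "isolation"),
    ("so it should just be us, our little secret",          "isolation"),
    ("think you could sneak out tonight?",                  "physical_approach"),
    ("would take me like an hour and a half to get there",  "physical_approach"),
]

texts, stages = zip(*TRAIN)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, stages)

def flag_thread(messages, min_stages=3):
    """Tag each message with a predicted stage and flag the thread when
    several distinct stages co-occur, i.e. score behaviour across the
    whole conversation rather than judging isolated messages."""
    seen = Counter(clf.predict(messages))
    return len(seen) >= min_stages, dict(seen)

thread = ["hey whats up?",
          "do your parents look on your computer?",
          "think you could sneak out tonight?"]
print(flag_thread(thread))
```

The design point is the aggregation step: a single message is weak evidence on its own, but an approach message, an isolation message, and a physical-approach message in one thread together match the staged pattern described by Olson et al.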
8. Social Influence
The more radicalisation content a user is exposed to and shares, the more likely they are to adopt similar language over time.
Fernandez, M., Gonzalez-Pardo, A., Alani, H. Radicalisation Influence in Social Media. Journal of Web Science, 2019.
• The network is the essence of social media platforms.
• Harm (e.g., anorexia, extremism, misinformation, abuse) propagates across the network and influences recipients over time.
• Need to protect and alert users to harmful influences and influencers.
• Monitor and regulate the use of networking recommendation algorithms.
Individual influence: similarity of a user’s own content to radicalisation terminology.
Social influence: similarity of retweeted content to radicalisation terminology. (Both signals are sketched below.)
Tools to measure radicalisation influence and behaviour
https://trivalent-project.eu
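A minimal sketch of the two influence signals, assuming a plain term list stands in for a radicalisation lexicon and TF-IDF cosine similarity as the similarity measure; Fernandez et al. (2019) define their own measures, so treat everything here as illustrative.

```python
# Sketch of the two influence signals: similarity of a user's own posts,
# and of the posts they retweeted, to a radicalisation lexicon.
# LEXICON, the TF-IDF representation, and cosine similarity are all
# illustrative assumptions, not the published method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

LEXICON = ["placeholder", "lexicon", "terms"]  # hypothetical term list

def influence_scores(own_posts, retweeted_posts):
    docs = [" ".join(own_posts), " ".join(retweeted_posts), " ".join(LEXICON)]
    vecs = TfidfVectorizer().fit_transform(docs)
    individual = cosine_similarity(vecs[0], vecs[2])[0, 0]  # own content vs lexicon
    social = cosine_similarity(vecs[1], vecs[2])[0, 0]      # retweets vs lexicon
    return individual, social
```

Tracking the two scores over time is what would let one test the claim above: whether a rise in social influence precedes a rise in individual influence.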
9. Exposing Vulnerability
“All users, children and adults, should be empowered to understand and manage risks so that they can stay safe online.” Online Harms White Paper 2019 (9.1)
“Offenders may target children based on vulnerabilities such as mental health, or by exploiting publicly available information from their social media profiles.” Online Harms White Paper 2019 (Box 25)
• How can we empower children?
• What tools can we develop to help children, and parents, assess vulnerability?
• How could platforms raise their users’ awareness of their own vulnerabilities?
https://coinform.eu/
Mensio, M., Alani, H. MisinfoMe: Who’s Interacting with Misinformation? International Semantic Web Conference (ISWC 2019).
10. Chatbots for Tackling Online Abuse
• A viability study of chatbots as a communication channel for tackling online abuse.
• The situations and contexts in which users would consider using the chatbot.
• Understanding socio-technical requirements.
• Acquiring users’ perspectives and expectations.
• Dialogues: vocabulary and formality.
“When asked how they learn to stay safe online, both groups were relatively critical of their safety lessons, feeling disengaged by lengthy and repetitive talks.”
Children’s experiences of online harm and what they want to do about it, 27th August 2019
11. Summary
• Vulnerable users are unaware of their vulnerability.
• Need for more effective and collaborative blocking of offenders.
• Regulating some platforms and not others only shifts the problem.
• Continuous and collaborative development of AI detection methods.
• Empower children with better and easier tools and procedures.
• Modern and engaging intelligent educational tools.