Towards Contested Collective Intelligence
Simon Buckingham Shum, Director, Connected Intelligence Centre, University of Technology Sydney
This talk is to open up a dialogue with the important work of the SWARM project. I’ll introduce the key ideas that have shaped my work on interactive software tools to make thinking visible, shareable and contestable, some of the design prototypes, and some of the lessons we’ve learnt en route.
1. Simon Buckingham Shum
Connected Intelligence Centre • University of Technology Sydney
@sbuckshum • http://utscic.edu.au • http://Simon.BuckinghamShum.net
Towards Contested Collective Intelligence
or… A tour of the CI design space for Hypermedia Discourse
University of Melbourne • SWARM Project, 12th Sept. 2017
2. Contested Collective Intelligence...
In wicked problems, there is no master
worldview, ontology or logic
So disagreement is a necessary process and vital ingredient
We can disagree well or badly
CI tools should scaffold and improve this process
(e.g. amplify awareness of how stakeholders are framing the problem, reading
the signals, seeing connections, and judging success)
De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation
Study. Computer Supported Cooperative Work, 21, (4-5), pp. 417-448. http://doi.org/10.1007/s10606-011-9155-x
13. Dilemma
If users are required to structure
their contributions to a CI
repository, the effort must
provide tangible benefit
(not just potential benefits to future stakeholders)
14. Solution
(in small synchronous settings)
A skilled mapper resolves the
cost-benefit tradeoff, adding
immediate value to the
sensemaking
15. Issue Mapping (or, in real-time meetings: Dialogue
Mapping), based on Horst Rittel’s IBIS scheme
Buckingham Shum, S. (2003). The roots of computer supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing
Argumentation (pp. 3–24). London: Springer. ePrint: http://bit.ly/VizArgRoots
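The IBIS scheme behind Issue/Dialogue Mapping can be sketched as a small typed graph. This is an illustrative toy model (the node and link names follow Rittel's scheme, but the class and function names are hypothetical, not Compendium's actual data model):

```python
from dataclasses import dataclass, field

# IBIS node types: Issues (questions), Positions (candidate answers),
# and Arguments (pros and cons).
NODE_TYPES = {"issue", "position", "argument"}

# Legal IBIS moves: a Position responds to an Issue; an Argument
# supports or challenges a Position; an Issue can question any node.
LEGAL_LINKS = {
    ("position", "responds-to", "issue"),
    ("argument", "supports", "position"),
    ("argument", "challenges", "position"),
    ("issue", "questions", "issue"),
    ("issue", "questions", "position"),
    ("issue", "questions", "argument"),
}

@dataclass
class Node:
    label: str
    kind: str  # one of NODE_TYPES

@dataclass
class IbisMap:
    links: list = field(default_factory=list)  # (src, verb, dst) triples

    def link(self, src: Node, verb: str, dst: Node) -> bool:
        """Add a link only if the IBIS grammar allows the move."""
        if (src.kind, verb, dst.kind) in LEGAL_LINKS:
            self.links.append((src, verb, dst))
            return True
        return False

# Example: map a small design discussion.
issue = Node("How should we heat the building?", "issue")
pos = Node("Install heat pumps", "position")
con = Node("High upfront cost", "argument")

m = IbisMap()
assert m.link(pos, "responds-to", issue)
assert m.link(con, "challenges", pos)
assert not m.link(issue, "supports", pos)  # not a legal IBIS move
```

The point of constraining the link grammar is exactly the "simple set of moves" idea below: a small vocabulary of typed moves goes a long way.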
18. this simple set of moves
— combined with hypertext,
and mapping fluency —
goes a long way…
UK Research Excellence Framework (REF) 2014 Impact Case
19. Compendium software (open source)
visual hypermedia for managing the connections between ideas flexibly
Deep acknowledgements:
Jeff Conklin CogNexus Institute
Al Selvin & Maarten Sierhuis
NYNEX Science & Technology
—> Bell Atlantic —> Verizon
—> NASA
http://compendiuminstitute.net
20. Structure management in Compendium
§ Associative linking
nodes in a shared context connected by graphical Map links
§ Categorical membership
nodes in different contexts connected by common attributes via metadata Tags
§ Hypertextual Transclusion
reuse of the same node in different views
§ Templates
reuse of the same structure in different views
§ HTML, XML and RDF data exports for interoperability
§ Java and SQL interfaces to add services
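Transclusion — reusing the same node in different views — is the key structural idea in that list. A minimal sketch of why it matters (hypothetical class names, not Compendium's actual Java/SQL interfaces): views hold references to a single node object, so an edit made anywhere is visible everywhere the node appears.

```python
# A node exists once; views hold references to it, not copies, so an
# edit in one view is visible in every view that transcludes the node.
class Node:
    def __init__(self, label, tags=()):
        self.label = label
        self.tags = set(tags)  # categorical membership via metadata tags

class View:
    def __init__(self, name):
        self.name = name
        self.nodes = []  # references, not copies

    def add(self, node):
        self.nodes.append(node)

risk = Node("Y2K billing-system risk", tags={"Y2K", "billing"})

planning = View("Contingency planning")
briefing = View("Executive briefing")
planning.add(risk)
briefing.add(risk)  # transclusion: same node, second view

risk.label = "Y2K billing-system risk (mitigated)"
# Both views see the edit because they share one underlying object.
assert briefing.nodes[0].label == planning.nodes[0].label
```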
26. “Knowledge Artistry” (Al Selvin)
Selvin, A. & Buckingham Shum, S. (2015). Constructing Knowledge Art: An Experiential Perspective on Crafting Participatory Representations.
Morgan Claypool. http://doi.org/10.2200/S00593ED1V01Y201408HCI023
Hypermedia
Discourse
fluency at a
high level
27. Mapping with IBIS Issue-templates to
harvest the firm’s collective
intelligence on Y2K contingencies
Selvin, A.M. and Buckingham Shum, S.J. (2002). Rapid Knowledge Construction: A Case Study in Corporate Contingency Planning
Using Collaborative Hypermedia. Knowledge and Process Management, 9, (2), pp.119-128.
30. Generating Custom
Documents and Diagrams from
Compendium Templates
Selvin, A.M. and Buckingham Shum, S.J. (2002). Rapid Knowledge Construction: A Case Study in Corporate Contingency Planning
Using Collaborative Hypermedia. Knowledge and Process Management, 9, (2), pp.119-128.
31. Using Compendium for personnel
recovery operations planning
Conversational Modelling: real time dialogue mapping combined
with model driven templates (AI+IA)
DARPA Co-OPR Project (PI: Austin Tate, AIAI, U. Edinburgh)
http://www.aiai.ed.ac.uk/project/co-opr
35. Mapping with IBIS to build a NASA
science team’s collective intelligence
for planetary geological exploration
Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple,
Abigail; Buckingham Shum, Simon J.; Shadbolt, Nigel and Rupert, Shannon M. (2007). Automating CapCom Using Mobile Agents and
Robotic Assistants. In: 1st Space Exploration Conference: Continuing the Voyage ofDiscovery, 30 Jan-1 Feb 2005 , Orlando, FL, US.
Robotic Assistants. In: 1st Space Exploration Conference: Continuing the Voyage of Discovery, 30 Jan-1 Feb 2005, Orlando, FL, US.
http://eprints.aktors.org/375
37. NASA remote science team tools
[Diagram: scientists on ‘Mars’ and on Earth, connected via a software agent architecture on ‘Mars’]
Compendium used as a collaboration medium at all intersections:
humans+agents reading+writing IBIS maps
38. Geology dialogue map between Earth-based scientists and ‘Mars’
(Copyright 2004, RIACS/NASA Ames, Open University, Southampton University. Not to be used without permission.)
39. Compendium activity plans for surface exploration, constructed by
scientists, interpreted by software agents
40. Compendium science data map, generated by software agents, for
interpretation by Mars+Earth scientists
52. …and optionally make meaningful connections
[Map fragment: interpretation nodes linked by ‘predicts’, ‘causes’ and ‘is pre-requisite for’; one interpretation is a hunch with no grounding evidence yet]
54. Potentially moving towards stories that make sense of the
evidence… i.e. plausible narratives / arguments
[IBIS fragment: an Assumption motivates a Question; an Answer responds to it; Supporting and Challenging Arguments support/challenge the Answer]
55. Potentially moving towards stories that make sense of the
evidence… i.e. plausible narratives / arguments
[IBIS fragment: a Hunch motivates a Question; an Answer responds to it; Supporting and Challenging Arguments support/challenge the Answer]
59. Structured deliberation and debate in
which Questions, Evidence and
Connections are first class entities (linkable,
addressable, embeddable, contestable…)
60. Structured deliberation and debate in which
Questions, Evidence and Connections are
first class entities (linkable, addressable, embeddable, contestable…)
63. Structured deliberation and debate in which Questions,
Evidence and Connections are first class entities
(linkable, addressable, embeddable, contestable…)
64. Comparison of one’s own ideas to others
De Liddo, A., Buckingham Shum, S., Quinto, I., Bachler, M. and Cannavacciuolo, L.(2011). Discourse-Centric Learning Analytics. Proc. 1st Int. Conf. Learning Analytics &
Knowledge. Feb. 27-Mar 1, 2011, Banff
Does the learner compare his/her
own ideas to those of peers, and if so,
in what ways?
65. De Liddo, A., Buckingham Shum, S., Quinto, I., Bachler, M. and Cannavacciuolo, L. (2011). Discourse-Centric Learning Analytics. Proc. 1st Int. Conf.
Learning Analytics & Knowledge (Banff, Feb. 27-Mar. 1, 2011). ACM: New York. Eprint: http://oro.open.ac.uk/25829
What epistemic contributions are learners making in the community?
Rebecca is playing the role of broker, connecting different peers’ contributions in meaningful ways.
We now have the basis for recommending that you engage with people NOT like you…
66. Evidence
Many users can make reasonable
contributions to IBIS web apps,
without training
BUT…
69. Evidence Hub: structured storytelling for students,
practitioners and researchers
Systems Learning & Leadership Evidence Hub: http://sysll.evidence-hub.net
A wizard guides the user through
the submission of a structured
story:
• What’s the Issue?
• What claim are you
making/addressing?
• What kind of evidence
supports/challenges this?
• Link it to papers/data
• Index it against the core
themes
70. Evidence Hub:
Argument Maps
Systems Learning & Leadership Evidence Hub: http://sysll.evidence-hub.net
The wizard then generates a
structured IBIS tree showing
evidence-based claims (and
disagreements)
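The wizard-to-tree step described above can be sketched as a simple transform from the structured-story answers into an IBIS tree. This is a hypothetical illustration (field and function names are invented, not the Evidence Hub's actual schema):

```python
# Hypothetical sketch: assemble a submission wizard's answers into an
# IBIS tree (issue -> claim -> evidence), mirroring the Evidence Hub's
# structured-story steps. Names and fields are illustrative only.
def build_ibis_tree(story):
    return {
        "issue": story["issue"],
        "children": [{
            "claim": story["claim"],
            "children": [
                {"evidence": text, "polarity": polarity}
                for text, polarity in story["evidence"]
            ],
            "themes": story["themes"],  # index against core themes
        }],
    }

story = {
    "issue": "Does peer feedback improve writing?",
    "claim": "Structured peer review raises draft quality",
    "evidence": [("Classroom study, n=120", "supports"),
                 ("Null result in MOOC trial", "challenges")],
    "themes": ["assessment", "writing"],
}
tree = build_ibis_tree(story)
assert tree["children"][0]["children"][1]["polarity"] == "challenges"
```

Because the wizard captures polarity explicitly, the generated tree can surface disagreements as well as support.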
71. Evidence Hub: professional development
http://learningemergence.net/2013/07/17/deed-elli-ai-ci-systemic-school-learning
Issue
Potential
Solution
Supporting
Evidence
(practitioner
story)
75. Pain Points prioritised by orgs who run social innovation
platforms
Hard to visualise the debate
Poor summarisation
Poor commitment to action
Sustaining participation
Shallow contributions and
unsystematic coverage
Poor idea evaluation
Effective visualisation of concepts, new ideas and
deliberations is essential for shared understanding, but
suffers both from a lack of efficient tools to create them and
from a lack of ways to reuse them across platforms and
debates
“As a user, visualisation is my biggest problem. It is often
difficult to get into the discussion at the beginning. As a
manager of these platforms, showing people what is going
on is the biggest pain point.”
76. Pain Points prioritised by orgs who run social innovation
platforms
Participants struggle to get a good overview of what is
unfolding in an online community debate. Only the most
motivated participants will commit a lot of time to reading the
debate in order to identify the key members, the most relevant
discussions, etc.
The majority of participants tend to respond unsystematically
to stimulus messages, and do not digest earlier contributions
before making their own contribution to the debate, given the
cognitive overhead and their limited time.
77. Pain Points prioritised by orgs who run social innovation
platforms
Bringing motivated audiences to commit to action is
difficult. Enthusiasts, those who have an interest in a
subject but have yet to commit to taking action, are
left behind.
There is a need to prompt action in community members.
Reaching a consensus was considered less important
than being enabled to act.
78. Pain Points prioritised by orgs who run social innovation
platforms
Motivating participants with widely differing levels of
commitment, expertise and availability to contribute to an
online debate is challenging and often unproductive.
Sustaining participation is more important than enlarging
participation.
“It is better to have quality input from a small group than a
lot of members but very little content”.
79. Pain Points prioritised by orgs who run social innovation
platforms
Open innovation systems tend to generate a large number of
relatively shallow ideas.
Collaborative refinement of ideas is poor, preventing the
development of more deeply considered contributions.
No easy way to see which problem facets remain under-covered.
Very partial coverage of the solution space.
80. Pain Points prioritised by orgs who run social innovation
platforms
Patchy evaluation of ideas.
Poor-quality justification for ideas.
Hard to see why ratings have been given.
Unclear which rationales are evidence-based.
84. Problem-Goal-Exception (PGE) analysis using IBIS syntax
checking for potential weaknesses in reasoning
http://catalyst-fp7.eu/wp-content/uploads/2016/01/CATALYST_WP4_D4.2b.pdf
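One way such syntax checking could work, sketched as a toy (this is a hypothetical illustration of the idea, not the CATALYST project's implementation): walk an IBIS-style map and flag structural gaps that signal potential weaknesses in reasoning, such as a position with no supporting argument.

```python
def check_map(nodes, links):
    """Flag potential reasoning weaknesses in an IBIS-style map.

    nodes: {node_id: kind}, kind in {"issue", "position", "argument"}
    links: [(src_id, verb, dst_id), ...]
    """
    warnings = []
    incoming = {n: [] for n in nodes}
    for src, verb, dst in links:
        incoming[dst].append((verb, src))
    for n, kind in nodes.items():
        into = incoming[n]
        if kind == "issue" and not any(v == "responds-to" for v, _ in into):
            warnings.append(f"Issue '{n}' has no proposed solutions")
        if kind == "position":
            if not any(v == "supports" for v, _ in into):
                warnings.append(f"Position '{n}' has no supporting argument")
    return warnings

# A position that has been challenged but never supported gets flagged.
nodes = {"Q1": "issue", "P1": "position", "A1": "argument"}
links = [("P1", "responds-to", "Q1"), ("A1", "challenges", "P1")]
assert check_map(nodes, links) == ["Position 'P1' has no supporting argument"]
```

The same pattern extends to other checks (e.g. unanswered challenges, circular support), which is the spirit of Problem-Goal-Exception analysis: use the typed structure to surface gaps a reader would otherwise have to hunt for.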
87. “Semantic Google Scholar” — ClaimFinder
Victoria Uren, Simon Buckingham Shum, Michelle Bachler, Gary Li, (2006) Sensemaking Tools for Understanding Research Literatures: Design, Implementation and
User Evaluation. International Journal of Human Computer Studies, Vol.64, 5, (420-445).
91. L. Iandoli, I. Quinto, S. Buckingham Shum, A. De Liddo (2015), On Online Collaboration and Construction of Shared Knowledge: Assessing Mediation Capability in Computer
Supported Argument Visualization Tools, Journal of the Association for Information Science and Technology, 75 (5), pp.1052-1067
Async online IBIS Mapping + Social Cues is
better than IBIS alone in some respects
95. But the group using a Ning discussion forum
still outperforms Social-IBIS and Plain-IBIS
[Charts: Mutual Understanding; Perceived Effectiveness of Communication]
L. Iandoli, I. Quinto, S. Buckingham Shum, A. De Liddo (2015), On Online Collaboration and Construction of Shared Knowledge: Assessing Mediation Capability in Computer
Supported Argument Visualization Tools, Journal of the Association for Information Science and Technology, 75 (5), pp.1052-1067
[Legend: Debate Dashboard (socially augmented Cohere mapping); Cohere; Ning discussion forum]
96. But the group using a Ning discussion forum
still outperforms Social-IBIS and Plain-IBIS
[Charts: Perceived Ease of Use; Accuracy of Prediction (commodity prices)]
L. Iandoli, I. Quinto, S. Buckingham Shum, A. De Liddo (2015), On Online Collaboration and Construction of Shared Knowledge: Assessing Mediation Capability in Computer
Supported Argument Visualization Tools, Journal of the Association for Information Science and Technology, 75 (5), pp.1052-1067
97. Writing is endlessly expressive
and hard to improve on as a
medium for collective
reflection/argumentation
(also a social process)
98. Dilemma:
But we would still like the
machine to do some work for us
in making sense of the state of
the CI process or product
99. Solution
NLP could move us beyond simple
forum metrics, and help make sense of
the quality of contribution
100. Academic Writing Analytics: feedback on
analytical/argumentative or reflective writing
Info https://utscic.edu.au/tools/awa
101. CIC’s automated feedback tool: analytical writing
Highlighted sentences are colour-coded according to their broad type.
Sentences with Function Keys have more precise functions (e.g. Novelty).
102. CIC’s automated feedback tool: reflective writing
An early paragraph which is simply setting the scene:
103. CIC’s automated feedback tool: reflective writing
A concluding paragraph moving into professional reflection:
104.
CIC’s Text Analytics Pipeline (TAP)
A set of linguistic analysis modules + AWA UI —> OSS release
Preparation of texts:
text cleaning –> de-identification –> indexing –> metadata management
Analysis of texts:
• Metrics: lengths of words, sentences, paragraphs, and statistics of these
• Syllables: metrics at the word level based on syllables
• Named Entities: e.g. names of People, Places
• Statistics: e.g. noun-verb ratio
• Vocabulary: compound words, occurrences at sentence, paragraph and document level
• Expressions: epistemic, self-critique and affective compound words
• Spelling: feedback on spelling and basic grammar
• Rhetorical moves: in analytical and reflective writing
• Complexity: measures of the complexity of words, sentences and paragraphs
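To make the module list above concrete, here is a toy version of the simplest stage, the Metrics module — an illustrative sketch in Python, not TAP's actual code or API:

```python
import re
import statistics

def text_metrics(text):
    """Toy 'Metrics' stage: word/sentence lengths and their statistics."""
    # Split into sentences on terminal punctuation, dropping empties.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    # Tokenise words (letters and apostrophes only, for simplicity).
    words = re.findall(r"[A-Za-z']+", text)
    word_lens = [len(w) for w in words]
    sent_lens = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "n_sentences": len(sentences),
        "n_words": len(words),
        "mean_word_len": statistics.mean(word_lens),
        "mean_sentence_len": statistics.mean(sent_lens),
    }

m = text_metrics("Writing is expressive. It is hard to improve on.")
assert m["n_sentences"] == 2
assert m["n_words"] == 9
```

The richer modules (rhetorical moves, epistemic expressions, named entities) would sit downstream of the same prepared, de-identified text.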
105. Disputational talk
characterised by disagreement and individualised decision making. Few attempts
to pool resources, to offer constructive criticism or make suggestions.
Disputational talk also has some characteristic discourse features - short
exchanges consisting of assertions and challenges or counter assertions ('Yes, it
is.' 'No it's not!').
Cumulative talk
in which speakers build positively but uncritically on what the others have said.
Partners use talk to construct a 'common knowledge' by accumulation.
Cumulative discourse is characterised by repetitions, confirmations and
elaborations.
Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137-168.
Disputational/Cumulative/Exploratory talk
106. Exploratory talk
• Partners engage critically but constructively with each other's ideas.
• Statements and suggestions are offered for joint consideration.
• These may be challenged and counter-challenged, but challenges are justified
and alternative hypotheses are offered.
• Partners all actively participate and opinions are sought and considered
before decisions are jointly made.
• Compared with the other two types, in Exploratory talk knowledge is made
more publicly accountable and reasoning is more visible in the talk.
Disputational/Cumulative/Exploratory talk
Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137-168.
109. Discourse analytics on webinar textchat
[Chart: textchat over time (9:28 to 12:03), messages classified as “exploratory talk” (more substantive for learning) vs. “non-exploratory”]
…language is used in a manner more akin to “Exploratory Talk” (Neil Mercer)
Ferguson, R., Wei, Z., He, Y. and Buckingham Shum, S., An Evaluation of Learning Analytics to Identify Exploratory Dialogue in Online Discussions. In:
Proc. 3rd International Conference on Learning Analytics & Knowledge (Leuven, BE, 8-12 April, 2013). ACM. http://oro.open.ac.uk/36664