3. Model of Collective Intelligence (CI):
from sensing the environment, to interpreting it, to generating good
options, to taking decisions and coordinating action...
[Diagram: Collective Sensing → Collective Sensemaking → Collective Ideation → Collective Decision → Collective Action]
5. When tackling complex and contested problems:
• there may not be one worldview, or one clear option;
• evidence can be ambiguous or of dubious reliability, requiring the construction of plausible, possibly competing narratives;
• growth in intelligence results from learning, which is socially constructed through different forms of discourse, such as dialogue and debate.
Contested Collective Intelligence
(De Liddo 2012)
6. In the design space of CI systems:
• where there is insufficient data to confidently compute an answer,
• when there is ambiguity about the trustworthiness of environmental signals,
• when there is uncertainty about the impact of actions,
• and when there is no one view that fits all,
…then a more powerful scaffolding for thinking and discourse is required, in order to support the emergence of CI around complex socio-political dilemmas.
Contested Collective Intelligence
(De Liddo 2012)
7. [Diagram: Social, Visual and Argumentation-based CI: a new class of Online Deliberation tools at the intersection of Collective Intelligence, Online Deliberation, Human Dynamics of Engagement, Analytics & Visualization, Crowdsourcing of ideas, arguments and facts, Structured Discourse and Argumentation, Democratic entitlements, Citizen Voice, Social Innovation, and Computational Services & Dialogic Agents]
8.
9. • Poor debate: no tools to identify where ideas contrast, where people disagree and why... popularity vs critical thinking
10. A flat listing of posts offers no insight into the logical structure of ideas and arguments, such as the coherence or evidential basis of an argument.
11. These tools are increasingly used to support online debate and facilitate citizens’ engagement in policy and decision-making. They are fundamentally chronological views which offer:
• No support for idea refinement and improvement
LINK to PETITION: http://www.change.org/en-GB/petitions/stand-against-russia-s-brutal-crackdown-on-gay-rights-urge-winter-olympics-2014-sponsors-to-condemn-anti-gay-laws
12. LINK to QUORA: http://www.quora.com/Physics/Do-wormholes-always-have-black-holes-at-the-beginning#answers
13. • Poor debate: no tools to identify where ideas contrast, where people disagree and why
• Poor idea evaluation: no mechanisms to identify, contribute and discuss the evidence for an idea
• Poor summarization and visualization
• Shallow contributions and cognitive clutter
• Platform islands & balkanization
This hampers:
• the quality of users’ participation,
• the quality of proposed ideas, and
• effective assessment of the state of the debate.
Tools that make the structure and status of a dialogue or debate visible. Coming from research on Argumentation and Computer-Supported Argument Visualization (CSAV), these tools make users’ lines of reasoning and (dis)agreements visually explicit:
• Deliberatorium
• Debategraph
• Cohere
• CoPe_it!
• Problem&Proposals
• YourView
• The Evidence Hub
21. • referred to as “deliberative aggregators” (van Gelder 2012b)
• produce a collective viewpoint or judgment on complex societal issues by crowdsourcing discourse
• their inherent purpose is to support large communities in tackling complex issues of public concern
33. • LiteMap was used in a real user community to map the main issues, ideas and arguments raised in a conversation hosted on the Utopia platform.
• Utopia is a large German online community on sustainable consumption and lifestyles, which uses a content management system (built on Synphony) with a space for online discussion to host debates within the virtual community.
• Three community managers created summary argument maps to visualise different topics of sustainable living, based on discussions and articles on the Utopia platform.
• The resulting argument maps were then embedded in the Utopia website to display them to the wider online community.
34. • The study lasted 6 weeks.
• 3 argument maps were created with LiteMap and then embedded into the Utopia website.
• Three mappers collaborated on the creation of each argument map.
• The mappers received no training, either with the tool or with the argumentation mapping process, and they were not familiar with the IBIS argumentation model.
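For readers unfamiliar with it, the IBIS (Issue-Based Information System) model mentioned above structures a discussion as Issues (questions), Ideas (candidate answers) and Arguments that support or challenge them. The sketch below is a minimal, illustrative Python rendering of that structure; the class and field names are our own assumptions, not LiteMap's actual schema:

```python
from dataclasses import dataclass, field

# IBIS node types: an "issue" poses a question, an "idea" proposes
# an answer, and "pro"/"con" arguments support or challenge an idea.
# This is an illustrative sketch, not LiteMap's internal data model.
@dataclass
class Node:
    kind: str                 # "issue" | "idea" | "pro" | "con"
    text: str
    children: list = field(default_factory=list)

    def add(self, child):
        """Attach a child node (e.g. an idea under an issue) and return it."""
        self.children.append(child)
        return child

    def outline(self, depth=0):
        """Render the map as an indented outline, one line per node."""
        lines = ["  " * depth + f"[{self.kind}] {self.text}"]
        for child in self.children:
            lines.extend(child.outline(depth + 1))
        return lines

# Hypothetical example drawn from the sustainable-food debate theme:
issue = Node("issue", "Is organic food worth the higher price?")
idea = issue.add(Node("idea", "Yes, buy organic where it matters most"))
idea.add(Node("pro", "Lower pesticide residues"))
idea.add(Node("con", "Price premium excludes low-income shoppers"))

print("\n".join(issue.outline()))
```

The hierarchy this enforces (issue → idea → argument) is exactly what the mappers had to impose by hand on Utopia's flat comment threads.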
35. • A newsletter was used to announce each new map to the online community.
• Over the testing period, an advert was placed in the sidebar of the Utopia homepage pointing to the testing website, to maximize visibility and promote participation in the online discussion.
• The webpage in which the maps were embedded contained an explanation of how to use the map, information on the user study, and a link to a survey for users to provide feedback on the usability and usefulness of the argument map in improving the sensemaking and summarization of the online debate.
36. • Over the testing period, over 800 people navigated to the argument maps.
• Most traffic was generated after the newsletters were sent: after each one, around 130 visitors came to see the argument maps on the same day.
• 57 users completed the survey; twice as many women as men participated.
• While the quantitative analysis of the survey results is still in progress, below we report initial insights from the qualitative analysis of three interviews with the argument mappers.
37. • Understanding the argument mapping tasks, specifically the challenges of collaboratively building a map in the same virtual space (example questions: How did you build an understanding of the task at hand? How would you summarize the main process you went through?)
• Understanding the use of the tool (LiteMap) to carry out the summarization and argument structuring task (example questions: How did you use the tool? How did you coordinate your activities? How did you reach agreement on the node labelling, connections and map layout?)
• Reflecting on how the tool could be improved by implementing new features that address some of the limitations encountered.
38. • Users felt that a bottom-up approach to creating an argument map (harvesting the arguments first, then moving to ideas, and finally defining the issues) was not an effective process.
“For the first topic we came from the lower end of all these clips and we
basically had 30 clips in front of us and we had to cluster them.
I.e. we had argument on the quality of organic food, arguments on the
different work pays and condition in organic food shops, then arguments
on strategies to attract people to buy organic etc...
…many arguments each representing a different entry point to the
conversation and all interlinked so it was hard for us to cluster them,
make it into a logic and then fit them into the argument structure. If we
just had the top down question to answer, then of course we could have
just stick to adding the argument here or there and this would have been
easy. But we had to find a good formulation to capture all the ideas that
we wanted to put in there.”
39. “It was quite hard for us to agree what each clip could
mean, it is because the discussion in the Utopia website is
completely unstructured…you have a topic, and idea and all
the argument packed as list of comments and there is no
hierarchy between them. We had to create as mapper this
hierarchy and then see how the higher elements related
with the lowers and vice-versa.
That was not very easy in practice, especially because is a
very subjective task to do. Each of us would have done it in
a different way and also the comment from some of the
viewer was that they would have done it differently than us.
That made it hard for any topic to map, and even harder for
the ill-structured topics that were less defined.”
40. • The nature of the problem to be mapped affected the complexity of the argument mapping process, such that a collective mapping procedure that works for one issue can fail for another.
• Background knowledge and the nature of the topic are key components affecting the effectiveness of collaborative mapping, as are training with the tool and knowledge of the argumentation model.
41. • If the issues are open-ended and ill-defined, collaborative mapping can be trickier, and even the definition of the issues as such requires a long negotiation effort between mappers.
• In these cases, a bottom-up approach to summarization seems even more counterproductive, because it forces users to label and connect clips before the group has clarified the higher-level structure of the map (the meta-issues to map).
42. • advancement of personal knowledge of the topic,
• acknowledgement of the power of structuring ideas for personal reflection,
• and for future use and topic exploration.
“It was absolutely useful, especially for topics as sustainable
food in which we had no common knowledge on the many
aspects of the topic and in which there are no well established
metrics and definitions to assess it, it is very good to see how you
can cluster the issues around this topic. Also the aspects of
organic food are now laid down and I can read them and then dig
only on the arguments of the issues and ideas I am interested
in.”
“I think it is very useful to have the idea visualised and structured
and see how they come together.”
43. “The thing I am not sure about is how much this mapping task fits the Utopia online community, because the discussions on Utopia are quite shallow, just one or two opinions and no depth even to the level of a newspaper article. So doing argument maps based on people’s contributions is not only very subjective but also not very useful. I would find it more interesting if the arguments mapped were based on a more scientific level. In terms of the output, from our point of view (the community managers), if we had more validated information behind the map, the map would have more value as well.”
• Online debates, even debates about complex socio-technical issues, often consist of very shallow conversations.
“An important added value that can be played by the community
manager is the mapping of more validated information to support
participants’ learning and reflection on new aspects of a given debate.
Argument maps seems to be better suited to bring new evidence into the
conversation rather than for mapping and summarising the existing
online debate.”
44. • Improving negotiation mechanisms
• Enabling different deliberation processes for different issue types (open-ended vs more focused problems)
• Building argument maps to bring evidence into the debate and improve deliberation quality, rather than for summarization
46. • De Liddo, A. and Buckingham Shum, S. (2014). New Ways of Deliberating Online: An Empirical Comparison of Network and Threaded Interfaces for Online Discussion. In E. Tambouris, A. Macintosh and F. Bannister (Eds.), Lecture Notes in Computer Science (Vol. 8654, pp. 90–101). Springer.
• De Liddo, A. (2014). Enhancing Discussion Forums with Combined Argument and Social Network Analytics. In Okada, A., Buckingham Shum, S. and Sherborne, T. (Eds.), Knowledge Cartography (2nd ed.). Springer. In press.
• De Liddo, A. and Buckingham Shum, S. (2013). The Evidence Hub: Harnessing the Collective Intelligence of Communities to Build Evidence-Based Knowledge. Workshop: Large Scale Ideation and Deliberation at the 6th International Conference on Communities and Technologies, Munich, Germany.
• De Liddo, A., Sándor, Á. and Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work (CSCW) Journal, 21(4), 417–448.
• Buckingham Shum, S. (2008). Cohere: Towards Web 2.0 Argumentation. In Proc. COMMA'08: 2nd International Conference on Computational Models of Argument, 28–30 May 2008, Toulouse, France. Available at: http://oro.open.ac.uk/10421/
• De Liddo, A. and Buckingham Shum, S. (2010). Cohere: A Prototype for Contested Collective Intelligence. In ACM Computer Supported Cooperative Work (CSCW 2010), Workshop: Collective Intelligence in Organizations: Toward a Research Agenda, 6–10 February 2010, Savannah, Georgia, USA. Available at: http://oro.open.ac.uk/19554/
• Buckingham Shum, S. and De Liddo, A. (2010). Collective Intelligence for OER Sustainability. In OpenED2010: Seventh Annual Open Education Conference, 2–4 Nov 2010, Barcelona, Spain. Available at: http://oro.open.ac.uk/23352/
• Buckingham Shum, S. (2007). Hypermedia Discourse: Contesting Networks of Ideas and Arguments. In Priss, U., Polovina, S. and Hill, R. (Eds.), Conceptual Structures: Knowledge Architectures for Smart Applications. Berlin: Springer, pp. 29–44.