This document provides an introduction to human computation and games with a purpose. It discusses the rise of crowdsourcing and how human computation utilizes human effort to perform tasks that computers cannot yet perform. Examples of difficult computational problems that humans can assist with are provided, such as medical diagnosis, object recognition, and translation. The document also outlines the history of human computation and how humans were originally used as computers before the advent of electronic computers. It compares the advantages and disadvantages of electronic computers versus human computers.
2. ABOUT THE TUTORIAL
• Crowdsourcing, Human Computation, and GWAPs are hot topics
• “Human Computation” => more than 3000 papers
• 400 in 2013
• “Crowd Sourcing” => more than 36000 papers
• 4800 in 2013
• “Games With A Purpose” => more than 1400 papers
• 162 in 2013
• This short tutorial is necessarily shallow, but
• Concrete Examples
• Lots of references and links
• An outlook on the future
• Slides and additional materials available
• http://hcgwap.blogspot.com
ICWE 2013 - An Introduction To Human Computation and Games With a Purpose
3. ABOUT THE SPEAKERS
ALESSANDRO BOZZON
Assistant Professor - TU Delft
http://www.alessandrobozzon.com
a.bozzon@tudelft.nl
LUCA GALLI
Ph.D. Student - Politecnico di Milano
http://www.lucagalli.me
lgalli@elet.polimi.it
• RESEARCH BACKGROUND AND INTERESTS
• Web Data Management
• Crowdsourcing and Human Computation
• Game Design
• Web Engineering and Model Driven Development
4. AGENDA
5. AGENDA
• PART 1 => CrowdSourcing and Human Computation
• Introduction
• Design of Human Computation Tasks
• Frameworks And Applications
• The Future of Human Computation
• PART 2 => Games With a Purpose
8. THE RISE OF CROWDSOURCING
• The “….sourcing” trend, from a business perspective [B_Tibbets2011]
• Outsourcing: outsource the data center or outsource application development
• Same or better quality, less effort, less money
• Offshoring: outsourcing to developing countries (e.g. India, China)
• Offshore outsourcing: the same quality software at a huge discount
• CrowdSourcing: everyday people use their spare cycles to create content, solve problems, etc.
[Timeline: Outsourcing (1990’s) → Offshore outsourcing (2000’s) → CrowdSourcing / Human Computation (2010’s), with increasing savings] — Jeff Howe [B_Wired2006]
9. THE AGE OF THE CROWD
• Distributed computing projects: UC Berkeley’s
SETI@home?
• Tapping into the unused processing power of millions of
individual computers
• “Distributed labor networks”
• Using the Internet (and Web 2.0) to exploit the spare
processing power of millions of human brains
• Successful examples?
• Open source software: a network of passionate, geeky
volunteers could write code just as well as highly paid
developers at Microsoft or Sun Microsystems
• often better
• Wikipedia: creating a sprawling and surprisingly
comprehensive online encyclopedia
• Quora, StackExchange: can’t exist without the
contributions of users
10. THE AGE OF THE CROWD
• The productive potential of millions of plugged-in enthusiasts is attracting the attention of old-line businesses too
• Cheap labor => overseas vs. connected work forces
• Technological advances (from product design software to digital video cameras) are breaking down the cost barriers that once separated amateurs from professionals
• Smart companies in many industries tap the latent talent of the crowd
“The labor isn’t always free, but it costs a lot less than paying
traditional employees. It’s not outsourcing: it’s crowdsourcing”
11. DEFINITION OF HUMAN COMPUTATION
• “…the idea of using human effort to perform
tasks that computers cannot yet
perform, usually in an enjoyable manner.”
[Law2009]
• “…a new research area that studies the
process of channeling the vast internet
population to perform tasks or provide data
towards solving difficult problems that no
known efficient computer algorithms can yet
solve” [Chandrasekar2010]
• “…systems of computers and large numbers of humans that work together in order to solve problems that could not be solved by either computers or humans alone” [Quinn2009]
SUGGESTED VIEWING:
http://www.youtube.com/watch?v=tx082gDwGcM
http://www.youtube.com/watch?v=Aszl5avDtek
12. CAPTCHA
http://xkcd.com/233/
“Completely Automated Public Turing test to tell Computers and Humans Apart”
Luis von Ahn et al. 2000
13. THE HUMAN CO-PROCESSING UNITS (HPU) [DAVIS2010]
• Humans are a first-class computational platform
Abstract (excerpt from [DAVIS2010])
Computer-mediated human micro-labor markets have so far been treated as novelty services, good for cheaply labeling training data and easy user studies. This paper’s primary contribution is conceptual: the claim that these markets can be characterized as Human co-Processing Units (HPU), and represent a first-class computational platform. In the same way that Graphics Processing Units (GPU) represent a change in architecture from CPU-based computation, HPU-based computation is different, and deserves careful characterization and study.
We demonstrate the value of this claim by showing that simplistic HPU computation can be more accurate than complex CPU-based algorithms on some important computer vision tasks. We also argue that HPU computation can be cheaper than state-of-the-art CPU-based computation. Finally, we give some examples of characterizing the HPU as an architectural platform.
1. Introduction
This paper explores the idea that humans can be used as a processor for certain tasks, in the same way that CPUs and GPUs are now used. Rather than thinking of humans as the primary director of computation, with computers as their subordinate tools, we explicitly advocate treating these computational platforms equally and characterizing the performance of Human Processing Units (HPUs). [...] Nearly all current use of micro-outsourcing is similar to the traditional way we might use an employed assistant in our office, to outsource from human to human. This proposal explicitly suggests we should quantify performance, treat this as a new computational platform, and build real systems which make use of HPU co-processors for certain tasks which are too computationally expensive, or insufficiently robust, when computed on CPUs.
A survey of other papers using micro-labor for computer vision reveals two dominant frameworks [...]
Figure 1: We usually think of machines as computational tools to help humans perform better. This paper argues that humans are also computational tools to help machines perform better.
14. A GROWING, MULTIDISCIPLINARY FIELD…
15. … WITH A BIG MARKET…
• Estimated future volume:
• $454 billion per year
• 91 billion hours per year
• 45 million full-time workers
16. … WITH COMPLEX RELATIONS
BETWEEN DISCIPLINES [QUINN2011]
• Crowdsourcing
• Social computing
• Collective intelligence
• Data mining
• A lot of value here!
17. COLLECTIVE INTELLIGENCE
Large groups of loosely organized people can accomplish great things working together.
• Traditional study focused on the “decision-making capabilities of a large group of people”
• Taxonomical “genome” of collective intelligence
• “…groups of individuals doing things collectively that seem intelligent” [Malone2009]
• Collective intelligence generally encompasses human computation and social computing
18. CROWDSOURCING AND
HUMAN COMPUTATION
• “Crowdsourcing is the act of taking a job traditionally performed by
a designated agent (usually an employee) and outsourcing it to
an undefined, generally large group of people in the form of an
open call.” (Jeff Howe)
• Human computation replaces computers with humans
• Crowdsourcing replaces traditional human workers with
members of the public
• Crowdsourcing facilitates human
computation (but they are not equivalent)
• Citizen journalism, sensing, ...
19. SOCIAL COMPUTING
• Social computing is a general term for the area at the intersection of social behavior and computational systems.
• In the broad sense of the term, social computing has to do
with supporting any sort of social behavior through
computational systems.
• any social software => blogs, email, instant messaging,
social network services, wikis, …
• In the narrow sense of the term, social computing has to
do with supporting computations that are carried out
by groups of people
• collaborative filtering, online auctions, prediction markets,
reputation systems, tagging, and verification games
20. DISTINGUISHING FEATURES OF
HUMAN COMPUTATION
• Conscious Effort
• Humans are actively computing something, not merely carriers of sensors and computational devices.
• Explicit Control
• The outcome of the computation is determined by an algorithm, not by the natural dynamics of the crowd (although sometimes this constraint can be relaxed)
• e.g. human computation on social networks, GWAPs
21. HISTORY OF HUMAN
COMPUTATION
The term “computer” used to refer to
humans who did computation
[Grier2005]
Timeline: 1700’s (Alexis Claude de Clairaut), 1800’s (Charles Babbage), 1900’s (World Wars), 1940’s (ENIAC)
Division of labor -- Mass production -- Professional managers
22. HISTORY OF HUMAN COMPUTATION
Alan Turing wrote in 1950:
“The idea behind digital computers may be
explained by saying that these machines are
intended to carry out any operations which could be
done by a human computer”
[Turing1950]
23. ELECTRONIC VS. HUMAN COMPUTERS
Electronic
• Fast
• Deterministic
• Arithmetic
Human
• Slow
• Inconsistent & Noisy
• But… still better at
some things
24. EMULATING HUMAN COMPUTERS
• Computer scientists (in the artificial intelligence field) have been trying to emulate human abilities
• Language
• Visual processing
• Reasoning
• …
• Can you think of other hard-to-imitate human abilities?
• Now we need humans again for the “AI-complete” tasks
[...] of accomplishing? Or are we really targeting the $10/day global middle class? Characterizing the HPU is addressing exactly this question.
Unthinkable new opportunities: There are many products and services which could exist now, but don’t, because some critical computational component is not yet robustly and efficiently computable. If HPUs are shown to provide a solution for some of these components, new companies will arise, offering products we would currently believe to be impossible (or at least too costly to implement robustly).
Completely new applications are possible which would be impossible to consider using current CPU-based algorithms, because they simply could not possibly be made sufficiently robust. Consider a diet-aid application running on a smartphone. Every time you eat something, you take a picture of it, and the application computes the calories and other nutritional information, keeping statistics for the user. Would current CPU-based object recognition applications be able to tell which food I’m eating? Doubtful. Would HPU-based algorithms do better? Probably not perfect, but perhaps well enough that we can at least imagine the service.
4. Experiments – HPU vs CPU
The experiments in this section are meant to establish that it is meaningful to directly compare CPU and HPU performance on standard computer vision tasks, and that in some instances HPU algorithms will outperform CPU. Comparisons of HPU/CPU accuracy include bar code reading, color labeling, text summarization, and gender classification. In all cases we find that simple HPU algorithms are competitive with published CPU-based algorithms. All HPU experiments in this paper were performed using Amazon’s Mechanical Turk.
4.1. Accuracy HPU vs CPU: Barcodes
Since their first commercial use in 1966, barcodes have become the de facto standard for processing and handling goods. Their design was optimized to maximize the accuracy of reading with laser scanners. [...]

return_value = HPU(image, “For the image below, please type in the numbers you see below the barcode”);
HPU computation is not deterministic. Thus, in contrast to CPU computation, it is frequently easy to improve performance simply by making multiple calls to the same function and aggregating the results. In this example, we iterate 6 times over the call to the HPU. This aggregation can be done on the CPU, leading to HPU/CPU hybrid algorithms. We implement a simple aggregator that rejects answers with the wrong number of digits or which fail the barcode checksum, and then uses voting to determine which of several answers is correct.
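A minimal sketch of such an aggregator, assuming UPC-A (12-digit) barcodes; the `upc_valid` and `aggregate` helper names are ours, not from the paper:

```python
from collections import Counter

def upc_valid(code):
    # UPC-A check: 12 digits whose weighted sum is divisible by 10
    # (3x the digits in odd positions plus the digits in even positions).
    if len(code) != 12 or not code.isdigit():
        return False
    d = [int(c) for c in code]
    return (3 * sum(d[0::2]) + sum(d[1::2])) % 10 == 0

def aggregate(answers):
    # Reject answers that fail the length/checksum test, then majority-vote.
    valid = [a for a in answers if upc_valid(a)]
    return Counter(valid).most_common(1)[0][0] if valid else None
```

When every answer fails the checksum, the aggregator returns `None`, at which point a real system could simply post the image again.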
Figure 4 gives a comparison of the percentage of barcodes detected accurately by each method. Note that the HPU method is comparable to the best CPU method we tested, and that the HPU/CPU joint method outperforms either alone.

Figure 3: Examples of barcode images found in our Easy and Hard datasets. Note that the Hard image has significant blurring effects.

Barcode Recognition Accuracy: HPU and CPU methods
Method           Easy (%)   Hard (%)
HPU/CPU          100        83
HPU              92         60
CPU [Gallo09]    98         54
CPU [Tekin09]    95         6
CPU DataSymbol   0          0
CPU DTK          98         3
CPU OCR          59         0

Figure 4: A comparison of a variety of CPU and HPU based methods for determining barcode values on both Easy and Hard datasets. Note that the joint HPU/CPU method outperforms either HPU or CPU based computation alone. (Many comparison numbers from [Gallo09])
25. EXAMPLE OF “DIFFICULT” COMPUTATIONAL
PROBLEMS
Problem             Input               Output
Sorting             set of objects      sorted set of objects
Medical Diagnosis   x-ray, lab tests    diagnosis
Object recognition  image               tag
Translation         source sentence     translated sentence
Editing             text                corrected text
Planning            goal, constraints   sequence of actions
26. THE HUMAN ADVANTAGE
• Perception/comprehension
• Reconstructing information that wasn’t captured at capture time (as in a photo or surface scan)
• Constructing/inferring information that was never recorded, using knowledge humans naturally possess
• Sketch
• Recognizing emotions
• Labeling images
• Preference/aesthetic judgments
• Evaluate goodness (“beauty”) for sorting or optimization
• Sims, Electric Sheep, Interactive Genetic Algorithms / Human-Based Genetic Algorithms, [Little 2009]/[Bernstein 2011]
27. THE HUMAN ADVANTAGE
• Creativity
• Search: finding images that go well together
• Art projects like The Sheep Market [Koblin 2006]
• [Little 2009/10] for expanding text/jokes/shirt designs
• [Yu and Nickerson 2011] for sketching chair designs (“Cooks or Cobblers”)
• [Bernstein 2011] for posing humans
• [Kittur 2011] for Wikipedia
28. MODERN HUMAN COMPUTATION
The Open Mind Initiative (1999)
• “…a web-based collaborative framework for collecting large knowledge bases from non-expert contributors.”
• “an attempt to ... harness some of the distributed human computing power of the Internet, an idea which was then only in its early stages.”
• It accumulated more than a million English facts from 15,000 contributors
29. MODERN HUMAN COMPUTATION
Luis von Ahn’s PhD Thesis
• [VonAhn2005] “A paradigm for utilizing human
processing power to solve problems that
computers cannot yet solve”
• “We treat human brains as processors in a
distributed system, each performing a small part
of a massive computation.”
• “We argue that humans provide a viable, under-
tapped resource that can aid in the solution of
several important problems in practice.”
30. AREAS OF APPLICATIONS
• Data management
• Data analytics
• Training
• Collaboration and knowledge sharing
• Customer loyalty programs
• Ad network optimization
• Virtual goods and currencies.
31. CROWD-ENHANCED DATA
MANAGEMENT
Relational
• Information Extraction
• Schema Matching
• Entity Resolution
• Data spaces
• Building structured KBs
• Sorting
• Top-k
• …
Beyond Relational
• Graph Search
• Mining and Classification
• Social Media Analysis
• NLP
• Text Summarization
• Sentiment Analysis
• Search
• …
32. AMAZON MTURK
• “Artificial Artificial Intelligence”
• Provides a UI and a Web Services API that allow developers to easily integrate human intelligence directly into their processing
www.mturk.com
33. TYPES OF TASKS IN MTURK
34. CROWDFLOWER
• Labor on demand
• Fewer problems with worker engagement (see later)
• 27 channels
• Quality control features
35. SOME OTHER HUMAN
COMPUTATION PLATFORMS
• CloudCrowd
• DoMyStuff
• Livework
• Clickworker
• SmartSheet
• uTest
• Elance
• oDesk
• vWorker (was rent-a-coder)
36. BUT ALSO SEVERAL OTHERS…
• Social Networks
• Q&A Systems
• Ad-hoc crowds
• …
37. ETHICS AND OPPORTUNITIES
• Developer outsourced his job to China to surf Reddit [B_NextWeb2013]
• A new “Taylorization” era? (more later)
• Duke professor uses crowdsourcing to grade [B_Chronicle2009]
• We can make some cool science!
http://www.robcottingham.ca/cartoon/archive/2007-08-07-crowdsourced/
39. COMPUTATION
The process of mapping an input to an output
• Algorithm: a finite set of rules which gives a sequence of operations for solving a specific type of problem, with five important properties:
• Input: quantities that are given to it initially before the algorithm begins, or dynamically as the algorithm runs
• Output: quantities that have a specified relation to the inputs
• Finiteness: an algorithm must always terminate after a finite number of steps
• Definiteness: each step of an algorithm must be precisely defined
• Effectiveness: its operations must all be sufficiently basic that they can in principle be done exactly, and in a finite length of time, by someone using pencil and paper
[Diagram: INPUT → OUTPUT]
40. TASK
A crowdsourced data creation/manipulation/analysis activity, typically focused on a single action (although several concurrent actions are allowed), performed on a coherent set of objects.
Also known as HIT (Human Intelligence Task)
41. EXAMPLES OF TASKS
• Recognize and identify the people contained in a set of images
• Input objects: images
• Output objects: images + bounding boxes + names
• Annotate the named entities contained in a book
• Input objects: text organized in pages
• Output objects: set of named entities
• Crop the silhouette of the models in a set of images
• Input objects: images
• Output objects: images + polylines
• Create a complete list of the restaurants near PoliMI
• Input objects: none
• Output objects: set of restaurant names
• Evaluate the courses offered at TU Delft
• Input objects: set of course names
• Output objects: course names + vote
42. PERFORMER
A human being involved in the execution of a Task
• A.k.a. workers, turkers, etc.
• The workforce
• Examples
• Amazon Mechanical Turk Workers
• Students of the IR course
• ICWE Attendees
• My Facebook friends
• Javascript Experts on Stack Overflow
43. MICROTASK
An instance of a Task, operating on a subset of its input objects,
and assigned to one or more performers for execution
• The simplest unit of execution
• Typically rewarded
• Examples
• Locate and identify the faces of the people appearing in the
following 5 images
• Order the following papers according to your preference
• Find me the email address of the following companies
44. HUMAN COMPUTATION DESIGN
• How hard is the problem? Is it efficiently solvable?
• What is the trade-off between human and machine?
• Is the human computation algorithm correct and efficient?
• How do we aggregate the outputs of many human computers?
• To whom do we route each task, and how?
• How do we motivate participation and incentivize truthful outputs?
What / Who / How
GOAL => Given a computational problem, design a solution using human computers and automated computers
45. TRADE-OFF
• There is always a trade-off between how much work the human does and how much work the computer does.
46. EXAMPLE: SORT
Human computation algorithm (the human-driven operation is marked):

function quicksort(A)
  if length(A) ≤ 1
    return A
  initialize empty lists L and G
  pivot = A.remove(find_pivot(A))
  for x in A
    if compare(x, pivot)
      L.add(x)
    else
      G.add(x)
  return concatenate(quicksort(L), pivot, quicksort(G))

function find_pivot(A)
  return randomIndex(A)

function compare(x, pivot)
  return human_compare(x, pivot)   // human-driven operation: a Mechanical Turk task
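A runnable sketch of the same idea, with the human comparator simulated by an ordinary function; in a real system each comparison would be posted as a Mechanical Turk microtask (`human_quicksort` is our name, not from the slides):

```python
import random

def human_quicksort(items, compare):
    # Quicksort whose comparator may be answered by a crowd worker.
    # compare(x, pivot) returns True when x should precede pivot.
    if len(items) <= 1:
        return list(items)
    items = list(items)
    pivot = items.pop(random.randrange(len(items)))
    less, greater = [], []
    for x in items:
        # One comparator call (one potential microtask) per element.
        (less if compare(x, pivot) else greater).append(x)
    return human_quicksort(less, compare) + [pivot] + human_quicksort(greater, compare)

# Stand-in "human": compares numbers directly.
print(human_quicksort([3, 1, 2], lambda x, p: x < p))  # [1, 2, 3]
```

Each element is compared to the pivot exactly once per partition, which matters when every comparison costs real money and minutes of latency.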
47. XKCD CROWD-SORT
http://xkcd.com/1185/
48. PROBLEM TYPES
• Simple Problems
• Computational problems solved by using a single human computation task
• Complex Problems
• Computational problems solved by using a set of tasks organized according to a given workflow
• Hybrid Problems
• Computational problems solved by organizing human and automatic computation in one or more workflows
• Human Orchestration
• Tasks are coordinated by humans
• Automatic Orchestration
• Tasks are automatically coordinated by machines
• Hybrid Orchestration
• Humans and machines coordinate tasks
49. TYPICAL WORKFLOW
[Workflow: Experiment Design → Task Design (one per task) → Task Execution → Task Control → Output Aggregation and Analysis; iterate and improve]
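The iterate-and-improve loop can be sketched as plain control flow; every helper name here (`assign`, `execute`, `accept`, `aggregate`) is hypothetical, standing in for platform-specific code:

```python
def run_experiment(tasks, assign, execute, accept, aggregate, max_rounds=3):
    # Iterate-and-improve: the experiment design is fixed; execution,
    # control, and aggregation repeat until all tasks have results.
    results = {}
    for _ in range(max_rounds):
        pending = [t for t in tasks if t not in results]
        if not pending:
            break
        raw = execute(assign(pending))                        # task execution
        for task, answers in raw.items():
            good = [a for a in answers if accept(task, a)]    # task control
            if good:
                results[task] = aggregate(good)               # output aggregation
    return results
```

Tasks whose answers are all rejected by `accept` are simply re-posted in the next round, which is one simple form of "iterate and improve".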
50. SIMPLE PROBLEMS
51. TASK DESIGN
[Diagram: a Task is decomposed into μTasks. Design dimensions: Task UX (input objects, output objects, design, interface, operations), output aggregation and quality control, task routing, incentives, advertisement, and (requester) reputation management]
52. OPERATION TYPES
• A possible (non-exhaustive) list of human computation operation types includes:
• Data creation/modification
• Object Recognition/Identification/Detection
• Sorting (Clustering/Ordering)
• Natural Language Processing
• State Space Exploration
• Content Generation/Submission
• User preference/opinion elicitation
53. OBJECT RECOGNITION
Recognize one or several pre-specified or learned objects, together with their 2D positions in the image or 3D poses in the scene.
Qiang Hao, Rui Cai, Zhiwei Li, Lei Zhang, Yanwei Pang, Feng Wu, and Yong Rui.
"Efficient 2D-to-3D Correspondence Filtering for Scalable 3D Object Recognition"
54. IDENTIFICATION
• Recognize an individual instance of an object
• Identification of a specific person's face or fingerprint
• Identification of a specific vehicle
55. DETECTION
An image or text is analyzed to recognize a specific condition or anomaly.
56. CLUSTERING
The task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar, in some sense or another, to each other than to those in other groups (clusters).
Task for humans: define a (subjective) similarity measure to compare the input data, and group the objects into clusters based on it.
57. ORDERING
• Arranging items of the same kind, class, or nature in an ordered sequence, based on a particular criterion
• Task for humans: define a (subjective) evaluation criterion to compare the input data, and order the objects based on it
Altwaijry, H.; Belongie, S., "Relative ranking of facial attractiveness," Applications of Computer Vision (WACV), 2013 IEEE Workshop on
58. TASK UI AND INTERACTION
• Workers want to maximize their income and their reputation
• The UI is one of the most important aspects of the relationship with workers
• Prepare to iterate
• Ask the right questions
• Keep it short and simple; brief and concise
• Workers may not be experts: don’t assume a shared terminology
• Show examples
• Engage with the worker
• Attractiveness (worker’s attention & enjoyment)
• Workers also have intrinsic motivations => avoid boring stuff
59. OTHER DESIGN PRINCIPLES
• Text alignment
• Legibility
• Reading level: complexity of words and sentences
• Multicultural / multilingual
• Who is the audience (e.g. target worker community)
60. AGGREGATION
• Challenges:
• Outputs are noisy (lack of expertise)
• Humans are not always reliable (cheating)
• Cultural context may bias the answers
• Goal: an automatic procedure to merge micro-task results
• Assumptions:
• There exists a “true” answer
• Redundancy helps
• What to look for?
• Agreement, reliability, validity
61. WHAT IS TRUTH?
• Objective truth: exists freely or independently from a mind (as opposed to, e.g., ideas and feelings)
• Medical diagnosis, protein structure, number of birds...
• Cultural truth: shared beliefs of a group of people, often involving perceptual judgments
• Is the music sad? Is this image pornographic? Is this text offensive? ...
62. LATENT CLASS MODEL
• Observed: HIT outputs
• Latent (hidden): truth, user experience, task difficulty
• Often, the matrix is incomplete
• Ground truth may never be known

$$\begin{pmatrix} O_{11} & O_{12} & \cdots & O_{1N} \\ O_{21} & O_{22} & \cdots & O_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ O_{M1} & O_{M2} & \cdots & O_{MN} \end{pmatrix} \longrightarrow \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_M \end{pmatrix}$$

(rows: tasks; columns: workers; Y: solution)
63. MAJORITY VOTE
• Ask multiple labelers; keep the majority label as the “true” label
• Assumptions:
• The output that each worker independently generates depends on the true answer
• There is no prior information about which categories are more or less likely to be the true classification
• Quality is the probability of being correct

[Plate diagram: true answer Y_n generates the observed outputs O_nm; N tasks, M workers]
64. MAJORITY VOTE

$$Y_n = \arg\max_j P(Y_n = j \mid O)$$

$$Y_n = \arg\max_j \frac{\prod_{m=1}^{M} P(O_{n,m} = o_{n,m} \mid Y_n = j)\,P(Y_n = j)}{P(O)}$$

$$Y_n = \arg\max_j \prod_{m=1}^{M} P(O_{n,m} = o_{n,m} \mid Y_n = j)$$

(dropping the constant P(O) and assuming a uniform prior over answers)

$$Y_n = \arg\max_j\; (1-\varepsilon)^{\sum_{m=1}^{M}\mathbf{1}(O_{n,m}=j)}\cdot\varepsilon^{\sum_{m=1}^{M}\mathbf{1}(O_{n,m}\neq j)}$$

where n indexes the computation task, m the performer, j the candidate answer, and 1-ε is the probability of a correct answer.
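With a uniform prior and a symmetric error rate, the MAP rule reduces to counting votes; a minimal sketch:

```python
from collections import Counter

def majority_vote(outputs):
    # outputs: the list of worker answers o_{n,1..M} for one task n.
    # With a uniform prior and symmetric error rate, the MAP estimate
    # of Y_n is simply the most frequent answer (ties broken arbitrarily).
    return Counter(outputs).most_common(1)[0][0]

majority_vote(["cat", "cat", "dog"])  # "cat"
```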
65. MAJORITY VOTING AND LABEL QUALITY
• Quality is the probability of being correct

[Chart: quality of the majority vote (y axis, 0.2 to 1.0) versus number of labelers (x axis, 1 to 13)]
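A curve of this shape can be reproduced analytically: if each labeler is independently correct with probability p, the majority of n labelers (n odd) is correct with a binomial tail probability. A sketch, with p = 0.7 as an illustrative assumption:

```python
from math import comb

def majority_quality(p, n):
    # Probability that the majority of n independent labelers (n odd),
    # each correct with probability p, yields the correct binary label.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7, 9, 11, 13):
    print(n, round(majority_quality(0.7, n), 3))
```

As long as p > 0.5 the quality increases toward 1 with more labelers; below 0.5, adding labelers makes things worse.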
66. MEASURING MAJORITY
• Some statistics
• Percentage agreement
• Cohen’s kappa (2 raters)
• Fleiss’ kappa (any number of raters)
• But what if 2 say relevant, 3 say not?
• Use expert to break ties
• Collect more judgments as needed to reduce uncertainty
• Can we try and estimate quality?
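Cohen's kappa, for instance, corrects raw percentage agreement for the agreement two raters would reach by chance; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(a, b):
    # a, b: label sequences from two raters over the same items.
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n             # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n**2  # chance agreement
    if p_e == 1:
        return 1.0  # degenerate case: both raters use a single label
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement, 0 when agreement is no better than chance, and negative when raters agree less than chance would predict.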
67. HIDDEN FACTORS
• Majority vote works best when workers have similar quality
• But workers can make random guesses or make mistakes
and still agree by chance
• Worker Characteristics
• Expertise (e.g., bird identification)
• Bias (e.g., mother vs college students)
• Physical Conditions (e.g., fatigue)
• Task Characteristics
• Quality (e.g., blurry pictures)
• Difficulty (e.g., transcription of non-native speech)
68. INCORPORATING WORKER QUALITY
Objective: medical diagnosis by doctors
Model: doctors have different rates and types of errors.
• π_jl^(k) is the probability that doctor k declares a patient to be in state l when the true state is j
• n_il^(k) is the number of times clinician k gets response l from patient i

[Plate diagram: true answer Y_n and worker characteristics π_m generate the observed output O_nm; N tasks, M workers]
69. INCORPORATING WORKER QUALITY
• Solution: the Expectation-Maximization (EM) algorithm (Dawid & Skene, 1979)
• Estimate the confusion matrices AND the true state of each object simultaneously; EM iteratively:
• estimates the true state of each object by weighing the votes of the performers according to the current estimates of their quality (as given by the confusion matrices)
• re-estimates the confusion matrices based on the current beliefs about the true states of each patient.
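A compressed sketch of the idea for binary labels; note that it collapses each worker's full confusion matrix to a single accuracy and assumes a complete task-by-worker label matrix, both simplifications of Dawid & Skene's model (`em_truth` is our name):

```python
from math import exp, log

def em_truth(labels, n_iter=50):
    # labels[t][w] in {0, 1}: answer of worker w on task t (complete matrix).
    # Returns per-task posteriors P(truth = 1) and per-worker accuracies.
    T, W = len(labels), len(labels[0])
    mu = [sum(row) / W for row in labels]       # init: majority fraction
    acc = [0.5] * W
    for _ in range(n_iter):
        # M-step: worker accuracy = expected fraction of correct answers.
        acc = [sum(mu[t] * labels[t][w] + (1 - mu[t]) * (1 - labels[t][w])
                   for t in range(T)) / T for w in range(W)]
        acc = [min(max(a, 1e-6), 1 - 1e-6) for a in acc]
        # E-step: posterior via log-odds-weighted votes (uniform prior).
        ws = [log(a / (1 - a)) for a in acc]
        mu = [1 / (1 + exp(-sum((2 * labels[t][w] - 1) * ws[w]
                                for w in range(W))))
              for t in range(T)]
    return mu, acc
```

Unlike plain majority voting, this down-weights (and for accuracy below 0.5 even inverts) the votes of unreliable workers.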
70. INCORPORATING TASK DIFFICULTY
• EXAMPLE [Welinder 2010]
• HIT: select images containing at least one “duck”
• Competence varies with the bird image
• Workers have biases toward various mistakes
• Images vary in difficulty

[Plate diagram: true answer Y_n, worker characteristics π_m, and task difficulty β_n generate the observed output O_nm; N tasks, M workers]
71. QUALITY CONTROL
• An holistic problem
• It is not only about the workers performance
• Is the question well expressed?
• Is the UI understandable?
• You may think the worker is doing a bad job, but the same worker
may think you are a lousy requester (see reputation)
• Strategies
• Beforehand => qualification test, screening (by quality/competence), recruiting, training
• During => assess worker quality “as you go”
• After => accuracy metrics, filtering, weighting
• Still no success guaranteed!
72. QUALITY CONTROL
SCREENING
• Approval rate
• Typically built into human computation platforms
• Mechanical Turk recently introduced a Master qualification for
workers => few, and very “picky”, only for specific tasks
• Crowdflower Programmatic Gold
• It can be defeated
• Geographic restrictions / Workers community
• Also built-in
• Mechanical Turk: US / India
• Crowdflower: Mechanical Turk / Others
• White list / black list of workers
• For known superstars/spammers
• To be manually maintained
73. QUALITY CONTROL
QUALIFICATION TEST
• Prescreen workers' ability to do the task (accurately)
• AMT: assign qualification to workers
• Advantages
• Great tool for controlling quality
• Disadvantages
• Extra cost to design and implement the test
• May turn off workers, hurt completion time
• Refresh the test on a regular basis
• Hard to verify subjective tasks like judging relevance
• Try creating task-related questions to get workers familiar with the task before starting in earnest
74. QUALITY CONTROL
GOLD TESTING
• Two strategies
• Trap questions with known answers (“honey pots”)
• Measure inter-annotator agreement between workers
• An exploration-exploitation scheme:
• Explore: Learn about the quality of the workers
• Exploit: Label new examples using the quality
• Assign gold labels when benefit in learning better quality of worker
outweighs the loss for labeling a gold (known label) example
[Wang et al, WCBI 2011]
• Assign an already labeled example (by other workers) and see if it
agrees with majority [Donmez et al., KDD 2009]
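The honey-pot strategy can be sketched in Python. The batch layout, the field names, and the 20% gold ratio are illustrative assumptions for this sketch, not any platform's actual API.

```python
import random

def build_batch(real_tasks, gold_tasks, gold_ratio=0.2, seed=0):
    """Interleave trap questions ("honey pots") with real tasks."""
    rng = random.Random(seed)
    n_gold = max(1, int(len(real_tasks) * gold_ratio))
    batch = [{"task": t, "gold": None} for t in real_tasks]
    # Insert gold questions (known answers) at random positions.
    for question, answer in rng.sample(gold_tasks, n_gold):
        batch.insert(rng.randrange(len(batch) + 1),
                     {"task": question, "gold": answer})
    return batch

def worker_accuracy(answers, batch):
    """Fraction of gold questions a worker answered correctly."""
    gold = [(i, item["gold"]) for i, item in enumerate(batch)
            if item["gold"] is not None]
    correct = sum(1 for i, g in gold if answers[i] == g)
    return correct / len(gold)
```

Workers whose accuracy on the hidden gold questions falls below a threshold can then be filtered out or down-weighted during aggregation.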
75. GOLD TESTING
No significant advantage under “good conditions”
(balanced datasets, good worker quality)
10 labels per example
2 categories, 50/50
Quality range: 0.55:1.0
200 labelers
76. GOLD TESTING
Advantage under imbalanced datasets
10 labels per example
2 categories, 90/10
Quality range: 0.55:1.0
200 labelers
77. GOLD TESTING
Advantage with bad worker quality
5 labels per example
2 categories, 50/50
Quality range: 0.55:0.65
200 labelers
78. GOLD TESTING
10 labels per example
2 categories, 90/10
Quality range: 0.55:0.65
200 labelers
Significant advantage under “bad conditions”
(imbalanced datasets, bad worker quality)
79. QUALITY CONTROL
ADDITIONAL HEURISTICS
• Ask workers to rate the difficulty of a task
• Let workers justify answers
• Justification/feedback as quasi-captcha
• Should be optional
• Automatically verifying feedback was written by a person may be difficult
(classic spam detection task)
• Broken URL/incorrect object
• Leave an outlier in the data set
• Workers will tell you
• If somebody answers “excellent” for a broken URL => probably spammer
• Create cross-validating questions
• E.g. a worker says the picture does not contain people but tags somebody
80. QUALITY CONTROL
DEALING WITH BAD WORKERS
• Pay for “bad” work instead of rejecting it?
• Pro: preserve reputation, admit if poor design at fault
• Con: promote fraud, undermine approval rating system
• Use bonus as incentive
• Pay the minimum $0.01 and $0.01 for bonus
• Better than rejecting a $0.02 task
• If spammer “caught”, block from future tasks
• May be easier to always pay, then block as needed
81. • Work of many non-experts can be aggregated to approximate the answer of an expert
• However, the competence and expertise of the workers do matter
• E.g. knowledge-intensive, domain-specific tasks
• Experts are better at [Chi2006]
• generating better, faster and more accurate solutions
• detecting features and deeper structures in problems
• adding domain-specific and general constraints to problems
• self monitoring and judging the difficulty of the task
• choosing effective strategies
• actively seeking information and resources to solve problems
• retrieving domain knowledge with little cognitive effort.
TASK ROUTING
82. EXPERTISE DIMENSIONS
• Knowledge
• Implicit Knowledge (e.g. language, location)
• Topical Knowledge (e.g. flowers, fashion)
• Availability, Reliability and Trustworthiness
• Response time
• Percentage of accepted microtask executions
• Masters in AMT
• Soft Skills
• E.g. Attitude
83. STRATEGIES
• Push: the system controls the distribution of tasks
• The worker is passive
• Workers have strict preferences
• Allocation: the worker's expertise is known (or estimated)
• A coalition is a group of agents which cooperate in order to achieve a common task
• (Coalition Problem) Given ⟨A, H, T⟩, the coalition problem is to assign tasks t ∈ T to coalitions of agents C ⊆ A such that the total utility is maximized and the precedence order is respected
• Pull: workers can browse, visualize & search data
• The workers are active, and tend to choose tasks in which they have the most expertise, interest & understanding
• Advantage: more effective on platforms with high turn-over
• Disadvantage: coverage & completion time
• More in “Advertisement”
84. FINDING THE RIGHT CROWD
• Crowd selection by ranking the members of a social group
according to the level of knowledge that they have about a
given topic
[Bozzon2013b]
85. MAIN RESULTS
• Profiles are less effective than level-1 resources
• Resources produced by others help in describing each
individual’s expertise
• Twitter is the most effective social network for expertise
matching – sometimes it outperforms the other social networks
• Twitter most effective in Computer Engineering, Science,
Technology & Games, Sport
• Facebook effective in Locations, Sport, Movies & TV, Music
• LinkedIn never very helpful in locating expertise
Ground truth created through self-assessment: for each expertise need, users vote on a 7-point Likert scale; experts are those whose expertise is above average.
86. PICK-A-CROWD
Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. Pick-A-Crowd: Tell Me
What You Like, and I'll Tell You What to Do.
In: 22nd International Conference on World Wide Web (WWW 2013)
87. LIKE VS ACCURACY
88. INCENTIVES
“money, love, or glory”
T. W. Malone, R. Laubacher, and C. Dellarocas. Harnessing Crowds:
Mapping the Genome of Collective Intelligence. Working paper no.
2009-001, MIT Center for Collective Intelligence, Feb. 2009.
89. INCENTIVES
INTRINSIC VS. EXTRINSIC
People prefer activities where they can pursue three things:
• Autonomy: people want to have control over their work.
• Mastery: people want to get better at what they do.
• Purpose: people want to be part of something that is bigger than they are.
Intrinsic Motivations
• Enjoyment, desire to help out, etc.
Extrinsic Motivations
• Money, praise, promotion, preferment, the admiration of peers (social
rewards), etc.
Intrinsic motivations are typically more powerful than extrinsic
ones, but the two classes have a strong interplay
SUGGESTED VIEW http://www.ted.com/talks/dan_pink_on_motivation.html
90. MONETARY INCENTIVE VS. PERFORMANCE
• “Rational choice” in economic theory: Rational workers will choose to
improve their performance in response to a scheme that rewards such
improvements with financial gain
• Choking effect
• [Herzberg1987] financial incentives undermine actual performance
e.g., hampering innovations
• [Horton2010; Farber2008; Fehr2007] Workers may ignore rational
incentives to work longer when they have accomplished pre-set targets
• [Lazear2000] Auto glass factory, installing windshields
• Switched from time-rate wage (pay per hour) to piece-rate wage (pay per unit)
brought a 20% increase in productivity
• Performance-based pay schemes are a powerful tool for eliciting improved performance => but at what risk?
• [Gneezy2000] [Heyman2004] Under certain circumstances the provision of financial incentives can undermine “intrinsic motivation” (e.g., enjoyment, altruism), possibly leading to poorer outcomes
91. MONEY AND TROUBLE
• No expectation of financial reward
• effort motivated by other kinds of rewards
• e.g.
• social
• the non-profit Samasource contracts work to refugees
• Monetary compensation expected
• the anticipated financial value of the effort will be the driving
mechanism
• Careful: paying a little is often worse than paying nothing!
• Price commensurate with task effort
• Ex: $0.02 for yes/no answer
• Small pay now anchors expectations for future pay
• $0.02 bonus for optional feedback
92. MONEY AND TROUBLE
• Payment replaces internal motivation (paying kids to collect
donations decreased enthusiasm)
• Lesson: Be the Tom Sawyer (“how I like painting the fence”),
not the scrooge-y boss…
• Paying a little:
• No interest or slow response
• Paying a lot:
• People focus on the reward and not on the task
• On MTurk spammers routinely attack highly-paying tasks
93. EXPERIMENT: WORD PUZZLE
[MASON2005]
• Want to further investigate payment incentives
• Workers are shown a list of 15 possible words (not all of the words listed are in the puzzle)
• Select a word by clicking its first and last letter (if correct, it turns red)
• Two wage models: quota vs. piece rate
• Quota: paid for every puzzle successfully completed
• Piece rate: paid for every word found
• Pay levels: low, medium, high (and no pay)
• Per puzzle: $0.01, $0.05, $0.10
• Per word: $0.01, $0.02, $0.03
94. EXPERIMENT: WORD PUZZLE
[MASON2005]
• Payment incentives increase speed
95. EXPERIMENT: WORD PUZZLE
[MASON2005]
[Chart: accuracy (fraction of words found per puzzle) and cost per word under three conditions: no contingent pay, pay per puzzle, and pay per word. High accuracy per puzzle means low cost per word; pay per word gives low accuracy per puzzle, but workers find as many words as they can; with no contingent pay, intrinsic motivation (enjoyment) does the work.]
96. INCENTIVES
SOCIALIZATION AND PRESTIGE
• Public credit contributes to sense of participation
• Credit also a form of reputation
• e.g. Leaderboards (“top participants”) frequent motivator [Farmer 2010]
• Newcomers should have hope of reaching top
• Should motivate correct behavior, not just measurable behavior
• Whatever is measured, workers will optimize for this
• Pro:
• “free”
• enjoyable for connecting with one another – can share infrastructure across
tasks
• Cons:
• need infrastructure beyond a simple micro-task; need critical mass (for uptake and reward)
• social engineering is more complex than monetary incentives
• the anonymity of MTurk-like settings discourages this factor
97. INCENTIVES
ALTRUISM
• Contributing back (tit for tat): Early reviewers writing reviews
because read other useful review
• Effect amplified in social networks: “If all my friends do it…” or
“Since all my friends will see this…”
• Contributing to shared goal
• Help others who need knowledge (e.g. Freebase http://www.freebase.com/)
• Help workers (e.g. http://samasource.org/)
• Charity (e.g. http://freerice.com/)
• Pro
• “Free”
• Can motivate workers for a cause
• Cons
• Small workforce
98. INCENTIVES
PURPOSE OF WORK
• Contrafreeloading: rats and other animals prefer to “earn” their food
• Destroying work after production demotivates workers. [Ariely2008]
• Showing result of “completed task” improves satisfaction
• Workers enjoy learning new skills (an often-cited reason for MTurk participation)
• Design tasks to be educational
• DuoLingo: Translate while learning new language [vonAhn, duolingo.com]
• Galaxy Zoo, Clickworkers: Classify astronomical objects [Raddick2010;
http://en.wikipedia.org/wiki/Clickworkers]
• On MTurk [Chandler2010]
• Americans [older, more leisure-driven] work harder for “meaningful work”
• Indians [more income-driven] were not affected
• Quality unchanged for both groups
99. INCENTIVES
FUN
• Gamify the task (design details later)
• Examples
• ESP Game: Given an image, type the same word
(generated image descriptions)
• Phylo: aligned color blocks (used for genome
alignment)
• FoldIt: fold structures to optimize energy (protein
folding)
• Fun factors [Malone, 1982, 1980]:
• timed response,
• score keeping,
• player skill level,
• highscore lists,
• and randomness
100. ADVERTISEMENT
• Your task needs to be found!
• Mechanical Turk UI is very primitive
• Users constantly refresh the web page to find most recent HITs
• Quality of description is paramount!
• Clear title, useful keywords
• Tricks needed in order to promote tasks
• Workers pick tasks that have a large number of HITs or are recent [Chilton2010]
• VizWiz optimizations [Bingham2011] :
• Posts HITs continuously (to be recent)
• Makes big HIT groups (to be large)
101. EFFECT OF #HITS: MONOTONIC, BUT SUBLINEAR
• 10 HITs: 2% slower than 1 HIT
• 100 HITs: 19% slower than 1 HIT
• 1000 HITs: 87% slower than 1 HIT
• or: 1 group of 1000 HITs is 7 times faster than 1000 sequential groups of 1
102. REPUTATION MANAGEMENT
• Word of mouth effect
• Forums, alert systems
• Trust
• Pay on time?
• Fair rejections?
• Clear explanation if there is a rejection
• Opportunity
• Workers look for good tasks (time vs. reward)
• Experiments tend to go faster
• Announce forthcoming tasks (e.g. tweet)
103. OCCUPATIONAL HAZARDS
• Costs of requester and admin errors are often borne by workers
• Defective HITs, too short time to finish, etc.
• Worker’s rating can be affected due to such errors
• Staying safe online: phishing, scamming
• Some reports from Turker Nation: “Do not do any HITs that
involve: secret shopping, ….; they are scams”
• How to moderate such instances? (in a scalable way?)
• Employers who don’t pay
104. HELPING WORKERS?
• Augmenting M-Turk from the outside
• Few external Turking tools
• Building alternative human computation platforms?
• Offering workers legal protections (human rights)?
• Humans or machines?
• Legal responsibilities?
• Intellectual properties?
• Offering fair wage?
• Minimum wage? (or fair wage?)
105. COMPLEX PROBLEMS
106. COMPLEX PROBLEMS
• Sometimes the problem at hand is too complex to be managed
by a single task
• Examples:
• Text transcription / summarization
• Open descriptions
• Dynamic planning
• Need for orchestration of several tasks
• Possibly performed with the help of humans
107. CONTROLS
[Flow diagram with true/false branches]
108. LOGICAL UNITS
• Creation: generate / create; find; improve / edit / fix
• Quality control: vote for accept-reject; vote up / vote down to generate a rank; vote for best / select top-k
• Flow control: split task; aggregate
109. EXAMPLE: FREE-FORM ANSWERS
• Create-Vote pattern. Break task into two HITs
• “Create” HIT
• “Vote” HIT
• Vote HIT controls quality of Creation HIT
• Redundancy controls quality of Voting HIT
• Note: If “creation” very good, workers just vote “yes”
• Solution: Add some random noise (e.g. add typos)
[Diagram: Creation HIT (e.g. find a URL about a topic) → Voting HIT: correct or not?]
TurKit toolkit [Little et al., UIST 2010]: http://groups.csail.mit.edu/uid/turkit/
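The Create-Vote pattern above can be sketched in Python. The `create` and `vote` callables are hypothetical stand-ins for posting the two HIT types; a real implementation (as in TurKit) would collect worker answers asynchronously.

```python
def create_vote(question, create, vote, n_creators=3, n_voters=5):
    """Create-Vote pattern sketch: a creation HIT followed by a
    redundant voting HIT that picks the best candidate answer."""
    # Creation HIT: several workers each propose an answer.
    candidates = [create(question) for _ in range(n_creators)]
    # Voting HIT: redundant votes control the quality of each candidate.
    best, best_votes = None, -1
    for answer in candidates:
        votes = sum(1 for _ in range(n_voters) if vote(question, answer))
        if votes > best_votes:
            best, best_votes = answer, votes
    return best
```

Redundancy in the voting step controls the quality of the votes themselves, as noted above.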
110. EXAMPLE: FREE-FORM ANSWERS
• Create-Improve-Compare pattern. Break task into three HITs
• “Create” HIT
• “Improve” HIT
• “Compare” HIT
[Diagram: Creation HIT (e.g. describe the image) → Improve HIT (e.g. improve the description) → Compare HIT (voting): which is better?]
111.
version 1:
A parial view of a pocket calculator together with some
coins and a pen.
version 2:
A view of personal items a calculator, and some gold and copper coins, and a round tip pen,
these are all pocket
and wallet sized item used for business, writting, calculating prices or solving math problems
and purchasing items.
version 3:
A close-up photograph of the following items: A CASIO multi-function calculator. A ball point
pen, uncapped. Various coins, apparently European, both copper and gold. Seems to be a theme
illustration for a brochure or document cover treating finance, probably personal finance.
version 4:
…Various British coins; two of £1 value, three of 20p value and one of 1p value. …
version 8:
“A close-up photograph of the following items: A CASIO multi-function, solar
powered scientific calculator. A blue ball point pen with a blue rubber grip and the
tip extended. Six British coins; two of £1 value, three of 20p value and one of 1p
value. Seems to be a theme illustration for a brochure or document cover treating
finance - probably personal finance."
112. EXAMPLE: SOYLENT
• Word processor with crowd embedded [Bernstein2010]
• “Proofread paper”: Ask workers to proofread each paragraph
• Lazy Turker: Fixes the minimum possible (e.g., single typo)
• Eager Beaver: Fixes way beyond the necessary but adds
extra errors (e.g., inline suggestions on writing style)
• Find-Fix-Verify pattern
• Separating Find and Fix thwarts the Lazy Turker
• Separating Fix and Verify ensures quality
http://www.youtube.com/watch?v=n_miZqsPwsc
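The Find-Fix-Verify pattern can be sketched in Python. The three callables are hypothetical HIT stubs standing in for the worker stages, not Soylent's actual implementation; the agreement threshold of 2 finders is likewise an assumption.

```python
from collections import Counter

def find_fix_verify(paragraph, find, fix, verify, n_finders=5, min_agree=2):
    """Find-Fix-Verify sketch (after Bernstein et al., Soylent).

    find(paragraph) -> set of patch spans;
    fix(span) -> list of candidate rewrites;
    verify(candidates) -> best candidate.
    """
    # Find: independent workers flag patches; independent agreement
    # keeps only spans flagged by at least min_agree workers.
    counts = Counter(span for _ in range(n_finders) for span in find(paragraph))
    patches = [span for span, c in counts.items() if c >= min_agree]
    # Fix + Verify: gather candidate rewrites, then vote for the best one.
    return {span: verify(fix(span)) for span in patches}
```

Because Find and Fix are separate HITs, a Lazy Turker cannot satisfy the task with a single minimal edit, and the Verify vote filters out Eager Beaver damage.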
113.
Soylent, a prototype...
Independent agreement to identify patches
Randomize order of suggestions
114. HYBRID PROBLEMS
115. HYBRID PROBLEMS
THE BIGGER PICTURE
[2x2 diagram with axes “more people” and “more machines”: machines using people, e.g., human computation, vs. people using machines, e.g., collective action]
David De Roure
http://www.slideshare.net/davidderoure/social-machinesgss
116. KEY ISSUES
• The role of machine (i.e., algorithm) and humans
• use only humans?
• both?
• Who’s doing what?
• Quality control
• Optimization: What to crowdsource
• Scalability: How much to crowdsource
117. EXAMPLE
INTEGRATION WITH MACHINE LEARNING
• Crowdsourcing is cheap but not free
• Cannot scale to the web without help
• We need to know when and how to use machines along with humans
• Solution
• Build automatic classification models using crowdsourced data
• Humans label training data
• Use training data to build model
118. TRADE-OFF FOR MACHINE LEARNING MODELS
• Get more data
• Active Learning, select which unlabeled example to label
[Settles, http://active-learning.net/]
• Improve data quality
• Repeated Labeling, label again an already labeled
example [Sheng et al. 2008, Ipeirotis et al, 2010]
119. ITERATIVE TRAINING
• Use model when confident, humans otherwise
• Retrain with new human input => improve model => reduce
need for humans
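The loop above can be sketched in Python. The confidence threshold and both callables are illustrative assumptions; `model_predict` and `crowd_label` stand in for any trained classifier and any crowd platform.

```python
def hybrid_label(items, model_predict, crowd_label, threshold=0.9):
    """Use the model when confident, the crowd otherwise.

    model_predict(item) -> (label, confidence)
    crowd_label(item) -> label from human workers
    """
    labels, for_retraining = {}, []
    for item in items:
        label, conf = model_predict(item)
        if conf >= threshold:
            labels[item] = label              # model is confident enough
        else:
            human = crowd_label(item)         # ask the crowd
            labels[item] = human
            for_retraining.append((item, human))  # feed back into training
    return labels, for_retraining
```

Retraining on the human-labeled examples should raise model confidence over time and shrink the fraction of items routed to the crowd.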
120. HOW OFTEN CAN WE REFER TO THE CROWD?
• Interaction protocol
• Upfront: Ask all the B queries at once
• Iterative: Ask K queries to the crowd and use them to
improve the system. Repeat this B/K times
All Human Intelligence Tasks (HITs) are NOT equally difficult for the machine
• Measures used for selection
• Uncertainty: Asking hardest (most ambiguous) questions
• Explorer: Ask questions with potential to have largest impact
on the system
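The uncertainty measure can be sketched in Python for a binary classifier: send the model's most ambiguous examples to the crowd. `model_proba` is a hypothetical stand-in returning the probability of the positive class.

```python
def pick_queries_for_crowd(unlabeled, model_proba, budget_k):
    """Uncertainty-based selection sketch: the K items whose predicted
    probability is closest to 0.5 are the most ambiguous, so they are
    the ones worth spending crowd budget on."""
    by_uncertainty = sorted(unlabeled, key=lambda x: abs(model_proba(x) - 0.5))
    return by_uncertainty[:budget_k]
```

In the iterative protocol, this selection is repeated B/K times, re-ranking after each retraining round.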
121. EXAMPLE
HYBRID IMAGE SEARCH
Yan, Kumar, Ganesan. CrowdSearch: Exploiting Crowds for Accurate Real-time Image Search on Mobile Phones. Mobisys 2010.
122. EXAMPLE
HYBRID DATA INTEGRATION
Schema matching: generate plausible matches, then ask users to verify.
• paper = title, paper = author, paper = email, paper = venue
• conf = title, conf = author, conf = email, conf = venue

Source table:
paper            | conf
Data integration | VLDB-01
Data mining      | SIGMOD-02

Target table:
title        | author | email  | venue
OLAP         | Mike   | mike@a | ICDE-02
Social media | Jane   | jane@b | PODS-05

Verification question to the crowd: “Does attribute paper match attribute author?” [Yes / No / Not sure]
McCann, Shen, Doan: Matching Schemas in Online Communities. ICDE, 2008
123. EXAMPLE
CROWDQ: CROWDSOURCED QUERY UNDERSTANDING
• Understand the meaning of a keyword query
• Build a structured (SPARQL) query template
• Answer the query
over Linked Open
Data
Gianluca Demartini, Beth Trushkowsky, Tim Kraska, and Michael Franklin. CrowdQ: Crowdsourced Query Understanding. In: 6th Biennial Conference on Innovative Data Systems Research (CIDR 2013)
Example queries: “Indiana Jones – Harrison Ford”, “Back to the Future – Michael J. Fox”, “Forrest Gump – actors”
125. CROWD-SOURCING DB SYSTEMS
How can crowds help databases?
• Fix broken data
• Entity Resolution, inconsistencies
• Add missing data
• Subjective comparison
How can databases help crowd apps?
• Lazy data acquisition
• Game the workers market
• Semi-automatically create user interfaces
• Manage the data sourced from the crowd
Existing systems mainly academic
• CrowdDB (Berkeley, ETH)
• Qurk (MIT)
• Scoop (Stanford)
127. CROWDDB
GOAL: crowd-source comparisons, missing data
• SQL with extensions to the DML and the Query Language
129. [screenshot slide]
130. UI EXAMPLES
[Screenshots of auto-generated CrowdDB worker interfaces: “Fill out the missing company data!” (Name: IBM; Headquarters address), “Fill out the missing professor data” (Name: Carey; E-Mail; Department), “Find a professor and fill in her data” (Name; E-Mail; Department), and “Fill out the missing department data” (Name; Phone)]
131. CROWDDB
USER INTERFACE VS. QUALITY
[Query plans and the worker interfaces they generate for the query Professor ⋈ Department with name = “Carey” and join predicate p.dep = d.name: an MTProbe(Professor) followed by an MTJoin(Dep); an MTJoin(Professor) followed by an MTProbe(Dep); and a combined MTProbe(Professor, Dep). Each plan yields different forms, and thus different answer quality.]
132. CROWDSEARCHER
• Given that crowds spend time on social networks…
• Why not use social networks and Q&A websites as additional human computation platforms?
• Example: a search task
[Bozzon2012][Bozzon2013b]
[Architecture: a query interface feeds a search execution engine with a human interaction management layer; dedicated access interfaces connect to local sources, conventional search engines, and humans on social networks, Q&A sites, and crowdsourcing platforms, from which query answers are assembled]
http://crowdsearcher.search-computing.org
133. SEARCH ON SOCIAL NETWORKS
[Deployment options on a social/crowd platform: an embedded application (via embedding), an external or standalone application (via the platform API), or the platform's native behaviours; in each case a generated query template is answered by the community/crowd]
138. MODEL
• Support for several types of task operations
• Like, Comment, Tag, Classify, Add, Modify, Order, etc.
• Several strategies for
• Task splitting: the input data collection is too complex relative to the
cognitive capabilities of users.
• Task structuring: the query is too complex or too critical to be
executed in one shot.
• Task routing: a query can be distributed according to the values of
some attribute of the collection
• Output aggregation
• Platform/community assignment
• a task can be assigned to different communities or social platforms
based on its focus
139. REACTIVE CONTROL
• Controlling crowdsourcing tasks is a fundamental issue
• Cost
• Time
• Quality
• A conceptual framework for modeling crowdsourcing
computations and control requirements
• Reactive Control Design
• Active Rule programming framework
• Declarative rule language
• A reactive execution environment for requirement enforcement
and reactive execution
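An Event-Condition-Action rule of this kind can be sketched in Python. This is an illustration of the idea, not the framework's actual rule language; the event name, field names, and re-planning action are all assumptions.

```python
def make_rule(event, condition, action):
    """Event-Condition-Action rule sketch for reactive crowd control:
    when `event` occurs and `condition` holds, run `action`."""
    def fire(task):
        if task.get("event") == event and condition(task):
            action(task)
    return fire

# Example rule: re-plan a task when it exceeds its time budget.
replanned = []
rule = make_rule(
    "answer_received",
    lambda t: t["elapsed_s"] > t["budget_s"],   # condition: over time budget
    lambda t: replanned.append(t["id"]),        # action: mark for re-planning
)
```

A reactive execution environment would evaluate such declarative rules on every incoming event (answer received, timeout, budget spent) to enforce cost, time, and quality requirements.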
140.-145. RULE EXAMPLE
[Slides 140-145: screenshots of declarative control rules]
146. WORKFLOWS WITH MECHANICAL TURK
[Diagram: the requester posts HIT groups to Mechanical Turk; results from the individual HITs are collected in a CSV file and the data is exported for use]
147. CROWDFORGE
Map-Reduce framework for crowds
[Kittur et al, CHI 2011]
My Boss is a Robot (mybossisarobot.com), Nikki Kittur (CMU) + Jim Giles (New Scientist)
• Easy to run simple, parallelized tasks.
• Not so easy to run tasks in which turkers improve on or validate each other's work.
148. CROWDWEAVER
[Kittur et al. 2012]
CrowdWeaver: Visually Managing Complex Crowd Work. Aniket Kittur, Susheel Khamkar, Paul André, Robert E. Kraut (Carnegie Mellon University)
[Figure 1: the CrowdWeaver workflow management interface. (A) A workflow consisting of human tasks, e.g., “create (news leads)”, and machine tasks (e.g., divide, permute). (B) The Task Summary pane details the selected task, with the “news lead” field…]
149. TURKOMATIC
• Crowd creates workflows [Kulkarni et al., CHI 2011]:
• Turkomatic interface accepts task
requests written in natural
language
• Ask workers to decompose task into
steps (Map)
• Can step be completed within 10
minutes?
• Yes: solve it.
• No: decompose further
(recursion)
• Given all partial solutions, solve big
problem (Reduce)
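The recursive decomposition can be sketched in Python. All four callables are hypothetical HIT stand-ins; the 10-minute judgment becomes the `too_big` predicate.

```python
def turkomatic_solve(task, too_big, decompose, solve, merge):
    """Recursive divide-and-conquer sketch (after Turkomatic):
    workers decide whether a task is small enough to solve directly;
    if not, other workers decompose it (Map), and the partial
    solutions are merged back together (Reduce)."""
    if not too_big(task):
        return solve(task)                       # small enough: solve it
    subtasks = decompose(task)                   # Map: split into steps
    partials = [turkomatic_solve(t, too_big, decompose, solve, merge)
                for t in subtasks]               # recurse on each step
    return merge(task, partials)                 # Reduce: combine results
```

With string tasks and toy callables, a long input is split until each piece fits, solved, and merged.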
151. EVALUATION
• Tasks:
• Producing a written essay in response to a prompt: “please
write a five-paragraph essay on the topic of your choice”
• Solving an example SAT test “Please solve the 16-
question SAT located at http://bit.ly/SATexam”
• Payment: $0.10 to $0.40 per HIT
• Each “subdivide” or “merge” HIT received answers within 4
hours; solutions to the initial task were completed within 72
hours
• Essay: the final essay (about “university legacy admissions”) displayed a reasonably good understanding of the topic, yet the writing quality was mixed
• SAT: the task was divided into 12 subtasks (containing 1-3
questions); the score was 12/17
152. TURKIT
Human Computation Algorithms on Mechanical Turk [Little2010]
• Arrows indicate the flow of information.
• Programmer writes
2 sets of source code:
• HTML files for web servers
• JavaScript executed by TurKit
• Output is retrieved via a JavaScript
database.
• TurKit: Java, using Rhino to interpret JavaScript code and E4X to handle XML results from MTurk
• IDE: Google App Engine (GAE)
153. CRASH-AND-RERUN PROGRAMMING MODEL
• Observation: local computation is cheap, but external calls cost money
• Managing state over a long-running program is challenging
• Examples: computer restarts? Errors?
• Solution: store state in a database (just in case)
• If an error happens, just crash the program and re-run by
following the history in DB
• Throw a “crash” exception; the script is automatically re-run.
• New keyword “once”:
• Remove non-determinism
• Don’t need to re-execute an expensive operation (when re-
run)
154. EXAMPLE: QUICK SORT
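The human-powered quicksort from the TurKit example can be sketched in Python. Here `compare` is any callable returning True when the first item should precede the second; in TurKit it would post a comparison HIT to the crowd.

```python
def crowd_quicksort(items, compare):
    """Quicksort where the comparator is (conceptually) a human worker
    answering "which of these two is better?"."""
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    before = [x for x in rest if compare(x, pivot)]      # crowd says: precedes pivot
    after = [x for x in rest if not compare(x, pivot)]   # crowd says: follows pivot
    return crowd_quicksort(before, compare) + [pivot] + crowd_quicksort(after, compare)
```

With crash-and-rerun, each comparison would be wrapped in `once` so a restarted script reuses earlier crowd answers.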
155. CROWD-POWERED SEARCH
• Users ask questions on Twitter
• A hybrid system provides answers
• Workers are used to
• label tweets as “rhetorical question” or not (median 3.02 mins)
• produce responses to the question (median 77.4 mins)
• vote on responses (median 82.1 mins)
A Crowd-Powered Socially Embedded Search Engine. Jin-Woo Jeong, Meredith
Ringel Morris, Jaime Teevan, Daniel Liebling. ICWSM 2013
Median time => 162.5 minutes
Cost => $0.95 per tweet
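The three worker stages above can be sketched as a pipeline. Worker behavior is stubbed, and all names and signatures here are illustrative rather than taken from the paper:

```javascript
// Three-stage crowd pipeline: (1) filter rhetorical questions,
// (2) produce candidate responses, (3) vote on the best response.

function majority(labels) {
  const counts = {};
  for (const l of labels) counts[l] = (counts[l] || 0) + 1;
  return Object.keys(counts).reduce((a, b) => counts[a] >= counts[b] ? a : b);
}

function answerTweet(tweet, workers) {
  // Stage 1: is this a real question? (median 3.02 min in the study)
  const isQuestion = majority(workers.map(w => w.label(tweet))) === "question";
  if (!isQuestion) return null;
  // Stage 2: produce candidate responses (median 77.4 min)
  const candidates = workers.map(w => w.respond(tweet));
  // Stage 3: vote on the best response (median 82.1 min)
  const votes = workers.map(w => w.vote(candidates));
  return majority(votes);
}

// Stub crowd: three workers with fixed behavior
const workers = [0, 1, 2].map(i => ({
  label: t => "question",
  respond: t => "response " + i,
  vote: cs => cs[1],             // everyone prefers the second candidate
}));

console.log(answerTweet("Anyone know a good pizza place?", workers));
// -> "response 1"
```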
157. WHAT LIES AHEAD OF US?
What would it take for us to be proud of our children growing up
to be crowd workers*?
*any work that could be sent down a wire
• Ethics, entangled with methods and tools
• Should workers be treated as undifferentiated and
discardable?
• Should requesters be viewed as distant and wielding
incredible power to deny payment or harm reputations?
• Work is complex, creative, and interdependent
• Could a crowd compose a symphony?
A.Kittur et al. The future of crowd work. CSCW '13.
ACM, New York, NY, USA, 1301-1318.
159. IMPROVE
WORKER EXPERIENCE
• Reputation system for workers
• More than financial incentives
• Education? Recognition? Status?
• Recognize worker potential (badges)
• Pay workers for their expertise
• Steering User Behavior with Badges [WWW2013]
• Train less-skilled workers (tutoring systems)
• Can we facilitate this process and deliver work suited to
each person’s expertise, all the way along that process?
160. IMPROVE WORK
• Promote workers to management roles
• Create gold labels
• Manage other workers
• Make task design suggestions (first-pass validation)
• Career trajectory (based on reputation):
1. Untrusted worker
2. Trusted worker
3. Hourly contractor
4. Employee
161. IMPROVE WORK
TASK RECOMMENDATION
• Content-based recommendation
• Find similarities between worker profile and task
characteristics
• Collaborative Filtering
• Make use of preference information about tasks (e.g.
ratings) to infer similarities between workers
• Hybrid
• A mix of both
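A content-based recommender of the kind described above can be sketched as cosine similarity between a worker's skill vector and each task's requirement vector. Field names and weights are invented for illustration:

```javascript
// Content-based task recommendation: rank tasks by how well their
// keyword-weight vectors match the worker's profile vector.

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (const k of new Set([...Object.keys(a), ...Object.keys(b)])) {
    const x = a[k] || 0, y = b[k] || 0;
    dot += x * y; na += x * x; nb += y * y;
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

function recommend(worker, tasks) {
  return tasks
    .map(t => ({ task: t, score: cosine(worker.skills, t.requirements) }))
    .sort((a, b) => b.score - a.score);
}

const worker = { skills: { translation: 0.9, italian: 0.8, audio: 0.1 } };
const tasks = [
  { id: "transcribe-audio", requirements: { audio: 1.0, english: 0.5 } },
  { id: "translate-recipe", requirements: { translation: 1.0, italian: 0.7 } },
];
console.log(recommend(worker, tasks)[0].task.id); // "translate-recipe"
```

A collaborative-filtering variant would instead compare workers by their task ratings; the hybrid approach combines both scores.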
162. IMPROVE PLATFORMS
• What is a platform?
• Know your crowd: Model workers
• Work-flows
• Enforce Quality
• Ubiquitous crowdsourcing
163. HOW TO BUILD SOCIAL SYSTEMS
AT SCALE?
[Figure: quadrant diagram by Dave De Roure. Axes: more machines
(horizontal) vs. more people (vertical). Labels: Conventional
Computation, Big Data, Big Compute, Social Networking, online R&D
e-infrastructure, and Social Machines, marked “The Future!”.]
Credits: Dave De Roure
http://www.slideshare.net/davidderoure/social-machinesgss
165. PAPERS
• [Grier2005] D. A. Grier. When Computers Were Human. Princeton University Press, 2005.
• [Turing1950] A. M. Turing. Computing Machinery and Intelligence.
http://www.loebner.net/Prizef/TuringArticle.html
• [VonAhn2005] L. von Ahn. Human Computation. PhD thesis, Carnegie Mellon University.
http://reports-archive.adm.cs.cmu.edu/anon/2005/abstracts/05-193.html
• [Mason2009] Winter Mason and Duncan J. Watts. 2009. Financial incentives and the
“performance of crowds”. In Proceedings of the ACM SIGKDD Workshop on Human
Computation (HCOMP ’09). ACM, New York, NY, USA, 77-85.
• [Lazear2000] Lazear, E. P. Performance pay and productivity. American Economic Review,
90, 5 (Dec 2000), 1346-1361.
• [Gneezy2000] Gneezy, U. and Rustichini, A. Pay enough or don’t pay at all. Q. J. Econ.,
115, 3 (2000), 791-810.
• [Heyman2004] Heyman, J. and Ariely, D. Effort for Payment: A Tale of Two Markets.
Psychological Science, 15, 11 (2004), 787-793.
• [Herzberg1987] Herzberg, F. One More Time: How Do You Motivate Employees? Harvard
Business Review (September-October 1987), 5-16.
• [Kittur2013] Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron
Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. The future of crowd work. In
Proceedings of the 2013 conference on Computer Supported Cooperative Work (CSCW
'13). ACM, New York, NY, USA, 1301-1318.
166. PAPERS
• [Farber2008] Farber. Reference-dependent preferences and labor supply: The case of
New York City taxi drivers. American Economic Review, 2008.
• [Fehr2007] Fehr and Goette. Do workers work more if wages are high?: Evidence from a
randomized field experiment. American Economic Review, 2007
• [Chandler2010] Chandler and Kapelner. Breaking Monotony with Meaning: Motivation in
Crowdsourcing Markets, 2010
• [Farmer2010] Farmer and Glass, Building Web Reputation Systems, O’Reilly 2010
• [Horton2010] Horton and Chilton: The labor economics of paid crowdsourcing. EC 2010
• [Quinn2011] Alexander J. Quinn and Benjamin B. Bederson. 2011. Human computation: a
survey and taxonomy of a growing field. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1403-1412.
• [Snow2008] Snow, Rion and O'Connor, Brendan and Jurafsky, Daniel and Ng, Andrew.
Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural
Language Tasks, Proceedings of the 2008 Conference on Empirical Methods in Natural
Language Processing, October 2008, Honolulu, Hawaii.
• [Chi2006] M. Chi. Two approaches to the study of experts’ characteristics. In
K. A. Ericsson, N. Charness, P. J. Feltovich, and R. R. Hoffman, editors, The Cambridge
Handbook of Expertise and Expert Performance, pages 21-30. Cambridge University
Press, 2006.
167. PAPERS
• [Kittur2012] Aniket Kittur, Susheel Khamkar, Paul André, and Robert Kraut.
2012. CrowdWeaver: visually managing complex crowd work. In Proceedings of
the ACM 2012 conference on Computer Supported Cooperative Work (CSCW
'12). ACM, New York, NY, USA, 1033-1036.
• [ImageNet] http://www.image-net.org/about-publication
• Yan, Kumar, Ganesan, “CrowdSearch: Exploiting Crowds for Accurate Real-time
Image Search on Mobile Phones”, MobiSys 2010
• Ipeirotis, Analyzing the Mechanical Turk Marketplace, XRDS 2010
• Wang, Faridani, Ipeirotis, Estimating Completion Time for Crowdsourced Tasks
Using Survival Analysis Models. CSDM 2010
• Chilton et al, Task search in a human computation market, HCOMP 2010
• Bingham et al, VizWiz: nearly real-time answers to visual questions, UIST 2011
168. PAPERS
• Huang et al., Toward Automatic Task Design: A Progress Report, HCOMP 2010
• Quinn, Bederson, Yeh, Lin: CrowdFlow: Integrating Machine Learning with
Mechanical Turk for Speed-Cost-Quality Flexibility
• Parameswaran et al.: Human-assisted Graph Search: It's Okay to Ask Questions,
VLDB 2011
• Mitzenmacher, An introduction to human-guided search, XRDS 2010
• Marcus et al, Crowdsourced Databases: Query Processing with People, CIDR
2011
• Raykar, Yu, Zhao, Valadez, Florin, Bogoni, and Moy. Learning from crowds. JMLR
2010.
• Mason and Watts, Collective problem solving in networks, 2011
• Dellarocas, Dini and Spagnolo. Designing Reputation Mechanisms. Chapter 18
in Handbook of Procurement, Cambridge University Press, 2007
169. TUTORIALS
• Ipeirotis (WWW2011)
• http://www.slideshare.net/ipeirotis/managing-crowdsourced-human-computation
• Omar Alonso, Matthew Lease (SIGIR 2011)
• http://www.slideshare.net/mattlease/crowdsourcing-for-information-retrieval-principles-methods-and-application
• Omar Alonso, Matthew Lease (WSDM 2011)
• http://ir.ischool.utexas.edu/wsdm2011_tutorial.pdf
• Gianluca Demartini, Elena Simperl, Maribel Acosta
(ESWC2013)
• https://sites.google.com/site/crowdsourcingtutorial/
• Bob Carpenter and Massimo Poesio
• http://lingpipe-blog.com/2010/05/17/lrec-2010-tutorial-modeling-data-annotation/
170. TUTORIALS
• Omar Alonso
• http://wwwcsif.cs.ucdavis.edu/~alonsoom/crowdsourcing.html
• Alex Sorokin and Fei-Fei Li
• http://sites.google.com/site/turkforvision/
• Daniel Rose
• http://videolectures.net/cikm08_rose_cfre/
• A. Doan, M. J. Franklin, D. Kossmann, T. Kraska (VLDB 2011)
• Crowdsourcing Applications and Platforms: A Data
Management Perspective.
• List by Matt Lease http://ir.ischool.utexas.edu/crowd/
171. BLOGS AND ONLINE
RESOURCES
• [B_Tibbets2011] http://soa.sys-con.com/node/1996041
• [B_Wired2006]
http://www.wired.com/wired/archive/14.06/crowds.html
• [B_NextWeb2013]
http://thenextweb.com/shareables/2013/01/16/verizon-finds-developer-outsourced-his-work-to-china-so-he-could-surf-reddit-and-watch-cat-videos/
• [B_Chronicle2009]
http://chronicle.com/blogs/wiredcampus/duke-professor-uses-crowdsourcing-to-grade/7538
• [B_DemoTurk]
http://behind-the-enemy-lines.blogspot.com/2010/03/new-demographics-of-mechanical-turk.html
172. BOOKS, COURSES, AND
SURVEYS
• E. Law and L. von Ahn. Human Computation. Morgan &
Claypool Synthesis Lectures on Artificial Intelligence and
Machine Learning, 2011
• http://www.morganclaypool.com/toc/aim/1/1
• S. Ceri, A. Bozzon, M. Brambilla, P. Fraternali, S. Quarteroni. Web
Information Retrieval. Springer.
• Omar Alonso, Gabriella Kazai, and Stefano
Mizzaro. Crowdsourcing for Search Engine Evaluation: Why
and How.
• To be published by Springer, 2011.
• Deepak Ganesan. CS691CS: Crowdsourcing - Opportunities
& Challenges (Fall 2010). UMass Amherst
• http://www.cs.umass.edu/~dganesan/courses/fall10
Credits: Matt Lease
http://ir.ischool.utexas.edu/crowd/
173. BOOKS, COURSES,
AND SURVEYS
• Matt Lease. CS395T/INF385T: Crowdsourcing:
Theory, Methods, and Applications (Spring 2011). UT Austin.
• http://courses.ischool.utexas.edu/Lease_Matt/2011/Spring/INF385T
• Yuen, Chen, King: A Survey of Human Computation
Systems, SCA 2009
• Quinn, Bederson: A Taxonomy of Distributed Human
Computation, CHI 2011
• Doan, Ramakrishnan, Halevy: Crowdsourcing Systems on
the World-Wide Web, CACM 2011
• Uichin Lee
• http://mslab.kaist.ac.kr/twiki/bin/view/CrowdSourcing/
• Jérôme Waldispühl, McGill University
• http://www.cs.mcgill.ca/~jeromew/comp766/
Editor’s notes
Of course, this “cheap labor” framing is not something to be proud of; we will return to it later.
[Blinder 2006, Horton 2013] Blinder argues that about 20% of current American jobs could be sent down a wire, including tasks like programming, accounting, marketing, and even machine operation. Recent evidence for crowd work in particular suggests that its volume will be roughly 454 billion dollars per year; that is 91 billion hours per year, employing about 45 million full-time workers. What might this mean? Think of current workers having the ability to become full-time contractors, able to control their jobs and careers as they desire. At the other end of the spectrum, we get far more flexibility in time, like a stay-at-home dad who uses his skills while the baby is sleeping.
Alexis Claude de Clairaut found a model that could be solved numerically. He recruited two friends for 9 months and divided up the calculations of the orbit, mathematically tracing the comet. Human computers soon discovered the benefits of dividing the task and specializing their skills: for Adam Smith [1723-1790], the division of labor produces the “greatest improvement in productive powers of labor.” Human computers could reduce the cost of computation either by increasing the speed of calculation or by reducing errors in calculation. Traditionally this meant hierarchical control (effective in the military); a more visionary alternative was mechanical control: Charles Babbage [1792-1871]'s Difference Engine. The Difference Engine was invented because Babbage was frustrated by the limitations of (human) computers: a machine combining the additions and subtractions needed to interpolate a function. The First World War required large numbers of human computers (map grids, surveying aids, navigation tables, and artillery tables); most computers were women, and many were college educated. During the Great Depression and the Second World War, the WPA (Works Progress Administration) Mathematical Tables Project was required to use labor-intensive methods in order to employ the greatest number of workers. Most of these computers knew little about arithmetic, so the project developed ways of organizing the group and devised mathematical methods that were self-checking. Workers gladly did the hard labor of research calculation in the hope that they might become part of the scientific community. In the end, they were replaced by a new electronic machine that took the place, and the name, of those who were once the computers.
http://en.wikipedia.org/wiki/Frederick_Winslow_Taylor#Managers_and_workers “It is only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with management alone.” [9] Workers were supposed to be incapable of understanding what they were doing; according to Taylor this was true even for rather simple tasks.
This is a famous game called “Where’s Wally?”: identify Wally within this image, given the description provided with it.
It does not take into account the fact that workers can make random guesses or mistakes and still agree by chance [13]. This is especially problematic if the majority of the workers are novices (who systematically make the same kinds of errors) or spammers (who generate answers at random). Additionally, many of the factors that influence the outcome of the computation are not captured by the simple model; for instance, each worker may have different biases.
1. Initialize by aggregating labels for each object (e.g., use majority vote). 2. Estimate error rates for workers (using the aggregate labels). 3. Re-estimate the aggregate labels (using the error rates, weighting worker votes according to quality). 4. Keep labels for “gold data” unchanged. 5. Go to step 2 and iterate until convergence.
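The iteration above, in the spirit of Dawid and Skene's EM estimator, can be sketched for binary labels as follows. This is a deliberate simplification (worker quality collapsed to a single accuracy number, hard label assignments instead of probabilities), and all names are illustrative:

```javascript
// Simplified EM-style label aggregation: majority vote -> worker accuracy
// -> accuracy-weighted vote, iterated. Gold labels stay fixed throughout.
function aggregate(votes, gold = {}, iters = 10) {
  // votes: { objectId: { workerId: 0|1 } }
  const workers = new Set();
  for (const obj in votes) for (const w in votes[obj]) workers.add(w);

  // Step 1: initialize labels by (unweighted) majority vote
  const labels = {};
  for (const obj in votes) {
    const vs = Object.values(votes[obj]);
    labels[obj] = gold[obj] ??
      (vs.reduce((s, v) => s + v, 0) * 2 >= vs.length ? 1 : 0);
  }

  const accuracy = {};
  for (let i = 0; i < iters; i++) {
    // Step 2: estimate each worker's accuracy against current labels
    for (const w of workers) {
      let right = 0, total = 0;
      for (const obj in votes) {
        if (w in votes[obj]) {
          total++;
          if (votes[obj][w] === labels[obj]) right++;
        }
      }
      accuracy[w] = total ? right / total : 0.5;
    }
    // Step 3: re-estimate labels, weighting votes by accuracy (gold fixed)
    for (const obj in votes) {
      if (obj in gold) continue;
      let score = 0;
      for (const w in votes[obj]) {
        score += (votes[obj][w] ? 1 : -1) * (accuracy[w] - 0.5);
      }
      labels[obj] = score >= 0 ? 1 : 0;
    }
  }
  return { labels, accuracy };
}
```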
Not that simple: how do you build the data?
Mention active learning
Writing a news story • Programming software • Composing a symphony
The first challenge is to decompose the task: if you were planning a conference, you might split up finding a venue from reviewing papers; if you were Google, you might split up different parts of the web for different machines to process. Next, you need to assemble the right teams of people: if you were conference chair, you need to find respectable academics or coercible friends to be on the committee; if you were Google, you assign different machines different roles, like a master node coordinating a MapReduce process. Finally, you have to execute workflows, which may have multiple stages and decision processes: for a conference we have multistage review processes; in computing, we have algorithms: for example, the output of one MapReduce process may get passed to another.
Badges can influence and steer user behavior on a site, leading both to increased participation and to changes in the mix of activities a user pursues on the site.