The document describes a study that tested the effects of implicit sharing of notes in collaborative sensemaking. 68 participants formed 34 teams, each assigned to one of two conditions: no implicit sharing, where each person saw only their own notes, and implicit sharing, where notes were automatically shared between partners. Both conditions allowed explicit sharing via chat. The teams completed a practice session and a 60-minute task to identify a serial killer from case documents. Performance, workload, and information exchanged were measured and compared between conditions to understand the impact of implicit note sharing.
Introduction | What is sensemaking?
Motivation | Contribution | Hypothesis | Experiment | Results | Take Away
Organization Science (Weick, 1995)
Education & Learning Science (Schoenfeld, 1992)
Intelligent Systems (Jacobson, 1991; Savolainen, 1993)
Information Systems (Griffith, 1999)
Communications (Dervin et al., 2003)
Sensemaking is the process of searching for a representation and encoding data in a representation to answer task-specific questions.
- Russell, D. M., Stefik, M. J., Pirolli, P., & Card, S. K. The cost structure of sensemaking. INTERCHI '93.
Sensemakers change representations either to reduce the time taken to perform the task or to improve a cost vs. quality tradeoff.
- Russell, D. M., Stefik, M. J., Pirolli, P., & Card, S. K. The cost structure of sensemaking. INTERCHI '93.
Introduction | What is collaborative sensemaking?
Collaborative sensemaking extends beyond the creation of individual understandings of information to the creation of a shared understanding of information from the interaction between individuals.
Introduction | Why research collaborative sensemaking?
Federal agencies knew of the impending attack but did not communicate with each other, failing to connect the dots. (Collaboration failure)
- 9/11 Investigation Report
Introduction | Expectations from implicit sharing
H1. Implicit sharing of notes will improve task performance.
H2a. Implicit sharing of notes will be rated as more useful than no implicit sharing.
H2b. Implicit sharing of notes will result in increased usage of collaborative features compared to no implicit sharing.
Introduction | SAVANT Prototype
…pile anyone's Stickies. Mouse cursors are independent of each other, while dependencies between Stickies are handled by the server on a first-come-first-served basis. The server updates the interface every second.
We created two versions of SAVANT for this study. In the implicit sharing condition, Stickies in the Analysis Space are automatically shared as described above: there is no private workspace for analysis, only a public one. In the no implicit sharing condition, partners only see their own Stickies in the Analysis Space: there is no public workspace, only private ones for each analyst. The chat box is available in both conditions to support explicit sharing.
…cold (unresolved) cases, and one current (active) case. Each of the cold cases included a single document with a summary of the crime: victim, time, method, and witness interviews. Four of these six cold cases were "serial killer" cases. These four had a similar crime pattern (e.g., killed by a blunt instrument). The active case consisted of nine documents: a cover sheet, coroner's report, and witness and suspect interviews. Additional documents included three bus route timetables and a police department organization chart. The documents were available through the SAVANT document library and were split between the two participants such that each had access to 3 cold cases (2 serial killer …
Figure 2. The Analysis Space showing Stickies that are implicitly shared between analysts (color-coded by user), connections between
Stickies via arrows, and piles of multiple Stickies. Explicit sharing is supported via the chat box at the bottom left.
…information, thereby reducing their workload. On the other hand, shared workspaces might increase communication costs [19]. Seeing partners' activity might divert attention from one's own thoughts and increase the need for explicit discussion of process and data, especially when shared insights are connected to unshared data [13]. Since the direction of impact is unclear, we pose two research questions:
RQ1. How will implicit sharing of notes affect participants' cognitive workload?
RQ2. How will the availability of implicit sharing affect the amount of information exchanged via explicit channels?
…pane are for viewing and reading crime case reports, witness reports, testimonials, and other documents. A network diagram visualizes connections between documents based on commonly identified entities like persons, locations, and weapon types. The Document Space also provides a map of the area where crimes and events were reported and a timeline to assist in tracking events over time. Users can highlight and create annotations in the text of documents, locations on the map, and events in the timeline.
Such annotations automatically appear in the Analysis Space, an area for analysts to iteratively make and reorganize their notes until they see emerging patterns that lead to …
Figure 1. The Document Space showing (clockwise, from top-left) the directory of crime case documents, a tabbed reader pane for reading case documents, a visual graph of connections based on common entities in the dataset, a map to identify locations of crimes and events, and a timeline to track events.
Introduction | H1. Task Performance

H1 proposed that pairs would perform better when implicit sharing was available than when it was not available. To test this hypothesis, we conducted mixed model ANOVAs, using clue recall and clue recognition as our dependent measures. In these models, participant was nested within pair.

Clue recall, p = 0.01
Clue recognition, p = 0.06
No significant difference in serial killer identification between the implicit sharing and no implicit sharing conditions (Chi-Square [1, 68] = 0.57, p = 0.45). …sharing manually did not improve answer accuracy in [13], but sharing knowledge implicitly in a small experiment did increase answer accuracy in [19].

Figure 3. (a) Task performance, (b) Perceived usefulness of Stickies and Analysis Space, and (c) Number of connections and piles made in a session, each by interface condition. Error bars represent standard error.
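A minimal sketch of the kind of mixed model analysis this slide describes, using simulated data (not the study's) and statsmodels' MixedLM. A random intercept per pair accounts for participants being nested within pairs; with two participants per pair, participant-within-pair variation is absorbed by the residual. All numbers and names below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data shaped like the study: 34 pairs (68 participants),
# 17 pairs per condition, one clue-recall score per participant.
rng = np.random.default_rng(7)
n_pairs = 34
pair_id = np.repeat(np.arange(n_pairs), 2)            # 2 participants per pair
condition = np.repeat(["implicit", "none"], n_pairs)  # first 17 pairs implicit, last 17 none
pair_effect = rng.normal(0, 0.5, n_pairs)[pair_id]    # shared variance within a pair
recall = (3.5 * (condition == "implicit") + 2.0 * (condition == "none")
          + pair_effect + rng.normal(0, 0.8, 2 * n_pairs))
df = pd.DataFrame({"pair": pair_id, "condition": condition, "recall": recall})

# Random intercept per pair models the non-independence of partners.
result = smf.mixedlm("recall ~ condition", df, groups=df["pair"]).fit()
print(result.params["Intercept"])          # estimated mean recall, implicit condition
print(result.params["condition[T.none]"])  # estimated drop without implicit sharing
```

This treats pairs, not individuals, as the independent clusters, which is the reason "participant nested within pair" matters: partners' scores are correlated, so a plain ANOVA over 68 independent observations would overstate the effective sample size.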
Introduction | H2. User Experience

Sticky utility, p < 0.001
Analysis Space utility, p < 0.001
The combination of channels was rated higher.
Introduction | H2. User Experience

~2× connections
~3× piles
~2× manipulations
Visibility increased adoption.
Introduction | Implicit Sharing & Explicit Sharing
“The chat was easily the most helpful because it allowed us to communicate and tell each other specifics about the case. The Stickies were very useful also because they allowed us to make connections between the information we both had independent of talking with each other. [Stickies] allowed us to work more efficiently than wasting both of our time.”
(P8, Male)
“I used the Stickies as jumping-off points for conversations with my partner - I would see her Sticky and then ask her to fill in some details that she may have skipped over since she had access to certain documents that I did not.”
(P15, Female)
Introduction | Stickies as Visual Metaphor
“The Stickies enabled a connection between my partner and I, we could see each other’s train of thoughts and methods of organization. I used the connecting lines for the Stickies to show myself and my partner the connections that I was seeing.”
(P27, Female)
Introduction | Inter-Organization Sharing is Tricky
UX Research Contribution
Instead of testing one whole black box, one should test each feature separately.
- Effects of Visualization & Note-taking on Sensemaking. Goyal, N., Leshed, G., & Fussell, S. R. ACM CHI 2013.
Sensemaking Contribution
NLP + user-generated insights in the cloud => learn connections + create recommendations.
- Effects of Implicit Sharing on Collaborative Sensemaking. Goyal, N., Leshed, G., Cosley, D., & Fussell, S. R. ACM CHI 2014.
Theoretical Contribution
But beware of automated sharing and recommendations: tunnel vision*.
- *Weick, K. 1995. Sensemaking in Organisations. London: Sage; Effects of Implicit Sharing on Collaborative Sensemaking. Goyal, N., Leshed, G., Cosley, D., & Fussell, S. R. ACM CHI 2014.
Hi! My name is Nitesh Goyal. I’d like to present our work on the effects of implicit sharing in collaborative analysis.
In this work, we wanted to do two things:
Design an interface to support implicit sharing of notes for distributed collaborative analysis.
Empirically test implicit sharing in our interface with an experiment using crime data, where participants would act as analysts.
This work was done in collaboration with Gilly Leshed, Dan Cosley, and Susan Fussell at Cornell University.
The term ‘sensemaking’ has been used in various disciplines such as organizational science (Weick, 1995), education and learning sciences (Schoenfeld, 1992), communications (Dervin et al., 2003), intelligent systems (Jacobson, 1991; Savolainen, 1993), and information systems (Griffith, 1999). The common thread in the various definitions of sensemaking is that sensemaking is about meaning generation and understanding.
In the field of HCI, sensemaking has focused on how users understand large, complex information spaces or large document collections (Russell et al., 1993). When interacting with large amounts of information, people create representations such as maps, diagrams, and tables to organize information in order to make sense of it. Therefore, sensemaking is the cyclic process of encoding information into external representations to answer complex, task-specific questions. Russell et al. show that sensemakers change representations either to reduce the time taken to perform the task or to improve a cost vs. quality tradeoff.
Prior research presents complex tools that have been tested only in their entirety for usability. It is hard to know how the different parts of such tools interact with each other. So that is our first challenge.
Collaborative analysis is a complex problem in which one has to iteratively forage and make sense of data while considering multiple solutions, and the stakes can be critical.
Sometimes it can work out. Other times not: for example, during the Boston Marathon Bombing, enthusiasts got online on Reddit and collaboratively tried to solve the case. They ended up falsely accusing two persons.
Instead of finding the globally correct solution, in which a culprit satisfied multiple parameters, they ended up focusing on a locally correct solution, in which the falsely accused satisfied only a few parameters, and so they failed. As designers, our goal should be to encourage users to find the globally correct solution.
One way of doing this is to help the users leverage each others’ insights to help find the globally correct solution.
Now, this comes up again and again, especially in crime and intelligence analysis. For example, during 9/11, the agencies failed to share and leverage each other’s insights to prevent the attack.
Going back to the Boston Marathon Bombing, the ideal course of action would have been for the immigration officer who let one of the accused fly out of the US to let the agent in Moscow know about the accused’s movements.
Unfortunately, reality is far from ideal. No documented trail of explicit information sharing was found. The immigration officer did make notes about the accused, but these notes are now believed not to have been shared.
In summary, most of the previous work has either not been studied in controlled settings or has looked only at explicit sharing. We wanted to know whether implicit sharing would be a game changer.
When no implicit sharing is available, one must forage and analyze to get insights into the data that are documented.
And then one must overcome challenges like organizational policies, lack of incentives, and nervousness in order to share all these insights manually: all the insights, at the right time.
So it requires a cognitive step to overcome challenges before sharing.
In implicit sharing, we wanted to remove this cognitive attention requirement.
So, In implicit sharing, one must still forage and analyze to get insights into the data that are documented.
But sharing is automatically done by the system, without an explicit move by the human. Now, logically speaking: given such a system, which condition would perform better at solving a problem together?
We believe that implicitly shared information will help collaborating partners in a team perform better.
So, to answer our hypothesis and research questions, we created SAVANT, a collaborative analysis prototype tool based on existing research and interviews with analysts. SAVANT consists of two spaces: the Document Space on the left and the Analysis Space on the right.
More importantly, it enables one to highlight text and create insights that are then sent to the Analysis Space automatically.
All these insights go as Sticky notes into the Analysis Space, where they may be seen by your collaborator. You can also chat with your collaborator, as shown at the bottom left.
These Sticky notes may be connected with each other or piled one on top of another. You can also view and manipulate your partner’s Sticky notes, shown in an alternative color; that is, you can work with your partner’s Stickies but cannot edit or delete them.
We afforded implicit sharing by enabling users to view each other’s Sticky notes, which appear automatically as they are generated. Implicit sharing could be removed by not showing the collaborator’s Sticky notes. Participants were able to chat with each other and manually share information.
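The ownership rules described here (partners can view, move, connect, and pile each other's Stickies, but only the creator can edit or delete one) can be sketched as a simple permission check. The class, field, and function names below are illustrative assumptions, not SAVANT's actual API.

```python
from dataclasses import dataclass

@dataclass
class Sticky:
    owner: str
    text: str

def can(user: str, action: str, sticky: Sticky) -> bool:
    """Permission rules as described: any collaborator may view, move,
    connect, or pile a Sticky, but only its creator may edit or delete it."""
    if action in ("view", "move", "connect", "pile"):
        return True
    if action in ("edit", "delete"):
        return user == sticky.owner
    return False

note = Sticky(owner="analyst_a", text="Victim last seen on bus 7 at 3:45 pm")
print(can("analyst_b", "move", note))   # True: partners can work with Stickies
print(can("analyst_b", "edit", note))   # False: only the creator can edit
```

Keeping edit/delete restricted to the owner while leaving spatial actions open is what lets partners build on each other's notes without destroying them.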
We created a distributed collaborative analysis experiment where we tested implicit sharing against no implicit sharing.
68 undergraduate U.S. students were recruited through campus flyers and paid $22.50 for the study. They were randomly assigned to pairs, resulting in 34 teams.
These teams were randomly assigned to one of two conditions: no implicit sharing (i.e., chat only: you could still create and manipulate your own Stickies, but you did not see your partner’s) or implicit sharing (you could chat and create and manipulate both your own and your partner’s Stickies), with 17 teams in each condition.
They first performed a practice task on paper, which explained what they should be looking for when identifying a serial killer, based on tips given by professional analysts.
Then, they were each seated at two 25-inch monitors and trained on using the analysis tool with the features available in their condition. They were told that they had one hour to find the name of the serial killer and the associated clues and cases.
Afterwards, they completed a self-written report about the crimes, clues, and suspects to be arrested: they listed the name of the serial killer and all the clues they could recall.
After that, they filled in a survey about their experience and their performance at clue recognition.
SAVANT was tested in a controlled experiment with fictitious crime data used in previous work. We used 7 crime cases. Each case had at least some factual information, like location, time, and weapon type.
And at least some interview data about suspects, like a victim’s boyfriend saying that the victim called him on her cellphone from the bus around 3:45 pm.
Some cases had more than one document, leading to a total of 20 documents, which were divided between collaborators.
Each collaborator had exclusive access to 8 documents; 4 documents about bus route information and a cover sheet for a shared case were shared.
These 20 case documents mentioned 40 suspects in total.
Of these, there was only 1 serial killer, who had committed crimes in 4 of the 7 cases and was also seen in a fifth case. This resembles a hidden-profile task, except that not all the data is held by a single person.
I’ll share the results of this experiment later on.
If you look at the graph on the right, on average 3.5 clues were recalled with implicit sharing but only 2 were recalled with no implicit sharing. There was a strong positive trend in clue recognition as well: on average 3.2 clues were recognized in the survey when implicit sharing was available, as opposed to 2.44 when it was not.
So while teams performed better at identifying clues, which are short-term goals, there was no significant difference in the longer-term goal of serial killer identification when implicit sharing was available.
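The serial killer identification comparison reported in the deck was a chi-square test (Chi-Square [1, 68] = 0.57, p = 0.45). A minimal sketch of such a test with scipy follows; the 2x2 counts are hypothetical, chosen only so they sum to 68, and are not the study's actual data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: correct vs. incorrect identification by condition.
table = [[20, 14],   # implicit sharing: correct, incorrect
         [17, 17]]   # no implicit sharing: correct, incorrect
chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-Square [{dof}, N=68] = {chi2:.2f}, p = {p:.2f}")
```

With one degree of freedom (a 2x2 table), a small chi-square statistic like the study's 0.57 corresponds to a p-value far above 0.05, hence no significant difference in killer identification.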
If you look at the graph on the right, on average Stickies were rated 4.2 on a 5-point Likert scale with implicit sharing but only 2.9 with no implicit sharing. Similarly, the ratings were 3.8 and 2.3 for the Analysis Space’s utility.
It is interesting to notice that not only were the implicit sharing designs rated higher when implicit sharing was available, but the combination of the implicit and explicit channels together was also rated significantly higher.
Based on the interface use logs, participants interacted with and used the design features more when their actions, and the results of those actions, were visible to the partner. Almost twice as many connections, almost three times as many piles, and almost twice as many manipulations of any sort (reading, editing, or moving) were made when implicit sharing was available. We are not claiming that unlimited pile usage is beneficial, but we wanted to see whether users would actually find value in these features and use them more, and they clearly did.
We wanted to know whether, in the context of this experiment, cognitive workload increased due to implicit sharing on top of explicit sharing, and we found no evidence for that using NASA-TLX scores.
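NASA-TLX combines six subscale ratings (mental demand, physical demand, temporal demand, performance, effort, frustration); in the unweighted "Raw TLX" variant the overall workload score is simply their mean. A minimal sketch follows; the study's exact scoring variant is not stated here, and the ratings below are made up for illustration.

```python
# Raw TLX: unweighted mean of the six subscale ratings (each 0-100).
def raw_tlx(ratings: dict) -> float:
    scales = ["mental", "physical", "temporal", "performance", "effort", "frustration"]
    assert set(ratings) == set(scales), "need exactly the six NASA-TLX subscales"
    return sum(ratings[s] for s in scales) / len(scales)

workload = raw_tlx({"mental": 70, "physical": 10, "temporal": 55,
                    "performance": 40, "effort": 60, "frustration": 35})
print(workload)  # 45.0
```

Comparing such per-participant scores across the two conditions is what "no evidence of increased workload" refers to: the condition means did not differ significantly.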
There could be many reasons, but an important one could be that while implicit sharing made more information available to be parsed, it also took less effort to share that information. These effects balanced each other out.
Next, we wanted to see whether implicit sharing would adversely affect the explicit sharing of information, by measuring the word count in the chat. We found no support for that either.
One reason could be that explicit sharing was the main driving force and implicit sharing acted as an aid rather than a replacement.
In fact, implicit sharing focuses explicit sharing on more useful things.
So, for example:
“The chat was easily the most helpful because it allowed us to communicate and tell each other specifics about the case. The Stickies were very useful also because they allowed us to make connections between the information we both had independent of talking with each other. [Stickies] allowed us to work more efficiently than wasting both of our time.”
In other words, implicit sharing might decrease the need to share explicitly: supplementary information could be shared through Stickies implicitly, focusing the explicit channel on specific information.
On the other hand, implicit sharing might increase explicit sharing and help partners focus on other details, by acting as an aid that reminds them to look for pertinent information they might otherwise forget or ignore when parsing their own dataset independently.
Stickies provided a strong visual metaphor. Several participants mentioned that implicitly shared Stickies helped them “make connections” and also added value “by comparing information” or “cross-referencing information” visually, by placing one Sticky against another, and promoted awareness:
Stickies were not just insights about the data; they were also about how one felt about those insights and about the data itself. This is important for understanding, for example, the value of trust in one’s data and in one’s interpretation of it.
Secondly, remember that we studied undergraduate students on a limited task with a limited dataset. As data grows over time, technologies like NLP would be useful to help us decide how to implicitly share data as well.
Next, I’ll briefly go through some designs that exist today to help collaborators analyze together. However, they all require taking explicit actions if you want to share information.
And what directions we can take in future to improve collaborative analysis.
The first big takeaway of the talk is that we should enable implicit sharing as a supporting channel alongside the explicit communication channel, to overcome the limitations of explicit sharing and to focus it.
We measured the task performance of the collaborators by counting the number of clues identified and by serial killer identification.
The self-written reports at the end of the task were where they wrote down the name of the serial killer, the associated cases, and the clues they could recall.
After that, they filled in a survey about clue recognition, which included multiple-choice questions like this one, where only one correct answer existed.
They also reported their user experience with the interface using 5-point Likert scale questions. Besides perceived utility, we also measured their actual interface usage based on user logs generated by SAVANT.
Collaborative Analysis is a complex problem where one has to iteratively forage and make sense of data while considering multiple solutions and it can be critical.
And, there are multiple reasons why such sharing does not happen between collaborators.
Firstly, the data held by the collaborators could be private. For example, crime cases and Electronic Medical Records are private to the point that collaborators are not even aware that such records exist.
Secondly, organizational policies themselves might restrict sharing through differential access control, furthering the lack of data and analysis sharing between organizations that probably should be collaborating much more closely.
Cabrera & Cabrera (2002) discuss the multiple costs of exchanging information from a social-dilemma perspective.
Collaborators should believe that the information they have is useful to others.
They should also believe that if they do not share it, the information will not otherwise be available.
Also, collaborators might simply be nervous about sharing their incomplete notes and thoughts.
Cabrera and Cabrera suggest two ways of getting around this.
First, they suggest reducing the cost of contributing by providing incentives to share, or increasing the perceived value of sharing by tilting the pay-off function.
Building on that, some tools have since been developed, like AnalyticStream in 2012, that enable sharing information using recommendations made by collaborators on a shared analysis problem.