Todd Carpenter's presentation at the Library Assessment Conference 2014, held at the University of Washington in Seattle, WA, in August 2014. During this presentation, Todd covered the output of Phase One of NISO's alternative metrics assessment initiative.
1. When assessing impact, be sure you're comparing digital apples to digital apples: Report of Phase 1 of NISO New Assessment Initiative
Todd Carpenter
Executive Director, NISO
August 4, 2014
2. About
• Non-profit industry trade association accredited by ANSI
• Mission of developing and maintaining technical standards related to information, documentation, discovery and distribution of published materials and media
• Volunteer driven organization: 400+ contributors spread out across the world
• Responsible (directly and indirectly) for standards like ISSN, DOI, Dublin Core metadata, DAISY digital talking books, OpenURL, MARC records, and ISBN
16. Steering Committee
• Euan Adie, Altmetric
• Amy Brand, Harvard University
• Mike Buschman, Plum Analytics
• Todd Carpenter, NISO
• Martin Fenner, Public Library of Science (PLoS) (Chair)
• Michael Habib, Reed Elsevier
• Gregg Gordon, Social Science Research Network (SSRN)
• William Gunn, Mendeley
• Nettie Lagace, NISO
• Jamie Liu, American Chemical Society (ACS)
• Heather Piwowar, ImpactStory
• John Sack, HighWire Press
• Peter Shepherd, Project COUNTER
• Christine Stohn, Ex Libris
• Greg Tananbaum, SPARC (Scholarly Publishing and Academic Resources Coalition)
17. Alternative Assessment Initiative Phase 1 Meetings
October 9, 2013 - San Francisco, CA
December 11, 2013 - Washington, DC
January 23-24, 2014 - Philadelphia, PA
Round of 1-on-1 interviews - March/April 2014
Phase 1 report published in June 2014
18. Meetings' General Format
• Collocated with other industry meetings
• Morning: lightning talks, post-it brainstorming
• Afternoon: discussion groups
– X
– Y
– Z
– Report back/react
• Live streamed (video recordings are available)
19. Meeting Lightning Talks
• Expectations of researchers
• Exploring disciplinary differences in the use of social media in scholarly communication
• Altmetrics as part of the services of a large university library system
• Deriving altmetrics from annotation activity
• Altmetrics for Institutional Repositories: Are the metadata ready?
• Snowball Metrics: Global Standards for Institutional Benchmarking
• International Standard Name Identifier
• Altmetric.com, Plum Analytics, Mendeley reader survey
• Twitter Inconsistency
"Lightning" by snowpeak is licensed under CC BY 2.0
22. SF Meeting Discussions
• Business Use cases
– Publishers want to serve authors, make money
– People don't value a standard, they value something that helps them
– … Couldn't identify a logical standard need that actors in the space would value, and best practices are of interest
• Quality Data science
– Themes: context, validation, provenance, quality, description metadata
– We'll never get to the point where assessment can be done without a human in the loop, but discovery and recommendation can
• Definitions
– Define "ALM" and "Altmetrics"
– Map the landscape
– We'll never get to the point where assessment can be done without a human in the loop, but discovery and recommendation can
23. DC Meeting Discussions
• Business and Use Cases
• Discovery – metrics only get generated if material is discovered
• Qualitative vs. Quantitative
• Identifying Stakeholders and their Values
– stakeholders in outcomes / stakeholders in process of creating metrics
– shared values but tensions
– branding
• Definitions/Defining Impact
– metrics and analyses
– what led to success of citation?
– how to be certain we are measuring the right things
• Future Proofing
– what won't change
– impact - hard to establish across disciplines
24. Philly Meeting Discussions
• Definitions
– Define life cycle of scholarly output and associated metrics
– Qualitative versus Quantitative aspects - what is possible to define here
– Consider other aspects of these data collections
• Standards
– Develop definitions (what is a download? what is a view? see the counting sketch after this slide)
– Differentiate between scholarly impact versus popular/social use
– Define sources/characteristics for metrics (social, commercial, scholarly)
• Data Integrity
– Counter biases/gaming
– Association with credible entities - e.g. an ORCID iD vs. a Gmail account
– Reproducibility is key
– Everyone needs to be at the table to establish overall credibility
• Use cases (3X)
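The Standards bullet above asks "what is a download? what is a view?" because the same raw usage log can produce different numbers depending on which counting rule is applied, which is exactly the digital-apples-to-digital-apples problem in the title. The sketch below is a minimal illustration, not part of the NISO report: it compares a naive request count with a COUNTER-style double-click filter, and the 30-second window, the event log, and the example identifier are all assumptions chosen purely for illustration.

```python
from datetime import datetime, timedelta

# Illustrative only: one hypothetical event log, two candidate definitions of a
# "download". Each event is (user_id, item_id, timestamp).
events = [
    ("u1", "doi:10.9999/example", datetime(2014, 8, 4, 9, 0, 0)),
    ("u1", "doi:10.9999/example", datetime(2014, 8, 4, 9, 0, 12)),  # double-click
    ("u1", "doi:10.9999/example", datetime(2014, 8, 4, 10, 30, 0)),
    ("u2", "doi:10.9999/example", datetime(2014, 8, 4, 9, 5, 0)),
]

def raw_count(events):
    """Definition A: every logged request counts as a download."""
    return len(events)

def filtered_count(events, window=timedelta(seconds=30)):
    """Definition B: COUNTER-style double-click filtering -- repeat requests by
    the same user for the same item within `window` count only once.
    (The 30-second window is an assumption for this sketch, not a NISO rule.)"""
    last_seen = {}
    total = 0
    for user, item, ts in sorted(events, key=lambda e: e[2]):
        key = (user, item)
        if key not in last_seen or ts - last_seen[key] > window:
            total += 1
        last_seen[key] = ts
    return total

print(raw_count(events))       # 4
print(filtered_count(events))  # 3 -- same log, different "download" count
```

Without an agreed definition, two providers reporting on the identical log could legitimately publish 4 and 3 downloads for the same item, which is why the working groups treated shared event definitions as a prerequisite for comparable metrics.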
27. 30 One-on-One Interviews
Takeaways:
Problems with the term "Altmetrics"
Don't conflate discovery, social interactions, and assessment
Be broad in what counts as an output
Be clear about item-level identification
Consistency in the methodologies used to calculate metrics
Linking
Alignment with identifying/naming conventions for new research output forms
28. Potential work themes
Definitions
Application to types of research outputs
Discovery implications
Research evaluation
Data quality and gaming
Grouping, aggregating, and granularity
Context
Adoption
29.-36. Potential work themes (build slides): slides 29 through 36 repeat the same list, highlighting one theme per slide in turn; slide 36 adds Promotion as a further theme.
37. Alternative Assessment Initiative Phase 2
Presentations of Phase 1 report (June 2014)
Prioritization Effort (June - Aug 2014)
Project approval (Sept 2014)
Working group formation (Oct 2014)
Consensus Development (Nov 2014 - Dec 2015)
Trial Use Period (Dec 2015 - Mar 2016)
Publication of final recommendations (June 2016)
38. Alternative Assessments of our Assessment Initiative
White paper downloaded 3,599 times in 60 days
21 substantive comments received
120 in-person and virtual participants at the meetings
These three meetings attracted more than 400 RSVPs
Goal was to generate about 40 ideas; in total, more than 250 were generated
Recordings of project work downloaded more than 10,000 times
More than 440 direct tweets using the #NISOALMI hashtag
Five articles in traditional news publications
15 blog posts about the initiative