This presentation was given by Todd Carpenter, Executive Director of NISO, and Nettie Lagace of NISO on June 25, 2016, during an ALA session devoted to altmetrics.
1. Standards and Recommended Practices to Support Adoption of Altmetrics
American Library Association Annual Conference
Orlando, FL
June 25, 2016
2. About
• Non-profit industry trade association accredited by ANSI
• Mission of developing and maintaining technical standards related to information, documentation, discovery, and distribution of published materials and media
• Volunteer-driven organization: 400+ contributors spread out across the world
• Responsible (directly and indirectly) for standards like ISSN, DOI, Dublin Core metadata, DAISY digital talking books, OpenURL, MARC records, and ISBN
13. Community Feedback on Project Idea Themes
[Chart: survey responses (n=118) rating the importance of project idea themes on a scale from "Unimportant" to "Very important"]
17. Caveats - important
• Citations, usage, and altmetrics are ALL potentially important and potentially imperfect
• Please don't use altmetrics as an uncritical proxy for scholarly impact – quantitative and qualitative information must both be considered
• Data quality and indicator construction are key factors in the evaluation of specific altmetrics (read as: this is important – garbage in, garbage out!)
18. NISO Altmetrics Working Group A
Charge:
Development of specific definitions for alternative assessment metrics – This working group will come up with specific definitions for the terms commonly used in alternative assessment metrics, enabling different stakeholders to talk about the same thing. This work will also lay the groundwork for the other working groups.
19. NISO Altmetrics Working Group A
Charge:
Descriptions of how the main use cases apply to and are valuable to the different stakeholder groups – Alternative assessment metrics can be used for a variety of use cases, from research evaluation to discovery. This working group will try to identify the main use cases and the stakeholder groups to which they are most relevant, and will also develop a statement about the role of alternative assessment metrics in research evaluation.
20. Process
• Discussion, research, discussion, research!
• WG A extensively studied the altmetrics literature and other communications
• Discussed in depth the perspectives of various stakeholders and their requirements for these new evaluation measures
• Iterations!
• Needed to write an agreed-upon definition to use consistently across all parties – it can't be narrow
21. What is Altmetrics? Definition
Altmetrics is a broad term that encapsulates the digital collection, creation, and use of multiple forms of assessment that are derived from activity and engagement among diverse stakeholders and scholarly outputs in the research ecosystem.
The inclusion in the definition of altmetrics of many different outputs and forms of engagement helps distinguish it from traditional citation-based metrics, while at the same time leaving open the possibility of their complementary use, including for purposes of measuring scholarly impact.
However, the development of altmetrics in the context of alternative assessment sets its measurements apart from traditional citation-based scholarly metrics.
22. Use Cases
Very important! Developed eight personas and identified three themes:
Showcase achievement: Indicates stakeholder interest in highlighting the positive achievements garnered by one or more scholarly outputs.
Research evaluation: Indicates stakeholder interest in assessing the impact or reach of research.
Discovery: Indicates stakeholder interest in discovering or increasing the discoverability of scholarly outputs and/or researchers.
28. Glossary (of course!)
• Activity. Viewing, reading, saving, diffusing, mentioning, citing, reusing, modifying, or otherwise interacting with scholarly outputs.
• Altmetric data aggregator. Tools and platforms that aggregate and offer online events as well as derived metrics from altmetric data providers, for example, Altmetric.com, Plum Analytics, PLOS ALM, ImpactStory, and Crossref.
• Altmetric data provider. Platforms that function as sources of online events used as altmetrics, for example, Twitter, Mendeley, Facebook, F1000Prime, Github, SlideShare, and Figshare.
• Attention. Notice, interest, or awareness. In altmetrics, this term is frequently used to describe what is captured by the set of activities and engagements generated around a scholarly output.
29. Glossary (much more...)
• Engagement. The level or depth of interaction between users and scholarly outputs, typically based upon the activities that can be tracked within an online environment. See also Activity.
• Impact. The subjective range, depth, and degree of influence generated by or around a person, output, or set of outputs. Interpretations of impact vary depending on its placement in the research ecosystem.
• Metrics. A method or set of methods for purposes of measurement.
• Online event. A recorded entity of online activities related to scholarly output, used to calculate metrics.
30. Glossary (much more...)
• Scholarly output. A product created or executed by scholars and investigators in the course of their academic and/or research efforts. Scholarly output may include but is not limited to journal articles, conference proceedings, books and book chapters, reports, theses and dissertations, edited volumes, working papers, scholarly editions, oral presentations, performances, artifacts, exhibitions, online events, software and multimedia, compositions, designs, online publications, and other forms of intellectual property. The term scholarly output is sometimes used synonymously with research outputs.
• Traditional metrics. The set of metrics based upon the collection, calculation, and manipulation of scholarly citations, often at the journal level. Specific examples include raw and relative (field-normalized) citation counts and the Journal Impact Factor.
• Usage. A specific subset of activity based upon user access to one or more scholarly outputs, often in an online environment. Common examples include HTML accesses and PDF downloads.
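Purely as an illustrative reading of these definitions (not something the NISO documents specify), the glossary terms can be treated as a small data model: an online event records an activity from a data provider around an identified scholarly output, and metrics are derived from collections of such events. All class, field, and function names below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class ScholarlyOutput:
    """A product of scholarly work identified by a persistent identifier."""
    persistent_id: str   # e.g. a DOI such as "10.1234/example" (placeholder)
    output_type: str     # "journal article", "dataset", "software", ...

@dataclass
class OnlineEvent:
    """A recorded online activity related to a scholarly output, used to calculate metrics."""
    output: ScholarlyOutput
    activity: str        # "view", "save", "mention", "cite", "reuse", ...
    source: str          # the altmetric data provider, e.g. "Twitter" or "Mendeley"
    occurred_at: datetime

def count_activity(events: List[OnlineEvent], activity: str) -> int:
    """A trivial derived metric: how many recorded events of one activity type exist."""
    return sum(1 for e in events if e.activity == activity)
```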
32. Code of Conduct
• Why a Code of Conduct?
• Scope
• Altmetric Data Providers vs. Aggregators
33. Working Group C
• Scope: The Code of Conduct aims to improve the quality of altmetric data by increasing the transparency of data provision and aggregation, as well as ensuring the replicability and accuracy of online events used to generate altmetrics. It is not concerned with the meaning, validity, or interpretation of indicators derived from those data. Altmetric online events include online activities "derived from engagement between diverse stakeholders in the research ecosystem and various scholarly outputs", as defined in the NISO WG A Definition of Altmetrics.
34. Code of Conduct Key Elements
• Transparency
• Replicability
• Accuracy
35. Code of Conduct: Transparency
• How data are generated, collected, and curated
• How data are aggregated, and how derived data are generated
• When and how often data are updated
• How data can be accessed
• How data quality is monitored
36. Code of Conduct: Replicability
• Provided data is generated using the same methods over time
• Changes in methods and their effects are documented
• Changes in the data following corrections of errors are documented
• Data provided to different users at the same time is identical or, if not, differences in access provided to different user groups are documented
• Information is provided on whether and how data can be independently verified
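One hedged, hypothetical way a provider or aggregator might spot-check these expectations is to fingerprint the event data returned by different retrieval methods (or to different users) and compare the results. The Code of Conduct does not prescribe any particular mechanism, so the sketch below is illustration only; the example records and function names are invented.

```python
import hashlib
import json

def fingerprint(events):
    """Order-independent fingerprint of a list of event records, for comparing retrievals."""
    canonical = json.dumps(sorted(json.dumps(e, sort_keys=True) for e in events))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical spot check: the same query issued through two retrieval methods
# (say, a REST API and a bulk export) should yield identical event data.
api_events = [{"doi": "10.1234/example", "activity": "mention", "source": "Twitter"}]
bulk_events = [{"doi": "10.1234/example", "activity": "mention", "source": "Twitter"}]

if fingerprint(api_events) != fingerprint(bulk_events):
    print("Retrievals differ: document how and why, per the Code of Conduct.")
```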
37. Code of Conduct: Accuracy
• The data represents what it purports to reflect
• Known errors are identified and corrected
• Any limitations of the provided data are communicated
38. Code of Conduct: Reporting
• List all available data and metrics (providers & aggregators) and the altmetric data providers from which data are collected (aggregators).
• Provide a clear definition of each metric provided.
• Describe the method(s) by which data is generated or collected and how this is maintained over time.
• Describe any and all known limitations of the data provided.
• Provide a documented audit trail of how and when data generation and collection methods change over time, with any and all known effects of these changes, including whether changes were applied historically or only from the change date forward.
• Describe how data is aggregated.
• Detail how often data is updated.
• Describe the process by which data can be accessed.
• Confirm that data provided to different data aggregators and users at the same time is identical and, if not, how and why they differ.
• Confirm that all retrieval methods lead to the same data and, if not, how and why they differ.
• Describe the data quality monitoring process.
• Provide the process by which data can be independently verified (aggregators only).
• Provide a process for reporting and correcting suspected inaccurate data or metrics.
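A minimal sketch of how a provider might capture its answers to this checklist as structured data; the Code of Conduct does not require a machine-readable format, and every field name, value, URL, and contact below is hypothetical.

```python
# Hypothetical Code of Conduct self-report; all names, URLs, and values are placeholders.
self_report = {
    "role": "altmetric data provider",        # or "altmetric data aggregator"
    "metrics_offered": ["tweet count", "reader count"],
    "metric_definitions": {
        "tweet count": "Original tweets and retweets linking to the output's DOI.",
        "reader count": "Users who have saved the output to their reference library.",
    },
    "collection_method": "Public API polled every 24 hours; method unchanged since 2015.",
    "known_limitations": "Deleted posts are only removed at the next polling cycle.",
    "update_frequency": "daily",
    "access_methods": ["REST API", "monthly bulk export"],
    "quality_monitoring": "Automated consistency checks plus manual review of outliers.",
    "change_log": "https://example.org/changelog",          # placeholder URL
    "error_reports": "data-quality@example.org",            # placeholder contact
}
```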
40. Charge
• Definitions for appropriate metrics and calculation methodologies for specific output types. Research outputs that are currently underrepresented in research evaluation will be the focus of this working group. This includes research data, software, and performances, but also research outputs commonly found in the social sciences.
• Promotion and facilitation of the use of persistent identifiers in scholarly communications. Persistent identifiers are needed to clearly identify the research outputs for which collection of metrics is desired, but also to describe their relationships to other research outputs, contributors, institutions, and funders.
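To make the persistent identifier point concrete, here is a hypothetical record that identifies a research output and describes its relationships to contributors, an institution, a funder, and another output; every identifier value below is an invented placeholder, and no particular schema is prescribed by the working group.

```python
# Hypothetical metadata record; all identifiers below are invented placeholders.
output_record = {
    "identifier": {"scheme": "DOI", "value": "10.1234/example-dataset"},
    "output_type": "research data",
    "contributors": [
        {"name": "Example Researcher", "orcid": "0000-0000-0000-0000"},
    ],
    "institution": {"name": "Example University", "isni": "0000 0000 0000 0000"},
    "funder": {"name": "Example Funder", "funder_id": "10.13039/000000000000"},
    "related_outputs": [
        {"relation": "isSupplementTo", "identifier": "10.1234/example-article"},
    ],
}
```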
42. Recommendations re Data Metrics
• Metrics on research data should be made available as widely as possible
• Data citations should be implemented following the Force11 Joint Declaration of Data Citation Principles, in particular:
  – Use machine-actionable persistent identifiers
  – Provide metadata required for a citation
  – Provide a landing page
  – Data citations should go into the reference list or similar metadata
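As an illustration only, these bulleted points translate into a citation like the one assembled below; the dataset, authors, repository, and DOI are invented, and the exact citation format is not prescribed by the recommendations.

```python
# Illustrative only: assembling a data citation from the metadata the bullets call for.
metadata = {
    "authors": "Example Researcher; Second Author",
    "year": 2016,
    "title": "Example Survey Dataset",
    "repository": "Example Data Repository",
    "doi": "10.1234/example-dataset",   # machine-actionable persistent identifier (invented)
}

# The DOI resolves to a landing page for the data, and the formatted citation
# goes into the reference list alongside citations to articles.
citation = (
    f"{metadata['authors']} ({metadata['year']}). {metadata['title']}. "
    f"{metadata['repository']}. https://doi.org/{metadata['doi']}"
)
print(citation)
```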
43. Recommendations re Data Metrics
• Standards for research-data-use statistics need to be developed.
  – Based on COUNTER; consider special aspects of research data
  – Two formulations for data download metrics: examine human and non-human downloads (see the sketch after this list)
• Research funders should provide mechanisms to support data repositories in implementing standards for interoperability and obtaining metrics.
• Data discovery and sharing platforms should support and monitor "streaming" access to data via API queries.
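A hedged sketch of the human vs. non-human distinction mentioned above: a repository might bucket each access by whether it came through the API or from a recognizably automated user agent before reporting usage. The heuristic below is invented for illustration; COUNTER-style processing is considerably more involved.

```python
# Hypothetical classification of accesses into "human" and "machine" (API / streaming) usage.
def classify_access(user_agent: str, via_api: bool) -> str:
    ua = user_agent.lower()
    if via_api or "bot" in ua or "python-requests" in ua or "curl" in ua:
        return "machine"
    return "human"

accesses = [
    {"user_agent": "Mozilla/5.0 (Windows NT 10.0)", "via_api": False},
    {"user_agent": "python-requests/2.9.1", "via_api": True},
]
counts = {"human": 0, "machine": 0}
for a in accesses:
    counts[classify_access(a["user_agent"], a["via_api"])] += 1
print(counts)   # -> {'human': 1, 'machine': 1}
```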
47. Maturity Model for Standards Adoption
Each stage brings increasing trust and confidence in altmetrics:
• Initial: Metrics from provider; ad hoc
• Repeatable: Common measurement criteria from provider; documented measurements and processes; comparable and consistent
• Defined: Measurements defined/confirmed as a standard for the provider; made public; business processes followed consistently; transparent
• Managed: Standards applied; controls in place; checks and balances repeated over time; open for comment and feedback; accountable
• Governed: Independent verification or 3rd-party audit; evolving common industry-defined standards; trust and confidence
51. Thank you to the dozens of people on the working groups and the hundreds of people who participated in brainstorming and commenting on this effort!