2. Agenda
• Overview
– What is impact?
– Traditional impact metrics
– Non-traditional evidence
• Hands-on
– Institutional Repositories
– Web of Knowledge
– Cited Reference Search
– Google Scholar
• Panel Q&A
September 6, 2012
3. What can librarians do for you?
• Guide you to quality sources of impact
evidence
• Assist you in interpreting the context of
impact evidence for your scholarly products
• Assist you in planning dissemination of your
scholarly products
5. What is impact?
• Garfield distinguished between impact and
influence (Leydesdorff, 2009)
• “Experience has shown that in each specialty the
best journals are those in which it is most difficult
to have an article accepted, and these are the
journals that have a high impact factor.” Garfield,
2000
• Effects and outcomes, in terms of value and
benefit, associated with the use of knowledge
produced through research (Beacham et al., 2005)
6. Measuring impact
• Proxy for expert evaluation
• Typically citation-based; however, citations ≠
quality
• Levels of evidence
– Journal-level
– Article-level
– Scholar-level
• How can you use these in your dossier?
– What is the value of these metrics?
8. Impact Factor (ISI)
• Journal-level metric
• Average number of citations received per paper
• Timeframe: previous 2 years
• Updates: annually
• Issues: discipline-dependent; citations per paper
are not normally distributed, so the mean is
misleading; journal self-citation; can be affected by
editorial policies
• Use to: provide range for your discipline
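The calculation behind the slide above is simple to sketch; the journal and all numbers below are hypothetical:

```python
def impact_factor(citations, citable_items):
    """2-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in those two years."""
    return citations / citable_items

# Hypothetical journal: 420 citations in 2012 to its 2010-2011 articles,
# of which 150 counted as citable items
print(impact_factor(420, 150))  # 2.8
```

Because this is a mean over a highly skewed citation distribution, a handful of heavily cited papers can dominate the figure.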
9. h-index
• Scholar-level metric
• Attempts to measure both the productivity and
impact of the published works
• Timeframe & Updates: depends on source
• Can be manually determined using citation
databases or calculated automatically
– Web of Science, Scopus, Google Scholar
– Google Scholar has broader coverage but smaller
databases tend to be more accurate
• Use to: compare to other scholars at your stage
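When determining the h-index manually from a citation database, the rule is: sort the papers by citation count and find the largest h such that h papers have at least h citations each. A minimal sketch with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that the scholar has h papers with at least
    h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one scholar's papers:
# four papers have >= 4 citations each, so h = 4
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4
```

Note that the result depends entirely on which citations the source database captures, which is why Web of Science, Scopus, and Google Scholar can report different h-indexes for the same scholar.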
10. i10 index (Google Scholar)
• One of several Google Scholar Metrics (GSM™)
• Scholar-level metric (counts a scholar’s publications)
• Number of publications with at least 10 citations
• Sources: unknown, no master list, could change
• Timeframe: articles published 2007-2011, indexed in
Google Scholar as of April 1, 2012
• Updates: unclear (info accurate as of April 2012)
• Use to: compare to other scholars in your discipline
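The i10-index definition above is a one-liner; the citation counts here are hypothetical:

```python
def i10_index(citations):
    """Number of publications with at least 10 citations each."""
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts: three papers reach 10+ citations
print(i10_index([42, 15, 10, 9, 3]))  # 3
```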
11. Eigenfactor & Article Influence scores
• Eigenfactor = journal-level metric
• Article Influence = journal-level, per-article
average of Eigenfactor
• Timeframe: previous 5 years
• Updates: annually
• Based on number of incoming citations; citations
from highly ranked journals weighted to make a
larger contribution
• Source: Journal Citation Reports (JCR)
• Adjusts for differences across disciplines
• Use to: provide range for your discipline
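The weighting idea above can be sketched as a PageRank-style iteration over a journal citation matrix, so that citations from highly ranked journals count for more. This is a simplified illustration with made-up numbers, not the actual Eigenfactor algorithm (which also weights by article counts, drops journal self-citations, and handles journals that cite nothing):

```python
def influence_scores(cites, n_iter=100, alpha=0.85):
    """PageRank-style scores over a citation matrix.
    cites[i][j] = citations from journal j to journal i
    (self-citations already zeroed out)."""
    n = len(cites)
    # Column-normalize: fraction of journal j's outgoing citations to i
    col_sums = [sum(cites[i][j] for i in range(n)) for j in range(n)]
    scores = [1.0 / n] * n
    for _ in range(n_iter):
        scores = [
            (1 - alpha) / n + alpha * sum(
                cites[i][j] / col_sums[j] * scores[j]
                for j in range(n) if col_sums[j]
            )
            for i in range(n)
        ]
    return scores

# Hypothetical 3-journal citation matrix
cites = [[0, 5, 1],
         [3, 0, 2],
         [1, 1, 0]]
print(influence_scores(cites))
```

Journal 2 receives the fewest (and least-weighted) citations, so its score comes out lowest; the scores sum to 1 and give a relative ranking rather than an absolute count.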
13. “Altmetrics”
• Includes things like page hits, downloads, Twitter
mentions, etc.
• Timeframe: immediate to short-term impact
• Sources: focus is on social media
• Use to: provide measure of more immediate
impact and impact outside discipline, academia
• Includes formal metrics, such as:
– Total-impact.org
– PLoS article-level metrics
14. Informal metrics
• Acceptance rates for journals
• Visibility of item or scholar
• Ownership count (libraries)
• Indexed in major databases
• View & download statistics
• Editors/sponsoring organizations
• Use to: supplement traditional metrics
15. Other relevant evidence
• Scholarship of Teaching & Learning
– Learning object repositories, instructional content,
innovative use of technology, syllabi, etc.
• Grey literature
– Conference materials, white papers, unpublished
reports
• Impact on the community
– Media, changes in policy, law, or programs
16. Context: Defining quality
• Timeframe
• Scope (source coverage, metric level)
• Source/authority
• Reliability/Accuracy
• Relevance
Trends: interdisciplinary; new scholarly products;
impact on diverse populations, communities, problems;
collaborative work; training students
17. References
• Beacham, B., Kalucy, L., & McIntyre, E. (2005). Focus on
Understanding and Measuring Research Impact. Retrieved from
http://www.phcris.org.au/phplib/filedownload.php?file=/elib/lib/downloaded_files/publications/pdfs/phcris_pub_3236.pdf
• Delgado López-Cózar, E., & Cabezas-Clavijo, Á. (2012). Google
Scholar Metrics: An unreliable tool for assessing scientific journals.
El Profesional de la Información, 21(4), 419-427.
• Garfield, E. (1986). Journal impact vs. influence: A need for 5-year
impact factors. Information Processing & Management, 22(5), 445.
• Garfield, E. (2000). The use of JCR and JPI in measuring short and
long term journal impact. Presented at the Council of Scientific
Editors Annual Meeting: San Diego, CA.
• Leydesdorff, L. (2009). How are new citation-based journal indicators
adding to the bibliometric toolbox? Journal of the American Society
for Information Science and Technology, 60(7), 1327-1336.
Editor’s notes
Notes:
Subject librarians have specific areas of expertise, similar to faculty, but this broad expertise is related to the literature, products, and common practices in scholarly communications. This expertise means that they can direct you to quality sources of evidence relevant to your field. We’ll be talking about common tools and resources useful for everyone, since we don’t have time to talk about the nuances of each discipline today.
Impact – is it the immediate change on the local environment? Or does it measure long-term changes in the larger environment?
Notes:
Point: everyone has a slightly different take on what impact means; you should consider what your Chair/Dean values, as well as what those in your field value
Notes:
-Impact metrics are used more than ever because there is more literature than any one person can keep up with
-Traditional metrics are citation-based and work fairly well when the scholarly conversation is contained within the published literature; however, the conversation in many fields has been expanded to include platforms like blogs, Twitter, and Mendeley
Levels of evidence
-an issue with journal-level impact factors is that they aren’t useful to individual scholars; citation counts do not follow a normal distribution (by some estimates, 30-50% of articles are never cited), so a typical article does not receive the journal’s average number of citations; the journal-level impact factor therefore does not accurately reflect the impact of a particular article or scholar
-article-level impact factors are newer and typically more useful to individual scholars; however, it can take several years for an article’s impact to be demonstrated, which is not helpful if you are a new researcher
-scholar-level impact factors are also relatively new, but can be the most relevant to your P&T dossier
Notes:
-underestimates the citations of the most cited articles while exaggerating the number of citations of the majority of articles
Notes:
Meho & Yang: Web of Knowledge was found to have strong coverage of journal publications, but poor coverage of high impact conferences. Scopus has better coverage of conferences, but poor coverage of publications prior to 1996; Google Scholar has the best coverage of conferences and most journals (though not all), but like Scopus has limited coverage of pre-1990 publications.
Notes:
Google Scholar Metrics site: http://scholar.google.com/intl/en/scholar/metrics.html
Google Scholar Citations includes absolute citations, h-index and i10-index
Tentative, this may not be included
Notes:
Altmetrics – measures more immediate impact of your publication than traditional impact metrics; attempt to be more relevant to researcher and article/item
“Article-Level Metrics only make sense in context, and the most important ones are probably article age, subject area and journal.” Posted at http://api.plos.org/2012/07/20/example-visualizations-using-the-plos-search-and-alm-apis/
Acceptance rates for journals – there are some discipline-specific lists out there, but may be best to contact publishers for most current data
http://www.library.unt.edu/discoverypark/resources/research-tools/journal-article-acceptance-rates/
http://guides.lib.umich.edu/content.php?pid=98218&sid=814212
http://www.stjohns.edu/academics/libraries/resources/journals.stj
Visibility of an item: book reviews, links to an item (i.e., blog, website, etc.), media coverage, reputation of individuals reviewing/linking to your item
Ownership count: Worldcat.org
Point: if engaging in progressive forms of scholarship, point out relevance to traditional forms (i.e., authority of sources linking, reviewing, using your scholarship)
Notes:
Gather case samples from IUPUI
These may be relevant if they are highly cited, used, or covered by media
These are aspects of the evidence you will want to consider when choosing what to present. If you are using non-traditional metrics, you’ll need to provide context and describe why the metrics you’ve selected are strong indicators of your impact.
Bibliometric data from a journal only make sense if you can compare them with publications in the same league, that is, the same discipline or scientific area (Delgado López-Cózar & Cabezas-Clavijo, 2012)