Irene Barbers, Forschungszentrum Jülich GmbH
COUNTER's new Code of Practice took effect in January 2019. This breakout session will explain how librarians can make effective use of the new metrics to support decision making. It will explain how librarians can use the new reports to: understand user behaviours; perform cost per use calculations for the articles they have paid for; compare book usage across different e-book platforms; investigate usage of A&I databases and full-text databases; and evaluate usage of open access content. The session will also explain how COUNTER is ensuring compliance with the new Code of Practice, and how librarians can confidently tell whether a publisher or vendor is compliant.
Using COUNTER Release 5 Usage Reports to support strategic decision making in libraries
1. Using Release 5 Usage Reports for strategic decision making
UKSG 42nd Annual Conference Telford | 8-10 April 2019
Irene Barbers, Forschungszentrum Jülich
2. Development of Release 5
Technical Sub-Group for the development of Release 5
Executive Committee
Technical Advisory Group
Input from the wider membership through consultation
Members create and maintain the Code of Practice
3. Four Master Reports are the Foundation of COUNTER R5 Reports
Platform Master Report
Database Master Report
Title Master Report
Item Master Report
4. Standard Views address the most common use cases
Title Master Report
• Journal Requests (Excluding OA_Gold)
• Journal Access Denied
• Journal Usage by Access Type
• Journal Requests by YOP (Excluding OA_Gold)
• Book Requests (Excluding OA_Gold)
• Book Access Denied
• Book Usage by Access Type
Database Master Report
• Database Search and Item Usage
• Database Access Denied
Platform Master Report
• Platform Usage
Item Master Report
• Journal Article Requests
• Multimedia Item Requests
5. Investigations & Requests
Investigations report user actions related to a content item or title. Actions on part of an item, on information about an item, or on the item itself are reported.
Total_Item_Investigations
the total number of times a content item or information related to a content item was accessed.
Unique_Item_Investigations
the number of unique content items investigated by a user in a session.
Unique_Title_Investigations
the number of unique titles investigated by a user in a session (only applies to books).
COUNTER Release 5: Metric Types
6. Investigations & Requests
Requests report user requests for a content item or a chapter of a book. A request is specifically related to viewing or downloading the full content item.
Total_Item_Requests
the total number of times the full text of a content item was downloaded or viewed.
Unique_Item_Requests
the number of unique content items requested by a user in a session.
Unique_Title_Requests
the number of unique titles requested by a user in a session (only applies to books).
COUNTER Release 5: Metric Types
9. Title Reports TR_J1 vs JR1 - Example 1
[Side-by-side screenshots of the Release 5 TR_J1 and Release 4 JR1 reports, with numbered callouts]
1. Limited to Controlled usage only
2. Two metrics per journal
3. No HTML or PDF metrics
4. Journals with zero usage excluded
5. No total-for-all-journals line
10. Title Reports TR_J1 vs JR1 – Example 1
• Release 5 TR_J1 report shows lower usage counts due to the exclusion of Gold OA usage
• TR_J1 Usage = JR1 Usage – JR1 GOA
[Screenshots: Release 5 TR_J1, Release 4 JR1, Release 4 JR1 GOA]
11. Title Reports TR_J1 vs JR1 – Example 1
• Total Item Requests vs Reporting Period Total (excluding OA_Gold)
• Unique Item Requests vs PDF + HTML
[Screenshots: Release 5 TR_J1, Release 4 JR1, Release 4 JR1 GOA]
12. Title Reports TR_J1 vs JR1 – Example 2
• Total Item Requests vs Reporting Period Total (excluding OA_Gold)
• Unique Item Requests vs PDF + HTML
[Screenshots: Release 5 TR_J1, Release 4 JR1, Release 4 JR1 GOA]
13. Unique vs Total / PDF vs HTML

            Release 5                                   Release 4
            Total Item   Unique Item   Ratio Unique    HTML usage        PDF usage
            Requests     Requests      vs. Total       (excluding GOA)   (excluding GOA)
Example 1   704          452           0.64            430               274
Example 2   255          215           0.84            100               156
14. Cost per Use Calculations

Cost per use for a journal with a 1,000 EUR/GBP/USD subscription fee:

Example 1
Release 5 TR_J1 Total Item Requests (704): 1.42
Release 4 JR1 Reporting Period Total - JR1 GOA Reporting Period Total (704): 1.42
Release 5 TR_J1 Unique Item Requests (452): 2.21
Release 4 JR1 Reporting Period PDF - JR1 GOA Reporting Period PDF (274): 3.65

Example 2
Release 5 TR_J1 Total Item Requests (255): 3.92
Release 4 JR1 Reporting Period Total - JR1 GOA Reporting Period Total (255): 3.92
Release 5 TR_J1 Unique Item Requests (215): 4.65
Release 4 JR1 Reporting Period PDF - JR1 GOA Reporting Period PDF (155): 6.45
15. Title Reports TR_J3 vs JR1/JR1 GOA
[Side-by-side screenshots of the Release 5 TR_J3 and Release 4 JR1/JR1 GOA reports, with numbered callouts]
1. Shows usage by access type
2. Shows all Investigations and Requests metrics
3. No HTML or PDF metrics
16. Title Reports TR_J4 vs JR5
[Side-by-side screenshots of the Release 5 TR_J4 and Release 4 JR5 reports, with numbered callouts]
1. Limited to Controlled usage only
2. Two metrics for each journal
3. No year grouping for older years
4. Usage is shown per month
17. Title Reports TR_B1 vs BR2
[Side-by-side screenshots of the Release 5 TR_B1 and Release 4 BR2 reports, with numbered callouts]
1. Publication year is shown for each book
2. Two metric types per book
3. Unique Title Requests as a consistent metric for all book providers
18. Title Reports TR_B3 vs BR2
[Side-by-side screenshots of the Release 5 TR_B3 and Release 4 BR2 reports, with numbered callouts]
1. Usage split out by access type
2. All applicable metric types
19. Cost per Use Calculations

Cost per use for a book with a 100 EUR/GBP/USD fee:
Release 5 TR_B1 Unique Title Requests (1): 100.00
Release 5 TR_B1 Total Item Requests (46): 2.17
Release 5 TR_B3 Unique Item Requests (23): 4.35
Release 4 BR2 Reporting Period Total (46): 2.17
20. Platform Report PR_P1 vs Platform Report
[Side-by-side screenshots of the Release 5 PR_P1 and Release 4 Platform Report, with numbered callouts]
1. Regular Searches and Searches Federated rolled up into Searches Platform
2. Unique Item Requests covers both book and journal usage
3. Total Item Requests equals Record Views
21. Database Report DR_D1 vs DB1
[Side-by-side screenshots of the Release 5 DR_D1 and Release 4 DB1 reports, with numbered callouts]
1. Searches Federated and Automated are split into two separate metrics
2. The Result Click metric has been replaced by the new metric Total Item Investigations
22. Searches
There are four different types of search metrics in Release 5.
Searches_Regular
the number of times a user searches a database, where they have actively chosen that database from a list of options or there is only one database available to search. Metric captured at the database level.
Searches_Automated
the number of times a user searches a database, where they have not actively chosen that database from a list of options. Metric captured at the database level.
Searches_Federated
the number of times a search is run through a Federated Search Service/Engine. Metric captured at the database level.
Searches_Platform
the number of times a user performs a search on the platform, regardless of the number of databases involved in the search. Metric captured at the platform level.
COUNTER Release 5: Metric Types
23. COUNTER Release 5: Implementation and Compliance
• Organisations working towards Release 5 compliance: https://www.projectcounter.org/about/organisations-working-towards-release-5-compliance/
• Providers that have already been successfully audited on Release 5
Almost exactly three years ago, at the end of UKSG 2016 in Bournemouth, development of Release 5 of the Code of Practice began with a brainstorming session among some members of the COUNTER Executive Committee. Soon afterwards, a technical sub-group was established for the actual development work, but of course the Executive Committee and the Technical Advisory Group have been involved throughout.
Literally hundreds of hours of telephone conferences and collaborative work on documents followed. After the initial draft release, and after comments from the community and further development fuelled by that feedback, implementation by providers started last year.
Since the beginning of 2019, Release 5 has been the current Code of Practice, and we are all now seeing new reports and new dashboards for usage statistics coming from publishers and vendors.
What I want to do today, after a short overview of the new report structure and metric types, is look at real-life examples of Release 5 Standard Views and compare them to their corresponding Release 4 reports. I will point out the most important changes, and we will see the effects the new metric types have on cost per use calculations, as well as new possibilities for usage analysis.
I will concentrate on the Standard Views for journal and book content, and will show examples of database reports and platform reports only briefly, depending on how I manage my time.
At the core of Release 5 there are four Master Reports. The Title Master Report deals with book usage and journal usage. We have the Database Master Report for databases, the Platform Master Report, and the Item Master Report. The Item Master Report is intended for usage occurring in repositories and can be used to report usage of multimedia content. The Master Reports contain the complete set of usage data relevant to the content they report on.
At the same time, Release 5 introduces the so-called Standard Views. They are derived from the Master Reports through the application of pre-defined filters and address the most common use cases for usage analysis.
For the Title Master Report, there are seven Standard Views. Three of them deal with book usage, and the other four deal with journal usage. We will look at those again in more detail later on.
The Database Master Report has two Standard Views, the Platform Master Report has one, and the Item Master Report again has two. The Standard View Journal Article Requests deals with usage in repositories, whereas the view Multimedia Item Requests can be provided specifically by multimedia content providers.
Before we move on to exploring what these Standard Views look like, let me explain one of the key new features of Release 5: the new metric types called Investigations and Requests.
Investigations report user actions related to a content item or title. Actions on part of an item, on information about an item, or on the item itself are reported.
Investigations and Requests each have several metrics.
First, Total_Item_Investigations: the total number of times a content item, or information related to a content item, was accessed during a session.
By contrast, Unique_Item_Investigations represents the number of unique content items investigated by a user in a session. That means that if a user repeatedly performs an action with the same content during their session, it is only counted once.
And then Unique_Title_Investigations, which is the same metric but at the title level. This applies to books only.
As for Requests, where the viewing or downloading of full content items is reported, you can see the same pattern as for the Investigations: Total_Item_Requests, Unique_Item_Requests and Unique_Title_Requests.
And again, I want to mention that Unique_Title_Requests applies only to books. It is a very useful metric if you want to calculate cost per use at the book title level. For example, if a book is delivered in 10 chapters, and all 10 chapters are downloaded, this counts as 10 Unique_Item_Requests but only one Unique_Title_Request, as those chapters all belong to the same book. This gives you much more comparability between book reports from different providers, as the Unique_Title_Requests metric will always be provided in the reports, regardless of whether the content is delivered in chapters or as entire books.
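To make that counting rule concrete, here is a minimal Python sketch of how the three Requests metrics relate within one user session. The session log and its layout are invented for illustration; they are not part of the COUNTER Code of Practice or of any provider's implementation.

```python
# Hypothetical session log: each entry is one full-text request in a
# single user session, identified by (title, item). The layout is
# illustrative only, not prescribed by COUNTER.
session_log = [
    ("Book A", "Chapter 1"),
    ("Book A", "Chapter 1"),  # same chapter requested twice
    ("Book A", "Chapter 2"),
    ("Book A", "Chapter 3"),
]

# Total_Item_Requests: every request counts.
total_item_requests = len(session_log)

# Unique_Item_Requests: repeated requests for the same item within
# the session count once.
unique_item_requests = len(set(session_log))

# Unique_Title_Requests: the title counts once per session, however
# many of its chapters or sections were requested.
unique_title_requests = len({title for title, _ in session_log})

print(total_item_requests, unique_item_requests, unique_title_requests)
# -> 4 3 1: four raw requests, three distinct chapters, one title
```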
And here we have a helpful diagram that shows how Investigations and Requests are two closely related metrics.
You can see here a range of user actions, and all of them are counted as investigations or requests.
All user actions that interact with content in any way are counted as investigations, for example abstract views, article previews, or full text views.
But only those actions that consist of actually requesting, viewing or downloading the full text, or the whole content of a video, are additionally counted as Requests, shown in pink.
We will see later in the presentation how exactly this is represented in the usage reports.
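As a rough sketch of that relationship (the action names below are invented for the example, not taken from the Code of Practice): every logged action counts as an Investigation, and the full-content actions are additionally counted as Requests, so Requests are always a subset of Investigations.

```python
# Invented action names for illustration; COUNTER does not prescribe these.
FULL_CONTENT_ACTIONS = {"full_text_html", "full_text_pdf", "video_play"}

def count_metrics(actions):
    """Every action is an Investigation; only full-content actions are
    additionally counted as Requests."""
    investigations = len(actions)
    requests = sum(1 for action in actions if action in FULL_CONTENT_ACTIONS)
    return investigations, requests

session = ["abstract_view", "article_preview", "full_text_html", "full_text_pdf"]
print(count_metrics(session))  # -> (4, 2)
```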
Now let me just show you briefly the Standard Views provided for the Title Master Report.
You see that each Standard View has a Report Identifier.
There are four Standard Views dealing with journal content:
TR_J1 relates to Journal Requests (Excluding OA_Gold)
TR_J2 relates to Access Denials for journals.
Standard View TR_J3, Journal Usage by Access Type, breaks down usage by the attributes Controlled and OA_Gold.
And the last journal Standard View is TR_J4, which is Journal Requests by Year of Publication.
TR_B1 relates to Book Requests (Excluding OA_Gold)
TR_B2 relates to Access Denials for books
TR_B3 relates to Book Usage by Access Type
Now I move on to real-life examples that will show us what all the new metrics look like and how we can use them for analysis:
The first Standard View we are looking at is Journal Requests (Excluding OA_Gold). This was designed for one of the most common use cases in libraries: librarians wanting to run cost per use analyses for journal content they have paid for.
In contrast to the JR1 from Release 4, there are several major changes:
TR_J1 is limited to Controlled usage only.
There are two metrics per journal.
There are no HTML and PDF metrics.
Journals with zero usage are excluded.
There is no total-for-all-journals line; this wouldn't make sense, because we have two metrics per journal that cannot be added up.
Here we can see the effect of one of the key changes: the exclusion of Gold OA usage in Release 5.
The Release 5 TR_J1 report shows lower usage counts due to the exclusion of Gold OA usage.
(TR_J1 Usage = JR1 Usage – JR1 GOA)
Excluding OA_Gold is a real advantage compared to Release 4. When reporting under Release 4, if you want to calculate cost per use for the content you have paid for, you have to retrieve two reports, the JR1 and the JR1 GOA, and subtract the JR1 GOA totals from the JR1 totals. With this new Standard View you no longer have to do this, because OA_Gold is excluded automatically, and this makes cost per use analysis much easier.
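For anyone still doing that subtraction on Release 4 data, it can be scripted. A minimal pandas sketch, assuming both reports have been exported to CSV with their header rows stripped, one row per journal keyed by a "Print ISSN" column, and a "Reporting Period Total" column; all file and column names here are assumptions, so adjust them to your own exports.

```python
import pandas as pd

# Assumed CSV layout: one row per journal, a "Print ISSN" key column
# and a "Reporting Period Total" column (adjust to your exports).
jr1 = pd.read_csv("JR1.csv").set_index("Print ISSN")
jr1_goa = pd.read_csv("JR1_GOA.csv").set_index("Print ISSN")

# Controlled (paid) usage per journal = JR1 total minus the Gold OA
# portion; journals missing from JR1 GOA are treated as zero OA usage.
controlled = jr1["Reporting Period Total"].sub(
    jr1_goa["Reporting Period Total"], fill_value=0
)
print(controlled.sort_values(ascending=False).head())
```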
If we focus on the metrics shown in the reports, we see that Total Item Requests in Release 5 corresponds to Reporting period total in Release 4, but the count is slightly lower because of the exclusion of the OA usage.
Unique Item Requests, however, is different. In this case, the number of Unique Item Requests is higher than the PDF count and higher than the HTML count we see in Release 4, because in Release 5 usage is counted independently of the format that was delivered.
But with the unique metric we avoid overcounting, because if a user views the HTML full text and then, in the same session, downloads the PDF of the same article, this is only counted once.
At the same time, we avoid missing usage that happened via HTML only, which would be the case if you were used to taking only PDF usage into account for analysis.
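A minimal sketch of that deduplication logic (the log layout is invented for illustration): the delivery format is ignored when identifying the item, so an HTML view plus a PDF download of the same article yields two Total_Item_Requests but only one Unique_Item_Request.

```python
# Invented session log: (article_id, format) per full-text request.
requests = [
    ("article-123", "HTML"),  # user reads the HTML full text...
    ("article-123", "PDF"),   # ...then downloads the PDF in the same session
    ("article-456", "PDF"),
]

total_item_requests = len(requests)                       # every request counts
unique_item_requests = len({art for art, _ in requests})  # format is ignored

print(total_item_requests, unique_item_requests)  # -> 3 2
```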
In the example we see here, quite a high proportion of unique HTML usage has been taken into account that did not lead to PDF requests.
Depending on the publisher whose usage is reported, and depending on user behaviour, the effect of this change can be quite different. The proportion of HTML usage according to Release 4 is lower here than in the previous example; in fact, we have fewer HTML counts than PDF counts.
I have put together the counts from both examples. Again, in the first example we have the high proportion of HTML usage compared to PDF usage, and we can presume there is a high percentage of cases where a user went from the HTML full-text view to the PDF download of the same article.
In the second example, HTML usage is lower than PDF usage, and we can conclude, both from this fact and from the high count of Unique Item Requests, that there is a high percentage of cases where users viewed an HTML full text but did not download the PDF, and that PDF usage was largely independent of HTML views.
Therefore, in the second example, Total and Unique Requests are much closer together, as you can see in the ratio.
Bearing in mind the different delivery methods for content, where sometimes the landing page on a platform is the full-text HTML and sometimes not, it is admittedly a bit difficult at the moment to adapt to the shift in usage counts and to the effect caused by different delivery methods.
But in the long run, we believe that Unique Item Requests will prove to be a very robust metric and will help to make usage on different platforms more comparable.
Now, what does all this mean for our cost per use calculations? I have used the counts from our two examples to show where we end up.
As the counts for Total Item Requests and the Reporting Period Totals are the same, we of course get the same cost per use indicator. So librarians who focused on Reporting Period Totals in Release 4 will not experience any difference if they now use Total Item Requests.
Comparing Unique Item Requests to Reporting Period PDF, there is a pronounced difference, so librarians used to calculating with Reporting Period PDF now have to adapt to a higher count in Unique Item Requests and therefore to lower cost per use indicators.
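As a small worked example, here is the arithmetic behind the chart on slide 14, using the counts from Example 1; the helper function is just a convenience, not anything defined by COUNTER.

```python
def cost_per_use(fee, uses):
    """Cost-per-use indicator: subscription fee divided by usage count."""
    return round(fee / uses, 2)

fee = 1000  # EUR/GBP/USD subscription fee from the example

# Release 4 Reporting Period Total (minus GOA) and Release 5 Total
# Item Requests have identical counts, so the indicator is identical.
print(cost_per_use(fee, 704))  # -> 1.42

# Release 4 PDF-only counting vs Release 5 Unique Item Requests: the
# higher unique count produces a lower cost-per-use indicator.
print(cost_per_use(fee, 274))  # -> 3.65 (JR1 PDF minus JR1 GOA PDF)
print(cost_per_use(fee, 452))  # -> 2.21 (TR_J1 Unique Item Requests)
```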
Before I move on to the Book Reports, I just want to show you quickly two other Journal Reports.
The Standard View Journal Usage by Access Type shows Controlled usage and OA usage, and additionally all Investigations and Requests metrics; this gives you the opportunity to look specifically at OA usage and compare it with usage of Controlled content.
Now, the report Journal Requests by Year of Publication is limited to Controlled usage only, like TR_J1 Journal Requests.
It shows two metrics for each journal, again like TR_J1 Journal Requests.
In contrast to Release 4, there is no year grouping for older years.
Each year has a separate line rather than being a column (JR5 in Release 4 was a crosstab or matrix format).
Usage is shown per month.
(A YOP of 0001 stands for an unknown year of publication; 9999 would be used for articles in press.)
So we can use this Standard View for cost per use analysis as well, but with the publication year as a filter, so that we can analyse usage of current content or of backfile content separately.
You can also use Excel to create a pivot table that aggregates usage per journal and year of publication, by putting the titles in rows and the YOP in columns.
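The same pivot can be built in pandas. A sketch under the assumption that the TR_J4 Standard View has been exported to CSV, its report header rows removed, and that it carries "Title", "YOP", "Metric_Type" and "Reporting_Period_Total" columns; check the column names in your own export before running it.

```python
import pandas as pd

# Assumed: TR_J4 exported to CSV with the report header rows removed,
# one row per title/YOP/metric type (column names may differ).
tr_j4 = pd.read_csv("TR_J4.csv")

# Titles in rows, publication years in columns - the pivot described
# above, filtered to one metric type so the counts are not mixed.
pivot = tr_j4[tr_j4["Metric_Type"] == "Unique_Item_Requests"].pivot_table(
    index="Title",
    columns="YOP",
    values="Reporting_Period_Total",
    aggfunc="sum",
    fill_value=0,
)
print(pivot.head())
```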
Now we proceed to the Book Reports.
The publication year is shown for each book, and across all Book Standard Views, so there is no need for an extra view by YOP like we have for the journal views.
We have two metric types per book, and these are different from the metric types used in the journal reports. Besides Total Item Requests, this report presents Unique Title Requests as a consistent metric for all book providers; you will remember I already mentioned this new metric when I talked about the Investigations and Requests metrics.
With Release 5, a book's Unique_Title metrics are only increased by 1 in a given user session, no matter how many of its chapters or sections were accessed, or how many times. Unique_Title metrics therefore provide comparable e-book metrics regardless of the nature of the platform and of how the e-book content was delivered.
For comparison, the BR2 from Release 4: BR2 was the book report most commonly provided by publishers, so I am comparing with this report here rather than BR1. BR1 was not provided by all book publishers, and section requests from BR2 were the commonly used metric.
The Standard View Book Usage by Access Type is interesting for two reasons: first, you can now see OA usage in book content, and second, more metric types are shown than in TR_B1. In addition to Total Item Requests and Unique Title Requests, I would like to draw your attention to Unique Item Requests. This shows a number we did not have at all in Release 4: BR2 in Release 4 only presented section requests, and you can see that this corresponds to Total Item Requests in Release 5.
The Unique Item Requests metric here indicates that in this case the whole book (I checked; it consists of 23 chapters) was downloaded twice by the same user in the same session: 46 Total Item Requests, but only 23 Unique Item Requests. It would have been impossible to work this out with a Release 4 book report.
And again, this chart shows what happens to our cost per use indicators.
At the bottom, we have the cost per use figure we have been getting with Release 4 usage data; above it, you can see the different indicators we can get out of Release 5 usage data.
The cost per use indicator for unique title usage is on a completely different level from cost per use by sections or chapters.
But, as I already mentioned, Unique Title Requests should be your best choice for obtaining comparable cost per use for books across different platforms.
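Applied to the book example from slide 19, the same division shows how strongly the choice of metric moves the indicator; the dictionary below simply restates the counts from that slide.

```python
fee = 100  # EUR/GBP/USD book fee from the example

# Usage counts from the TR_B1/TR_B3/BR2 example above.
usage = {
    "Total Item Requests (R5) / BR2 total (R4)": 46,
    "Unique Item Requests (R5)": 23,
    "Unique Title Requests (R5)": 1,
}

for metric, uses in usage.items():
    # Unique Title Requests yields the per-title figure that stays
    # comparable across chapter-based and whole-book platforms.
    print(f"{metric}: {fee / uses:.2f}")
# -> 2.17, 4.35 and 100.00 respectively
```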
On the last slides, I just want to cover Platform Reports and Database Reports and show the changes in the metrics that are displayed.
In the Platform Report PR_P1, Regular Searches and Searches Federated are rolled up into Searches Platform.
Again, we have Unique Item Requests, which in the Platform Report covers both book and journal usage, depending on the content delivered on the platform.
The PR_P1 also includes Unique Title Requests, not shown in the example here
And the metric Total Item Requests equals Record Views from Release 4.
In the Database Report we have Searches Regular, while Searches Federated and Searches Automated are split into two separate metrics.
The Result Click metric from Release 4 has been replaced by the new metric Total Item Investigations, and we also have Total Item Requests.
Unique Item metrics also exist for the Database Report, and you’ll find them in the Database Master Report.
To sum up, here is an overview of the different types of search metrics used in the Database Reports and Platform Reports. Searches_Regular are reported when a user has actively chosen a database from a list of options, or when there is only one database available, and the search is performed directly within that database.
The next two search metric types here relate to the more automated ways of searching.
And finally, Searches_Platform tracks the searches performed at the platform level.
We are nearing the end of my talk. If you would like to know how the implementation of Release 5 is proceeding among publishers, you will find an overview on the COUNTER website, where you can see whether a provider has declared COUNTER Release 5 compliance, or has even already passed the audit and is certified Release 5 compliant, which you can recognise by the compliance logo.
Next, I want to make you aware of the COUNTER online tutorials, which you can find on the COUNTER YouTube channel. We have created several so-called Foundation Classes dealing with different aspects of Release 5, and there are more to come.
Class 1: Metric Types
Class 2: Reports
Class 3: Metrics and Reports: Putting it all together
Class 4: Attributes, elements and other slightly techy things
Class 5: Book Reports
Another thing we have done for Release 5 is set up two email forums. If you are not already aware of them, or even subscribed, just drop an email to Lorraine Estelle, who will set you up. There is one specifically for librarians and one for publishers and vendors; questions around Release 5 can be posted and answered through these forums, and we hope they will become a place where best practice around Release 5 can be shared by the community.