1. Are Users Finding Our Online Reference Resources?
RUSA Seminar
November 20, 2013
Lettie Y. Conrad
Executive Manager, Online Products
Los Angeles | London | New Delhi
Singapore | Washington DC
2. SAGE overview
● Independent, global scholarly publisher
● Books, journals, reference, databases
3. SAGE Discoverability White Paper
● Best practices for access and discovery
of content in libraries
● Big problems that publishers, vendors,
and libraries need to solve
● Real solutions that librarians and
publishers can implement
● Further observations for improving
discoverability and visibility
Source: Somerville, M. M., Schader, B. J., and Sack, J. R. Improving Discoverability of Scholarly Content in
the Twenty-First Century: Collaboration Opportunities for Librarians, Publishers, and Vendors. A White
Paper commissioned by SAGE. Thousand Oaks, CA: SAGE, 2012.
http://www.sagepub.com/repository/binaries/librarian/DiscoverabilityWhitePaper/
4. User knowledge >> channel knowledge
● Market research
• Usability testing & observation
• Librarian advisory boards
• End-user focus groups, surveys, etc.
• Info-seeking behavior research studies
● Data analysis
• COUNTER reports
• Google Analytics
• Moz (previously SEOMoz)
• Data Salon
5. Discovery channels – what are they?
1. Open web search
2. Library search
3. Academic databases
6. Discovery channels – 3 questions
1. Who uses it? (reader / customer persona)
2. Why does it matter to SAGE?
3. How do we monitor?
7. 1. Open Web Search – who uses it?
● Everyone! (despite what they may say)
● Simple and user friendly
● Quantity vs. quality traffic
● Use case: quick search, new topic
8. Open web search – why does it matter?
● Everyone uses it (remember?)
● SEO = ROI
● Common 'starter' channel
9. Open web search – how do we monitor?
● Google Analytics
● Moz
● Market research
[Pie charts: SAGE Knowledge and CQ Researcher traffic sources, broken down by open web search, library referrals, social media, academic, and direct/unknown]
10. 2. Library search – who uses it?
● Advanced students, faculty
● Advanced search / browse
● Use case: narrow queries, “known searches”
11. Library search – why does it matter?
● Capture advanced readers
● Win-win strategy
• Discovery services
• ERM feeds
• LibGuides, widgets and more!
12. Library search – how do we monitor?
● Google Analytics
● COUNTER – cost / use
● Usability testing
13. 3. Academic search – who uses it?
● Advanced students, faculty, practitioners
● “Power” users
● Use case: deep research, building expertise
14. Academic search – why does it matter?
● A&I
• reach experts, power users
• branding, profile, scholarly ecosystem
● Mainstream academic search
• hybrid, emerging technology
• reach wider audience, including advanced readers
15. Academic search – how do we monitor?
● Market research
● Google Analytics
% Total Usage (Sep–Dec 2012):
● SAGE Journals – 2.6%
● SAGE Knowledge – 0.6%
● SAGE Research Methods – 0.4%
● CQ Researcher – 1.5%
16. Thank you!
Lettie.Conrad@sagepub.com
● Cardwell, C., et al. (2012). "Beyond simple, easy and fast." College & Research Libraries News, 73(6), 344-347. http://crln.acrl.org/content/73/6/344.full
● Haines, L., et al. (2010). Information-seeking behavior of basic science researchers: Implications for library services. Journal of the Medical Library Association, 98(1), 73-81. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2801986/
● Holman, L. (2011). Millennial students' mental models of search: Implications for academic librarians and database developers. Journal of Academic Librarianship, 37(1), 19-27. http://www.sciencedirect.com/science/article/pii/S0099133310002545
Speaker notes
Ok, so just some quick facts about SAGE as a foundation for my talk. We are a nearly 50-year-old independent scholarly publisher with editorial offices in the US, UK, India, and Singapore. We publish print and digital books, journals, reference, and database products. We have a number of imprints and subsidiaries – *cue animation* such as Corwin, CQ Press, and Adam Matthew. For the purposes of this talk, I'll be focusing on the SAGE online products and databases used by your patrons within higher education, corporate, and government libraries, and various special markets. *cue animation* Specifically, I'll draw on discovery channels for our reference products, primarily hosted within SAGE Knowledge and CQ Press.
SAGE is dedicated to participating in cross-sector initiatives to drive discovery of online reference resources and has published a white paper that outlines best practices and accomplishments in this space. I recommend you check it out – and, *cue animation* stay tuned for the next edition of this paper, due out at ALA Midwinter in Philadelphia, January 2014.
At SAGE, we take a user-oriented approach to understanding if – and how – users are finding our online reference products. A functional understanding of our readers and librarian partners allows us to better understand what discovery channels they're using and why they matter. SAGE looks to two primary sources of information here: market research and data analysis.

Market research *cue animation* includes usability testing and observational studies; library advisory boards that span both specialty and territory; focus groups, surveys, and other methods of gaining insights on our product performance and development strategies from students, practitioners, faculty, and other relevant readers; and monitoring information-seeking behavioral research in the scholarly literature.

Data analysis *cue animation* draws on a number of tools and resources, including COUNTER usage reports, Google Analytics, and Moz (previously SEOMoz, focused on open-web performance metrics). We've also developed a MasterVision tool with Data Salon to illuminate data about turnaways (specifically, unauthenticated users within institutional networks).
There are three primary channels of online usage for SAGE's online databases and platforms. There are multiple sources within each of these categories, but I'll mention each briefly now; then, throughout this talk, I'll go into more detail about each channel.

*cue animation* Open web search: any mainstream search tool (Google), excluding social search and social media referrals.
*cue animation* Library search: any referrals from academic or institutional websites, proxies, OpenURL, LibGuides, etc.
*cue animation* Academic databases: any abstracting/indexing database or other academically oriented search tool, including mainstream academic search tools like Google Scholar or Microsoft Academic Search.

You can probably lump together different types of traffic sources or discovery resources in any number of ways, and different channel types are more or less important depending on your mission, the nature of your content, and so on. Even Simon and his colleague Tracy Gardner created one classification for discovery channels in their longitudinal survey on how readers discover academic journals. But this is the schema we most often use internally when discussing how users find our content.
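In practice, sorting traffic into these channels is a referrer-classification exercise. Here's a minimal sketch in Python; the domain patterns are invented for illustration and are far simpler than any production taxonomy:

```python
from urllib.parse import urlparse

# Illustrative domain patterns per channel -- NOT an actual rule set.
CHANNELS = {
    "open_web": ("google.com", "bing.com", "wikipedia.org"),
    "academic": ("scholar.google.com", "academic.microsoft.com"),
    "library": ("libguides.com", "proxy", ".edu"),
}

def classify_referrer(url: str) -> str:
    """Map a referrer URL to a discovery channel, defaulting to direct/unknown."""
    if not url:
        return "direct_unknown"
    host = urlparse(url).netloc.lower()
    # Check academic before open_web so scholar.google.com is not
    # swallowed by the broader google.com pattern.
    for channel in ("academic", "library", "open_web"):
        if any(pattern in host for pattern in CHANNELS[channel]):
            return channel
    return "direct_unknown"

print(classify_referrer("http://scholar.google.com/scholar?q=methods"))  # academic
print(classify_referrer("https://www.google.com/search?q=sage"))         # open_web
print(classify_referrer(""))                                             # direct_unknown
```

Anything that matches no pattern falls into direct/unknown, which mirrors the residual bucket in the traffic-source charts.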
So, for each channel, I'm going to try to answer three questions for you.

Who uses it? This is where user knowledge is critical to understanding the types of readers of our content, such as undergraduate students or scientists.
Why does it matter to SAGE? I'll share some high-level performance metrics and the evidence-based approach we take to prioritizing our efforts on certain discovery channels, based on product or content type.
How do we monitor? I'll explain a bit about how we keep tabs on the performance of each channel by product.
Ok, let's start with open web search – who uses it? Well, everyone uses mainstream search tools, whether they like to admit it or not. Key players in this channel are *cue animation* Google and Wikipedia, obviously. (I should note that we do not include Google Scholar in this channel, as we consider it an academic tool.) The reasons for their popularity vary from user to user, but mainly it's a matter of providing readers with a simple and convenient tool: a single search box, a free service, reliable results with answers to your questions every time. Most academic products cannot boast these traits. The fact is, an increasing number of students and readers value speed, simplicity, and convenience over quality information.

An important element to keep in mind with this channel is that usage numbers from open web search tools are often high – but that does not mean it is all relevant, quality traffic. Being part of the Google soup means SAGE products see higher bounce rates, where unauthenticated visitors quickly retreat when they find themselves in an unknown database or platform that is very often irrelevant to their needs.

There are variations in HOW different readers leverage these tools for content discovery. For example, a case study at the University of Baltimore showed that undergraduate students are familiar with a variety of search needs and modify their techniques as needed. Many in this study reported going to Google or other mainstream tools for simple keyword searches, often at the start of a project or when they're unfamiliar with a topic, while more complex searches on a specific topic, where users have deeper knowledge, might be better served by other channels, such as advanced Boolean searches in a specialty database or library catalog. Ultimately, knowledge of these trends is critical to understanding our usage data.
So why does this usage channel matter? This one's pretty obvious: everyone uses it. The high volume of open-web referrals and the market research evidence of the prevalent use of mainstream search tools within academia make this one a no-brainer.

Ultimately, competitive SEO can make or break a product. Since so many students and faculty start with Google or Wikipedia, library holdings must be discoverable via these channels. If they're not, cost per use will quickly decline and result in cancellations or lack of sales.

The Googles of the world are also common 'starter' channels for discovery. Our research shows that many readers will shift to Google Scholar or other academic database tools after conducting an initial open-web search. For example, *cue animation* this diagram shows how a single Wikipedia entry page can push traffic to numerous resources. It's important for publishers to be creative about ensuring their authoritative content is one of those optional destinations.
So, the final question for this channel is 'how do we monitor?' Mainly, at the moment, we're recording and analyzing mainstream search traffic via Google Analytics. We also use Moz; because it is focused specifically on SEO, it gives us much greater insight into usage patterns related to the open web.

SAGE teams have developed performance dashboards for all primary product lines, and Product and Marketing Managers analyze this data on a regular basis to determine trends in open-web activity. *cue animation* For example, CQ Researcher is one product where we're very focused on open-web traffic at the moment, as it's notably lower than for other products. As you can see here with data from May 2013, we have strong performance from library referrals, but we're adjusting site design and authentication routines to achieve greater indexing in Google and Google Scholar. *cue animation* In contrast, during the same month, we see very different patterns, with more than 50% of traffic coming from mainstream search tools. Currently, we feel this is a good balance for journals, and all channels are performing well for us. We try to apply these experiences to other product strategies.
Ok, on to the second key discovery channel for SAGE products: library search. Again, this encompasses both library search tools and browse, research guides, widgets, faculty sites, or any other traffic engine within a customer's institution.

For the most part, our research shows that library search and browse tools are most commonly used by more advanced students – either upper-level undergrads or graduate students – as well as faculty, staff, and librarians. While single-search-box discovery services, like Summon, are driving younger students to the library website, SAGE readers and customers commonly report that they look to the library website for specific use cases. For example, a humanities grad student recently told us that she looks to the library website when she needs full-text access to a specific publication. Observational research we conducted earlier this year made it clear that library search tools are relied on once a user has deeper knowledge of their topic or specialty.
Ok, so why does the library search channel matter to SAGE? To start, we want to be sure we're visible and available for those use cases I just mentioned – we want to capture faculty and advanced students when they turn to the library for help. And our investment in making SAGE content discoverable via library search and browse resources is a win-win: we are both securing strong performance for our products and securing our customers' investment in SAGE products.

SAGE's investment in library search has multiple prongs. *cue animation* We invest notable resources in KBART-compliant monthly feeds of product data for ERM services. *cue animation* We deliver full-text metadata for indexing in all major discovery services, which is no small feat. And we develop portable tools for librarians to adopt and install on their websites – *cue animation* such as LibGuides and search widgets. This is our first LibGuide, which we launched this summer, and others will go live on the SpringShare platform very soon!
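The KBART feeds mentioned here are tab-delimited title lists defined by the NISO/UKSG KBART recommended practice. As a minimal sketch, generating one row of such a feed might look like the following; this uses only a small subset of the KBART columns, and the title record is hypothetical, not an actual SAGE product:

```python
import csv
import io

# A few core KBART columns; the full recommended practice defines many
# more (volume/issue coverage ranges, embargo_info, and so on).
FIELDS = ["publication_title", "print_identifier", "online_identifier",
          "title_url", "title_id", "coverage_depth", "publisher_name"]

def kbart_rows(titles):
    """Render title records as a KBART-style tab-delimited feed."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, delimiter="\t")
    writer.writeheader()
    for title in titles:
        # Missing fields are emitted as empty cells, per KBART convention.
        writer.writerow({f: title.get(f, "") for f in FIELDS})
    return buf.getvalue()

feed = kbart_rows([{
    "publication_title": "Encyclopedia of Example Studies",  # hypothetical title
    "online_identifier": "9781000000000",                    # invented ISBN
    "title_url": "http://knowledge.sagepub.com/view/example",
    "coverage_depth": "fulltext",
    "publisher_name": "SAGE",
}])
print(feed)
```

ERM and knowledge-base vendors ingest files in this shape to keep link resolvers and holdings lists current, which is what makes the monthly feed cadence matter.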
Here again, we rely on Google Analytics to track library referrals. We set up a custom advanced segment that filters for common library website attributes (for the GA geeks in the room). We're also looking at cost per use and other metrics from the COUNTER reports our customers regularly monitor; understanding their benchmarks helps us develop KPIs for each product or content type.

For example, the LibGuide I showed you a moment ago (*back browse to previous screen*) showcases content in SAGE Research Methods – a platform that hosts the lion's share of SAGE's content on research methodology, mixed methods, research design, and the research process. During its first full year in the market, SAGE Research Methods struggled for a competitive usage ratio. Over the last few years, we've been working to improve traffic via all channels – specifically library search, since that's a favorite among advanced readers, who are more likely to be conducting social science research studies. *cue animation* Comparing the fourth quarter of 2011 with the fourth quarter of 2012, we can see a steady climb in library referrals (that's the orange line). This is moving in the right direction, improving cost-per-use metrics. But it's not quite where we'd like it to be, and, therefore, we're releasing that lovely LibGuide I mentioned very soon.
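The two benchmarks in play here, cost per use and quarter-over-quarter referral growth, are simple arithmetic. A sketch with invented figures (these are illustrative numbers, not SAGE or customer data):

```python
def cost_per_use(annual_cost: float, annual_uses: int) -> float:
    """Cost-per-use benchmark: subscription cost over COUNTER-reported uses."""
    if annual_uses <= 0:
        raise ValueError("need at least one recorded use")
    return annual_cost / annual_uses

# Hypothetical library referral counts, Q4 2011 vs. Q4 2012.
referrals_q4 = {2011: 1200, 2012: 2100}
growth = (referrals_q4[2012] - referrals_q4[2011]) / referrals_q4[2011]

print(round(cost_per_use(995.0, 4800), 2))   # 0.21
print(f"{growth:.0%} more library referrals")  # 75% more library referrals
```

The point of the benchmark is direction of travel: more library referrals means more authenticated uses against a fixed subscription cost, which pushes cost per use down toward what customers consider competitive.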
Ok, on to the third important discovery channel for SAGE: academic, or A&I, search.

As with library search, we find that advanced SAGE readers – graduate students, faculty, practitioners, and various experts in their fields – seek out academic and discipline-specialty tools, like A&I databases. Academic search providers *cue animation* (of which there are dozens; these logos are just representative) often have loyal users who rely on advanced features, like Boolean searches and filters. For publishers, delivering high-quality, enriched metadata is a must in order to be discoverable by the right power users of academic databases and A&I products. Building brand loyalty with these experts is key for primary and secondary publishers in this space.
Ok – why does this one matter? Well, it depends on the type of channel within this category. For A&I and classic academic databases, it's about reaching advanced researchers and harvesting usage from a refined group of users. Within the larger scholarly ecosystem, it's about branding and profile among those expert power users, who are leaders in their disciplines. 'Why it matters' also differs across the disciplines: for example, getting journal content into A&I products is a much higher priority for humanities publications, while the hard sciences are focused on many more channels.

But you noticed Microsoft Academic Search and Google Scholar on that previous slide (*back browse*). These two constitute a game-changer for publishers in the academic search channel. As open-web/academic hybrids, a great deal of care goes into comprehensive scholarly indexing of relevant, authoritative content, with unique search algorithms and a very clean, simple user experience.
Ok, so how do we monitor these two elements of the academic search channel? We continually monitor what students, faculty, librarians, and other readers tell us about their academic search habits and preferences.

Of course, we're also leveraging Google Analytics reports with custom advanced segments that filter for elements of referring addresses that are germane to A&I and academic traffic. Generally, the activity is low relative to our total usage. Here, *cue animation* usage stats from classic A&I databases and hybrid open-web academic search tools are lumped together. We're seeing very low overall numbers – even for journals and CQ Researcher (a serial product), which are often more easily indexed. SAGE Journals is the only product indexed in Google Scholar and Microsoft Academic Search, and SRM and SK are largely HSS ebooks, which are typically not indexed in A&I databases. If we have time, I can go into a bit more detail about why that is.

But the other important element of this channel is the classic A&I products, which, as I noted, have a specialty audience and therefore do not constitute high volumes of traffic. However, they are important users and constitute valuable traffic – often paid traffic (versus the often large volumes of unauthenticated, unpaid traffic from open web tools).
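The "% Total Usage" figures on the slide are just each channel's share of all recorded sessions. With invented referral counts for a hypothetical product (not SAGE data), the arithmetic looks like:

```python
from collections import Counter

# Invented referral counts by channel for one hypothetical product.
counts = Counter(open_web=5200, library=3100, academic=260,
                 social=340, direct=1100)
total = sum(counts.values())

# Each channel's share of total usage, as a percentage rounded to one place.
shares = {channel: round(100 * n / total, 1) for channel, n in counts.items()}
print(shares)  # academic search comes out as a small slice, 2.6% here
```

Small academic-channel percentages like these can still represent high-value traffic, since A&I referrals tend to be authenticated, paying users rather than anonymous open-web bounces.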
And that's it – thank you for listening! Here's my email and citations for a few resources I used for this presentation, in case you're interested in reading more. Any questions?