Linked data, RDA, and shelf-ready processing are relatively recent developments in a long evolution of library technology, metadata standards, and technical services workflows. Although change has been a constant fixture of the cataloger's reality, change is nonetheless disruptive—sometimes, bridges burn. This session takes a historical view of cataloging and metadata creation from the time of Cutter to the dawn of semantic search. The evolution and interplay of technology, metadata standards, and workflows—the tools of our trade—will be considered. What were the roles of catalogers during times of transition? Which personal and professional strengths have proven invaluable over the last century? How does any of this help our community interpret developments in linked library data or user-centered resource discovery? The presenter will propose a framework for interpreting changes in library technology, metadata standards, and technical services workflows. By viewing such changes through the lens of cataloging competencies, our community might navigate into new territory and cooperate in the building of new bridges.
“We’ll burn that bridge when we get to it”—Technology, Metadata Standards, and Workflows in Flux: Competency as a Roadmap through Uncertain Territory
1. Jennifer A. Liss
@cursedstorm #OVGTSL16
technology, metadata standards, and workflows in flux:
competency as a roadmap through uncertain territory
26 May 2016
Ohio Valley Group of Technical Services Librarians Annual Conference
We'll burn that bridge
when we get to it—
2. 100 1# Liss, Jennifer A., $e author. $0 (orcid)0000000336414427
245 10 We’ll burn that bridge when we get to it : $b technology,
metadata standards, and workflows in flux : competency as a
roadmap through uncertain territory / $c Jennifer A. Liss.
300 ## 1 presentation (49 slides) : $b color illustrations
504 ## Includes bibliographical references.
650 #0 Catalogers.
650 #0 Core competencies.
650 #0 Cataloging $x History.
650 #0 Organizational change $x Management.
710 20 Ohio Valley Group of Technical Services Librarians. $b Annual
Conference $d (2016 : $c Louisville, Ky.)
8. What are core competencies?
Core competencies are the collective
learning in the organization,
especially how to coordinate diverse
production skills and integrate
multiple streams of technologies.
—Prahalad and Hamel (1990)
9. What are core competencies?
...aggregates of capabilities,
where synergy is created that has
sustainable value and broad
applicability.
—Gallon and Stillman (1995)
24. CORE COMPETENCIES FOR WORKING IN A CARD CATALOG
COMPETENCY | TOOLS
Formulates consistent data in accordance with published content standards | ALA
Collocates/disambiguates creators, contributors, and titles/series through use of data value standards | ALA, NAF
Analyzes resources for subject and form/genre access through use of data value standards | LCSH
Classifies library resources by subject area to facilitate access/browse | DDC, LCC
OTHER: searching; facility with reference sources
43. The web is a different place…
AAA: Anyone can say Anything about Any topic
Open world: Some statements haven’t been said yet
44. World WILD Web
AAA: Anyone can say Anything about Any topic
Open world: Some statements haven’t been said yet
Nonunique naming: An entity may be known by more than one name
45. CORE COMPETENCIES
ENVIRONMENTS: NETWORKED TERMINAL | OPAC | LINKED LIBR. DATA
Formulates consistent data (ALA, AACR)
Collocates/disambiguates creators, contributors, titles/series (AACR, NAF)
Analyzes resources for subject and form/genre access (LCSH)
Classifies library resources by subject area (DDC, LCC)
Adds visual elements for printing (ISBD)
Encodes machine-actionable data (MARC)
Encodes relationships between creators, works, etc. (RDA, BIBFRAME)
47. Image credits
Slide 1 [hands holding map] by Sylvia Bartyzel
Slide 3 More Bridge Work by Thomas Hawk
Slide 4 Kanincheneule by Martin Teschner
Slide 5 Hierarchy of Competence, Igor Kokcharov
Slide 6 WAVE Typing, Indiana University Photographic Services
Slide 7 WAVES Typing Class, Indiana University Photographic Services
Slide 12 Square-peg-round-hole-21, Yoel Ben-Avraham
Slide 14 [driving on dirt road], Forrest Cavale
Slide 15 [field notes], Helloquence
Slide 20 People working in Card Division, Library of Congress, Washington, D.C.
Slide 22 Students using card catalog, IU News Bureau
Slide 26 Beehive Model 105 terminal from 1978, Betsy Butler
Slide 30 Information Online Celebration, IU News Bureau
Slide 33 linked data, Elco van Staveren
48. Suggested reading
Allemang, D., & Hendler, J. (2011). Semantic web for the working ontologist: Effective modeling in RDFS and OWL (2nd ed.). Waltham, MA: Morgan Kaufmann Publishers.
Cutter, C. A. (1904). Rules for a dictionary catalog (4th ed.). Washington, DC: Government Printing Office.
Gallon, M. R., & Stillman, H. M. (1995, May-June). Putting core competency thinking into practice. Research Technology Management, 38(3), 20-28.
Prahalad, C. K., & Hamel, G. (1990, May-June). The core competence of the corporation. Harvard Business Review, 79-91.
49. Thank you!
Jennifer A. Liss
Head, Monographic Image Cataloging
Indiana University, Bloomington Libraries
jaliss@indiana.edu
0000-0003-3641-4427
This work is licensed under a Creative Commons
Attribution 4.0 International License.
Find these slides:
jliss.net
Speaker notes
My name is Jennifer and I am a cataloger. Who else is a cataloger? Cataloging sympathizers—database or systems managers? Acquisitions? People who manage catalogers? Thank you, all, for being here.
Cataloger cred! #meta
My goal is to accomplish two things in the next 30 minutes. One is to offer an alternative way of communicating what catalogers bring to an organization. The other is to start a discussion about what might be in store for catalogers in the next ten years—and how core competencies might help us navigate those changes.
I promised a historical tour of cataloging but first, we need to talk about core competencies.
Competence is the ability to do something successfully, right? Maybe you’ve seen a pyramid chart like this one, which helps us understand competence in an individual.
I’m not going to talk about competencies as they apply to the individual, per se; instead, I'm going to focus on competencies collectively, as they apply to cataloging.
In the business world, when you insert the word "core" before the word "competency," the phrase means something specific. Core competencies refer to a set of capabilities of a group of people working toward a shared goal.
In their 1990 Harvard Business Review article, Prahalad and Hamel define core competencies as "the collective learning in the organization, especially how to coordinate diverse production skills and integrate multiple streams of technologies."
In 1995, Gallon and Stillman define core competencies as "aggregates of capabilities, where synergy is created that has sustainable value and broad applicability.”
From these definitions, we can see that core competencies are carefully aligned to an institution's mission statement, core values, and strategic initiatives. So how does one spot a core competency in the wild?
Prahalad and Hamel offer three tests for identifying core competencies:
Does the core competency provide long term strategic advantage?
Does the core competency contribute to customer benefit?
Is the core competency difficult for others to imitate? This last test is particularly helpful in measuring how well a company (or library) can compete in the open market.
Prahalad and Hamel give Sony as an example. Miniaturization is Sony's thing. That is what they do well and for decades, they did it cheaper than any other company. I'm sure Sony has core competencies like customer focus and adaptability on their list, but miniaturization is the core competency to which Sony is dedicating significant resources for hiring, training, and research and development. This core competency keeps Sony competitive.
Why bother identifying core competencies for cataloging?
Strategic planning—reviewing your cataloging operations and seeing how they fit into your libraries' short and long-term strategic initiatives; this is an ACTIVE stance—you're anticipating new opportunities
Education—includes workplace training, continuing education (perhaps including library school), and professional development
Recruitment—core competencies help inform the designing of a new or reimagined position (again, thinking strategically) and writing the job ad; vetting applicants and hiring decisions
Advocacy—communicating how the unique strengths that catalogers bring to an organization contribute to the success of the library
Assessment—evaluating the effectiveness of your metadata services
These last two tie in well with Rebecca Mugridge’s keynote yesterday. Rebecca posited that assessment is an important tool for advocacy—I’d assert that cataloging core competencies will help you focus your advocacy efforts.
Quick pause here—any questions?
We're about to embark on the cataloging history tour! Your job on this expedition: find the core competency.
Sometimes core competencies are confused with the tools that are used to get the work done.
Just to review, catalogers have three kinds of tools at their disposal—technologies, standards, and workflows.
Technology is the collection of techniques, skills, and methods used in creating and manipulating metadata; one example we won’t talk about today is the codex; dictionary catalogs first appeared in book form
Standards are all of the rules and guidelines we use for formulating and structuring our data—this includes all of those acronyms and initialisms you see listed in job ads for catalogers, MARC, RDA, etc.
Workflows are the procedures we set in place in order to get our work done. Card filing and shelf-ready processing are components of a cataloging workflow.
On our cataloging history tour, we'll see that cataloging technology, standards, and workflows influence one another. Something to be thinking about, as we travel along: do changes in cataloging tools necessitate a change in cataloging core competencies?
It is 1901. The Library of Congress has launched its printed card service for all libraries. This is a big deal. Charles Cutter discusses the LC card service in the preface to the fourth edition of Rules for a Dictionary Catalog (the 1st edition was published in 1876).
Paraphrasing, he says that he doubted it was worth preparing a fourth edition of his Dictionary Catalog, in light of the "great success of the Library of Congress cataloging." Cutter goes on to say that he realized that it would take some time for all libraries to switch to using LC cards and that it would take a long time before LC could furnish cards for all books—long enough that libraries would have to do a portion of their cataloging themselves, hence, the publication of the 4th edition in 1904.
Cutter seemed to believe that LC was going to be able to issue all of the cards that all American libraries could ever need. Astounding! Here's the heartbreaking part—and keep in mind that Cutter died not long after writing these words (he never saw his 4th edition in print):
"Still I can not help thinking that the golden age of cataloging is over, and that the difficulties and discussions which have furnished an innocent pleasure to so many will interest them no more. Another lost art.”
Moving to this new technology—the card catalog—is beginning to look really attractive for mid- and small-sized libraries because the Library of Congress has put in place the means for a cost-saving cataloging workflow (the LC printed card service). Cutter took a look around at this landscape and said, it's over for cataloging. Everybody go home.
Cutter couldn't have known what was in store—changes to the publishing industry, the birth of the web—but I wish Cutter could have peered ahead 113 years to see us all sitting here, talking about cataloging. If so great a mind could be blinded by the changes materializing around him… perhaps we are justified in being skeptical when yet another person announces the death of cataloging.
The card catalog remained a staple of library technology for the bulk of the 20th century. Let's think about the landscape in the 1950s. There were card catalogs for authors, titles, and subjects. MARC and OCLC aren't around yet, so libraries were ordering cards sets from the Library of Congress and were doing original cataloging in house. ALA was the content standard in use.
If you were a cataloger in this environment, what core competencies would you need?
To begin, we might brainstorm: how were catalogers getting their work done in the 1950s? Technologies, standards, and workflows.
Copy cataloging was slowly becoming a workflow—catalogers needed to verify whether LC had a card they could use. Eventually, library cataloging operations reorganized around the concepts of copy and original cataloging.
In this table, I’ve mapped the tools we identified (right column) to cataloging core competencies (left column).
Notice that when core competencies are well written, they tend not to mention specific technologies or standards. Those core competencies would become outdated rather quickly. Thinking in high-level terms also helps us think more expansively about the work we’re good at.
MARC enters the scene in 1965, but in practical terms the machine-readable catalog is still a dream for libraries.
Fast forward to the late 1970s: Jimmy Carter is president, Stephen King just published The Stand, ALA publishes AACR1, and networked catalog terminals are beginning to appear in large libraries.
The terminal pictured here is Model 105 made by Beehive Medical Electronics, Inc. The first of the “Beehive” terminals came off the production line in March 1978 and they cost $3,700 each (adjusted for inflation, that would be over $14,000 today).
This particular technological shift, for those who could afford it, had a significant impact on workflows. If organization charts hadn’t already begun to show units dedicated to copy cataloging or typing, then they were likely beginning to do so now.
How did cataloging core competencies change in the terminal catalog environment?
In this table, I’ve reproduced the competencies relevant to cataloging in a card catalog environment and added a column to track whether these competencies are relevant in the networked terminal environment.
How do competencies compare in the card catalog environment versus the networked terminal?
Are any competencies missing?
Here’s the table I came up with. Note the addition of “Encodes machine-actionable data” to accommodate for catalogers needing to structure data for machines to read. In this case, the implementation of technology and standards brought about a new core competency. Standardized punctuation (ISBD) also emerged at this time, to help normalize display for humans.
Jumping forward to the late 20th century, library catalogs are becoming accessible online!
This is a press photo from the launch event for Indiana University’s online catalog in January 1990.
With an online catalog came big changes in workflows—goodbye, card printing! By 1990, AACR2 has been published.
Have competencies changed now that catalogs are online?
Workflows change considerably! Cataloging work shifts to applications installed on computers. Skills we largely take for granted now (proficiency with the Windows operating system, etc.) were still new to many cataloging staff in the 1990s. The division of labor is distinct by now: copy cataloging and original cataloging.
Computer proficiency could be seen as a core competency, but I’m not convinced that it passes the third test (difficult for others to imitate), since the ability to use an OS and a word processor are skills becoming required for most kinds of clerical jobs.
Enter the 21st century. What’s changed since library catalogs first went online?
The web is here to stay. That thing called Web 2.0 happens (social media, etc.). The OCLC database grows tremendously, bringing in duplicate records along with records from international databases. Discovery layers start to appear, along with faceted searching and browsing. Publishing and acquisitions models evolve rapidly.
IFLA began working on the FRBR theoretical data model around 1992. It wasn’t published until 1998.
Day one implementation for RDA was in March 2013.
MARC formats for bibliographic and authority records continue to evolve and lots of new fields and subfields are defined as a result of RDA implementation.
What new competencies arose?
RDA forced us to begin thinking about relationships—relator terms for creators and contributors; recording related titles (e.g., the title I’m cataloging is a translation or adaptation of X). Identifiers are crucial in a linked data environment. Authority records are becoming much richer, thanks again to RDA.
‘Disambiguates creators’ should get two check marks in the linked library data column. Undifferentiated name authority records have to go! Linked library data doesn’t work without an authority file.
In 2009, LC began releasing their controlled vocabularies (NAF, LCSH) via their Linked Data Service (id.loc.gov). Over time, they’ve added more vocabularies. In doing so, LC has taken the critical first leap toward linked data.
Tim Berners-Lee defined 5 stars for open data. The Library of Congress is at 4 stars now. 4 stars is using URIs to refer to things, so that people can point to your stuff.
The 5 star level is “link your data to other data to provide context.” How does LC plan to achieve 5 stars?
To accomplish this goal, LC decided to create a new metadata structure standard, BIBFRAME.
Rather than talk about the specifics of BIBFRAME, I want to focus on the environment in which libraries hope to participate by implementing BIBFRAME—the semantic web.
Use Facebook? Wikipedia? Google? Then you are already benefiting from the semantic web. And libraries hope to plug our data into the web not just to help people find our resources more easily, but to free our data for reuse by whoever wants it.
But if we want our data to play well on the web, we have to understand the rules of the web.
I’ll admit that I haven’t made it that far into Semantic Web for the Working Ontologist by Dean Allemang and Jim Hendler, but I highly recommend the first two chapters. The prose is straightforward and the examples are easy to understand.
In the first chapter, the authors talk about the fundamental assumptions upon which the web operates.
The first fundamental assumption is the AAA principle, or Anyone can say Anything about Any topic. Documents on the web often disagree, yes? There are many reasons for this. Sometimes opinions simply differ: is the dress blue and black or is it white and gold? Sometimes, people make mistakes. Sometimes information is out of date. Sometimes, people intentionally publish misleading information.
As a result of the AAA condition, there is a persistent possibility that someone will say something new. There is always something more to be known. Said another way, there may be information missing. Servers may become unavailable—unique information that cannot be found anywhere else may be lost in a drive failure.
If you are working in an environment of distributed semantic data, then you have to accept that the conclusions drawn by the machine may not be taking into account ALL available data.
A third underlying assumption of the web is nonunique naming. Communities sometimes don’t agree on what a thing should be called. The LCSH term Racetracks (Horse) maps to the term “Racetracks” in TGM.
The labels we assign to entities may be expressed in other languages. Likewise, multiple identifiers may exist for a single entity (IDs in VIAF, ISNI, ORCID, etc.).
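These three assumptions can be sketched with a toy triple store in plain Python (not a real RDF store; the VIAF/ISNI/LCNAF-style identifiers below are hypothetical, invented purely for illustration):

```python
# Toy triple store illustrating the web's three assumptions.
# All identifiers are hypothetical, not actual VIAF/ISNI/LCNAF IDs.

triples = set()

def assert_triple(subject, predicate, obj):
    """AAA principle: Anyone can say Anything about Any topic --
    the store accepts any statement from any source."""
    triples.add((subject, predicate, obj))

def known(subject, predicate, obj):
    """Open-world caveat: False means 'not stated here,' not 'untrue.'
    Some statements simply haven't been said yet."""
    return (subject, predicate, obj) in triples

def same_as(uri):
    """Nonunique naming: one entity, many identifiers. Walk owl:sameAs
    links in both directions to collect every known name."""
    aliases = {uri}
    changed = True
    while changed:
        changed = False
        for s, p, o in list(triples):
            if p != "owl:sameAs":
                continue
            if s in aliases and o not in aliases:
                aliases.add(o); changed = True
            elif o in aliases and s not in aliases:
                aliases.add(s); changed = True
    return aliases

# Two communities name the same author differently:
assert_triple("viaf:123456789", "owl:sameAs", "isni:0000000000000001")
assert_triple("lcnaf:n00000000", "owl:sameAs", "viaf:123456789")

# A machine can now collocate all three identifiers:
print(sorted(same_as("isni:0000000000000001")))
# Absence is not falsehood: no birth date is recorded, but one exists.
print(known("viaf:123456789", "schema:birthDate", "1835"))
```

The point of the sketch is the contrast with the catalog: the store never rejects a statement (AAA), a missing statement proves nothing (open world), and identity must be computed across identifiers rather than assumed from a single authorized heading (nonunique naming).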
This environment of the world wide (wild) web is very different from the contained silos we work in now. The RDF model and the semantic web languages are built to account for the wildness of the web. In the languages of the semantic web, there isn’t any way to validate semantic statements the way we validate XML documents.
The standards are built to handle machine inferencing (think artificial intelligence, Skynet kinds of stuff).
The semantic web technology we wish to use is fundamentally different. But what does that mean for us?
Continue moving in the direction that RDA has set for us (expressing relationships between things)
For metadata and/or technology librarians, a “basic understanding of AI inferencing” as it applies to linked data languages (OWL, SKOS, etc.) seems relevant.
From here on out I can’t claim to have any answers, only educated guesses. Data reuse is the name of the semantic web game. Data remediation will become critical—for new material coming in as well as for cleaning up legacy data.
Think and talk about your work expansively. Help others understand that you solve riddles and problems daily and that your training has given you the tools you need to cope with ambiguity. Lead with your strengths (i.e., your core competencies).