Welcome to my lightning talk, 'Listening to Library Data'. Many of you know me, but for those who don't, my name is Katie Legere and I'm systems coordinator at the Queen's University Library in Kingston. I have somewhere around seven years of library experience: five years as a developer at a public library and a year and a bit in an academic library. I also have around ten years of computing experience, as well as an undergrad and a master's in computer science. But what I really have a lot of is experience in music: over 35 years studying, performing, and thinking about various kinds of music. So it really isn't surprising that when I get a chance to combine these things, I get pretty excited. Today I want to tell you about an emerging field in data representation and a little project I did at my library.
Librarians love statistics, and no wonder! But while access to information is important for planning and evaluating what we do, the amazing advances in information technology have enabled an explosion in the sheer amount of data available (Badrakhan 2010). Creating new ways of understanding this data has become an important field of research.
The field of visualization is a rapidly growing one these days. Even data that is pretty easy to understand by looking at the numbers can be made clearer and more engaging by using familiar tools like graphs and pie charts.
As the sheer amount of data grows, so do our ways of visualizing it, using things like tag clouds, maps, and infographics. Visualization is increasingly a hot topic at conferences.
The concept of using sound, or 'sonification', to interpret large amounts of data in real time is a relatively recent one, though. Sonification is most commonly defined as the "use of non-speech audio to convey information" (Hermann and Ritter 1999). The idea of gathering information through sound is hardly new: humans and our animal friends have been using our ears, as well as our eyes and noses, for as long as we've been around. And it's unlikely that we would have survived very long if we could not hear the snap of a twig behind us in the forest and decide that running away or climbing a tree might be in order. Sonification takes advantage of people's innate ability to detect subtle differences in sounds and to perceive cycles, rhythms, patterns, and short events by listening, even allowing data to be monitored while the listener is doing something else.
And indeed, using sound to gather information is not new. Medical practitioners have been using stethoscopes as a normal part of their equipment for hundreds of years to help diagnose dangerous conditions (Barrass and Kramer 1999). Geiger counters measure radiation levels and transmit the information both through a visual interface and audibly through clicks (Hunt 2011). And the ping of sonar is a familiar sound to many from the movies. As the amount of information available to organizations increases, new ways of analyzing and understanding the data must also appear so that meaningful information can be drawn from it. Sonification also allows for adapting the way users interact with information (Diaz-Merced et al. 2012), increasing accessibility by employing our highly developed hearing as an option as well as a complement to visualization techniques in understanding data.
The usual approach to the representation of data as sound is through parameter mapping. Data elements are mapped to particular elements of sound such as pitch, duration, volume, and timbre and what entered as a stream of numbers emerges as sound.
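To make parameter mapping concrete, here is a minimal sketch (not from the talk) of the simplest case: a stream of numbers is linearly scaled onto a pitch range, so each data value becomes a note. The function name, the MIDI-style note numbers, and the sample data are all illustrative assumptions.

```python
# Minimal parameter-mapping sketch: scale each data value onto a
# pitch range expressed as MIDI note numbers (60 = middle C).

def map_to_pitches(values, low=48, high=84):
    """Linearly map data values onto MIDI note numbers low..high."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for constant data
    return [round(low + (v - lo) / span * (high - low)) for v in values]

counts = [3, 7, 7, 12, 5, 9]  # e.g. daily question counts (made up)
print(map_to_pitches(counts))  # -> [48, 64, 64, 84, 56, 72]
```

Other sound parameters (duration, volume, timbre) can be mapped from additional data columns in exactly the same way.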
For example, the Etna volcano project. The MusicaInaudita sound laboratory of the University of Salerno (Italy), in collaboration with the Catania INFN Section and Technologies and Research for Contemporary Arts, has a project which sonifies geophysical data collected by a digital seismograph placed on the Etna volcano in Catania (Italy). They maintain that sonic representations are particularly useful when dealing with complex, high-dimensional data, or in data-monitoring tasks where visual inspection is practically impossible, and they are interested in exploring the possibility of describing, through sound, patterns or trends that would be hard to perceive otherwise.
Using the scientific graph data that the ATLAS experiment presented on 4 July, scientists at Cambridge created a sonification algorithm which offers the same qualitative and quantitative information contained in the graph, only translated into notes. They mapped the numbers to notes using two principles:
1. The same number is always associated with the same note.
2. The melody is "covariant" with the data; that is, the melody changes following exactly the same profile as the scientific data, exactly as shown in the attached picture.
Using the same sort of data mapping process, I set out to see what our library reference statistics might tell us.
We use a slightly modified version of LibStats to gather reference information from the various libraries, and store all of it in a MySQL database.
So it was fairly straightforward to extract the data and do some mapping. Each library was mapped to an instrument, and when one hears the final product it is easy to identify the libraries where there is a lot of activity, because you hear that particular instrument.
Each question type was mapped to a melodic fragment, and the date of the question was mapped to a bar number: each new day became a new bar, and any duplications on the same day increased the volume as well. So, just as with the library-to-instrument mapping, when one hears the final product it is easy to identify the kinds of questions that recur, from the repetition of the melodic fragment.
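The three mappings just described can be sketched in a few lines. Note that the library names, question types, and melodic fragments below are invented placeholders for illustration, not the actual data or fragments used in the project.

```python
# Hypothetical sketch of the mapping: library -> instrument,
# question type -> melodic fragment, day -> bar number, with
# duplicates on the same day increasing volume.
from collections import defaultdict

INSTRUMENT = {"Stauffer": "flute", "Douglas": "cello"}           # placeholder libraries
FRAGMENT = {"directional": [60, 62], "reference": [64, 67, 65]}  # placeholder MIDI fragments

def sonify(records):
    """records: (library, question_type, day_number) tuples.
    Returns note events; each new day becomes a new bar, and
    repeated events on the same day get progressively louder."""
    volume = defaultdict(int)
    events = []
    for library, qtype, day in records:
        key = (library, qtype, day)
        volume[key] += 1                       # duplicates increase volume
        events.append({
            "bar": day,                        # new day -> new bar number
            "instrument": INSTRUMENT[library],
            "notes": FRAGMENT[qtype],
            "volume": volume[key],
        })
    return events
```

The resulting event list could then be written out as MIDI or imported into notation software, as the talk goes on to describe.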
The result is a musical score (using Finale as the notation software) which could conceivably be performed, but is easily exported to an MP3 file to listen to. Here's a fragment.
Clearly I could take the project a great deal further, but it was definitely an interesting exercise, and I think sonification offers us an interesting new way of looking at data. Are there any questions?
Listening to the library
Listening to Library Data
Katie Legere
Code4Lib North 2013