Creating quality standards for scientific content in digital environments through the development...
1. DEVELOPMENT OF A UTILITY MODEL FOR CREATING
QUALITY STANDARDS IN DIGITAL ENVIRONMENTS:
DESIGN AND IMPLEMENTATION OF ALGORITHMS FOR
THE METRIC, SELECTION AND EVALUATION OF
SCIENTIFIC CONTENT
Almudena Mangas-Vega, University of Salamanca (Spain), almumvega@usal.es
José Antonio Cordón García, University of Salamanca,
jcordon@usal.es
Raquel Gómez Díaz, University of Salamanca, rgomez@usal.es
TEEM’16
E-lectra: Grupo de Investigación sobre Edición Electrónica y Lecto-escritura Digital. Universidad de Salamanca (España). electra.usal.es
Introduction
[Diagram: scientific production drives qualifications, possibilities of teaching, rankings of universities, resources, funding, and prestige]
[The same diagram, adding the formal channels of scientific production: journals, proceedings, and monographs]
Introduction (II)
Electronic publishing has seen remarkable growth and, a priori, facilitates assessment; yet scientific monographs still lack a quality-assessment system that is systematized and easy to apply.
Working hypothesis and principal objectives sought
• Quality guideline of Kindle Direct Publishing (KDP) (Amazon)
• The Digital Book Awards (Digital Book World)
• Book Citation Index (Thomson Reuters)
• Book Titles Expansion (Elsevier)
• DOIs for chapters of scientific monographs (CrossRef)
• Scholarly Publishers Indicators in Humanities & Social Sciences (EPUC)
• Others: UNESCO, E-LECTRA, UNE, REBIUN, FECYT, ANECA
Working hypothesis and principal objectives sought (II)
Objectives:
• Review / study: quality indicators in the current tools; the current landscape
• Define: automatized indicators
• Design: a prototype
Methodology
• Literature review (monographs, journals and conference proceedings).
• Analysis of the current landscape of scientific publication.
• Definition of a set of indicators for the assessment of scientific monographs.
• Development of a prototype for automated analysis and assessment procedures, to apply them in different contexts.
• Study of the possibilities of including the prototype in any of the current tools, or design of a new tool or an executable app.
Methodology (II)
• Bibliographic review: articles, manuals, standards, …
• Interviews with experts: publishers, repositories, libraries
• Current landscape (self-publishing, …): bibliometric studies, network analysis, comparison of platforms
• Defining a framework to test the system of indicators defined
Results to date and discussion
• A system of quality assessment for monographs (>30 indicators)
• A broader view of the world of publishing:
  - it is necessary to study the environment surrounding scientific monographs before tackling the development of a model
  - variety of agents involved = various approaches & consensus
  - influence of self-publishing
• A far-from-static environment = setting limits is necessary
• The time is now
Current and expected contributions
• Test results
• Automation of the indicators defined
• Study of the possibilities of implementation in current tools
• Prototype design
UTILITY MODEL FOR CREATING QUALITY STANDARDS IN DIGITAL ENVIRONMENTS
Thank You
Editor's notes
The assessment of scientific production determines qualifications, teaching opportunities (such as the evaluation criteria of ANECA in Spain), rankings of importance (such as university rankings), provision of resources, funding, prestige, and more.
All this can shape science, technology and innovation in a country.
Therefore, that assessment should be as fair as possible.
It should take into account the singularities of each area of knowledge and its channels of transmission.
In theory, for the purposes of scientometrics, all formal channels of scientific production have the same value, since all must be subjected to some control (at the very least, peer review);
but in practice, current tools for science assessment, based mainly on citation counts, seem to have focused their attention on scientific journals, relegating other publications to the background.
Perhaps this difference arises because scientific journals have evaluation processes and metrics that have been studied, established and recognized for years, something that scientific monographs lack. The scarcity of scientific monographs in these databases may therefore be due simply to the absence of a quality-assessment system that is systematized and relatively easy to apply, as exists for serials.
This situation is even more urgent when electronic publishing is taken into account, which in recent years has grown remarkably and which, a priori, facilitates assessment work as it is currently performed.
The review and analysis of current work and initiatives show that, although some initiatives try to achieve a certain degree of quality in monographs, such as the quality guideline of Kindle Direct Publishing (Amazon) or awards such as The Digital Book Awards (Digital Book World), they are reduced almost entirely to assessing how the text is presented, not its content or functionality. Nor do they take into account the characteristics of books created digitally from the start and, importantly for this work, these initiatives are addressed to works of fiction: no quality criteria for scientific monographs of any kind or format are being implemented.
The problem, then, is the lack of method and of clear indicators to measure the national or international prestige of publishers or editors [...]. There is a lack of clear, measurable indicators to establish the quality, design and visibility of a publisher [1]. If anything is certain, it is that the number of citations is the criterion that is always evaluated: easily in the case of journals, but far less easily in the case of monographs. The lack of measurable indicators for scientific publishers mentioned above often causes inconsistencies and shortcomings.
There are precedents that try to resolve these shortcomings: the Book Citation Index (BCI) of Thomson Reuters, Book Titles Expansion (Elsevier), Bowker's Global Books in Print, the Index Translationum (UNESCO) or Google Books, but all have biases or shortcomings. Google Scholar partially palliates the linguistic bias, but it is limited in the number of publications analyzed per author, and certain failures in its evaluation system have been discovered [4].
Another significant development is that since 2010 a CrossRef DOI can be assigned to the chapters of scientific monographs, which favors their inclusion in citation-count systems and their evaluation according to certain indicators so far reserved for journals.
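As a minimal sketch of how chapter-level DOIs make monographs machine-countable, the snippet below reads citation data from a CrossRef-style work record (the shape returned by the public `https://api.crossref.org/works/{doi}` route). The sample record and the DOI in it are invented for illustration; they are not real CrossRef data.

```python
import json

# Invented CrossRef-style response for a monograph chapter (illustration only).
sample_response = """
{
  "message": {
    "DOI": "10.9999/example.chapter.3",
    "type": "book-chapter",
    "title": ["An Example Chapter"],
    "container-title": ["An Example Scientific Monograph"],
    "is-referenced-by-count": 4
  }
}
"""

def chapter_citation_summary(raw_json: str) -> dict:
    """Extract DOI, chapter status and citation count from a CrossRef-style record."""
    work = json.loads(raw_json)["message"]
    return {
        "doi": work["DOI"],
        "is_chapter": work.get("type") == "book-chapter",
        "citations": work.get("is-referenced-by-count", 0),
    }

print(chapter_citation_summary(sample_response))
```

Because the record identifies the work's type and carries a citation count, chapter-level DOIs let monograph chapters enter the same citation-based evaluation pipelines long reserved for journal articles.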
The recognized research group E-LECTRA: Edición electrónica y Lecto-escritura Digital has also pointed out the need for criteria to assess scientific publishers [1]. According to these authors, assessing scientific publishers requires assessing their monographs, and assessing those requires the creation and application of indicators that publishers know in advance, so they can take them into account in their editions.
The EPUC group (Evaluación de Publicaciones Científicas) has submitted a proposal establishing a ranking of publishers of monographs of Humanities and Social Sciences based on the perception of specialists; this reflects an innovative and useful point of view, but it lacks objectivity and has no system of indicators that constitute a reference.
Other professional networks and organizations such as REBIUN, UNE, FECYT or ANECA, and doctoral programs such as that of the University of Salamanca, "Formación en la Sociedad del Conocimiento", have also highlighted, through meetings and publications, the nature of an increasingly evident problem that impacts the perception of researchers and of the institutions where they work.
In addition, in order to generate any evaluation system in the current landscape of scientific publishing, the technical needs of publishers, and of the resources where the information will be hosted, must be taken into account. It is also necessary to establish criteria and indicators that address those needs [5], and to generate, as far as possible, a multifunctional tool for all kinds of scientific publication. Although the failure found does not refer explicitly to the evaluation of monographs, it is an important fact to bear in mind for future analyses.
Therefore, the objectives of this work are:
Review the use of quality indicators in the current assessment tools for publications and electronic resources
Study the current landscape of publication of scientific papers
Define and automate a list of valid indicators, studying the current tools in order to implement them.
Prototype design: create new tools to use these indicators and to disseminate them.
1. Review the use of quality indicators in the current tools for evaluating publications and electronic resources
2. Study the current landscape of publication of scientific monographs from different viewpoints (author, editor, repositories, institutions, indicators, ...)
3. Define a list of valid indicators. Define their context of application. Perform tests to detect potential errors and correct them.
4. Automate the indicators studied:
a. Analyze whether the present tools are implementing some of these indicators correctly.
b. Analyze whether they could be implemented in a software tool and what is necessary to do so.
5. Prototype design:
a. Create new tools to use these indicators.
b. Disseminate them to all the specialists involved in the creation, edition, dissemination and evaluation of scientific monographs.
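To illustrate the kind of automation objective 4 describes, here is a minimal sketch of a weighted indicator check over monograph metadata. The indicator names and weights below are invented examples, not the authors' actual system of more than 30 indicators.

```python
# Hypothetical indicators and weights (illustration only; not the real set).
INDICATORS = {
    "has_isbn": 1.0,
    "peer_reviewed": 3.0,
    "has_doi_per_chapter": 2.0,
    "indexed_in_bci": 2.0,
}

def quality_score(monograph: dict) -> float:
    """Sum the weights of the indicators a monograph satisfies."""
    return sum(w for name, w in INDICATORS.items() if monograph.get(name))

# A monograph satisfying two of the four example indicators.
example = {"has_isbn": True, "peer_reviewed": True, "has_doi_per_chapter": False}
print(quality_score(example))  # 1.0 + 3.0 = 4.0
```

A real prototype would of course need indicators whose satisfaction can itself be checked automatically (e.g. against publisher or repository metadata), which is precisely what objectives 4a and 4b investigate.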
The methodology to be carried out will have the following steps:
- Literature review about current quality indicators for evaluating scientific works (monographs, journals and conference proceedings).
- Analysis of the current landscape of scientific publication through the literature review and from interviews with experts.
- Definition of a set of indicators for the assessment of scientific papers.
- Development of a prototype for automated analysis procedures and assessment to apply them in different contexts.
- Study of the possibilities of including the prototype in any of the current tools, or design of a new tool or an executable app.
In addition to the methodology already mentioned, it has sometimes been necessary to develop an explicit methodology to address a specific study in one of the phases of the investigation. An example is the methodology created to study the implementation framework for an initial set of indicators for evaluating scientific monographs, taking into account the current environment of scientific and academic production, comprising elements such as self-publishing and repositories:
- Analysis of the needs, demands and requirements of the agents involved in the current landscape of the publication of research.
a. From the point of view of the (mostly university) scientific publishers.
b. From the point of view of specialists and managers of (mostly university) scientific repositories.
c. From the point of view of the specialized scientific libraries (again, mostly university).
d. Sharing the views of all of them about different indicators of evaluation of scientific monographs and the viability of using them in their application environments.
- Analysis of the self-publishing phenomenon from two different perspectives.
a. From the point of view of the research this phenomenon is generating and its application trends, through a bibliometric study of the work on the subject together with a social network analysis, which involved the use of different search strategies for analysis and data representation [11].
b. Comparison of the different existing self-publishing platforms and study of the possibilities of applying them to scientific publication, as a preliminary step toward a quality-assessment system for monographs [12].
- Defining a framework to generate a system of indicators defined, systematized and updated for the assessment of scientific monographs in the current context.
In addition to providing a system of quality assessment for monographs, currently consisting of more than 30 indicators, the work to date has generated other interesting results:
The research done to date has provided a broader view of the world of publishing, including digital publishing, in its current ecosystem [13], with particular emphasis on self-publishing systems and how current models may affect the quality of works and their possible evaluation. From all this, the following conclusions have been obtained:
- The peculiarities of the current moment, of the environment surrounding scientific monographs and of the possibilities of editing / publishing make a thorough study of the situation necessary before tackling the development of a model for the quality assessment of these monographs.
- The variety of agents involved in the dissemination of scientific information, especially in the case of scientific monographs (authors, creators, publishers, distributors, programmers, software managers, documentalists, repository managers, librarians ...), makes it necessary to bear in mind various approaches, both to the study of the implementation framework and to the list of indicators to use. Moreover, consensus-building among the different agents is necessary, at this time of profound change, to outline a standardized system that would maximize the advantages of new scenarios such as self-publishing and repositories.
- The phenomenon of self-publishing is still in the expansion phase of what looks like a Hype Cycle curve [14]. At the popular level it has many followers; in science, however, it is necessary to incorporate elements that allow monographs to be evaluated with indicators that reflect quality, so that they can be included in databases and other resources. In addition, work on the digital edition requires a paradigm shift affecting the design, creation, editing, management, consultation and dissemination of works. Multidisciplinary teams are needed, as became clear when contrasting the data provided by the specialists, each of whom has a different but complementary perspective. The way all these steps of creating a monograph are addressed inevitably influences the possibility of its being assessed by an automated model.
- The environment in which the set of indicators for evaluating scientific monographs is intended to be applied is convoluted and far from static. Setting limits is necessary before proceeding with the final design, so as to cover all aspects of the current edition of scientific monographs and so that, in the future, it can be used by all the systems now emerging.
- It is time to generate a system to assess the quality of scientific monographs, one that favors their inclusion in existing databases or in others specifically generated for this purpose, thus equating their influence and impact with that of scientific journals in the academic-scientific field and reducing the current bias against the Social Sciences and Humanities.
Some of the contributions expected in the future are:
- Results of the tests run with the battery of indicators designed, to verify their usefulness or to modify them and improve their performance.
- Automation of the indicators studied, with an analysis of current tools to detect whether they already implement any of these indicators correctly, or a study of the real possibilities of implementing a tool that includes the evaluation system.
- A prototype design that allows the model to be used and disseminated to all the specialists involved in creating, editing, disseminating and evaluating scientific monographs.