Abstracts



Sunday, September 4th:
Aula Hotel Granada Center

Bibliometric Crash Course
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), KU Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria / Sybille Hinze, German Centre for Higher Education Research and Science Studies - DZHW, Germany

An introduction to basic bibliometric terminology, concepts and data sources for participants with little prior experience in the field.


Web of Science™ - The World’s Most Trusted Citation Index Covering the Leading Scholarly Literature. An Examination of Citation Navigation Across Diverse Content Sets to Provide Comprehensive Results Across Disciplines and Research Insight.
Massimiliano Carloni, Global Solutions Support Specialist, Thomson Reuters

The Web of Science platform is the search and discovery choice of 7,300+ academic and research institutions, national governments, funding organizations, and publishing organizations in 120+ countries worldwide. In this tutorial we will explore research and ideas from different disciplines and the platform's content, including cover-to-cover indexing of the world’s most important multidisciplinary research (scholarly journals, books, proceedings, published data sets and data studies, and patents), all connected through a citation network of over one billion cited references.

Scopus: Elsevier's A&I database and its use for researchers and research offices

Tomaso Benedet, Solutions Sales Manager - Research Intelligence, Elsevier

A brief tutorial on how Scopus can be of help to your institution. Through this database you can easily monitor the most recent developments in your area of interest. Scopus also offers a complete suite of tools to evaluate worldwide academic output, explore author profiles and institutional production, and identify possible collaborations. During this session we will go through the different kinds of searches that can be made in Scopus, how to use Scopus results, and how to evaluate candidate journals for publication using the information Scopus provides.

 

Altmetrics & Traditional Metrics. Time for a Rethink.
Stephan Büttgen, Director of Sales of Plum Analytics in Europe

Clinical research does not get cited as much as basic science. This creates problems: researchers struggle to obtain funding, early-career researchers opt out of translational research, hospitals and research institutions cannot showcase their talent, and more. It hurts us all when researchers do not want to turn basic science into ways of treating illness and disease, or when doctors pioneering new treatments are not motivated to publish them. Rather than lament the problem, we want to measure clinical impact. We want to help researchers and institutions understand what is impactful in the clinical realm. This session introduces Clinical Citations as a way to begin measuring clinical impact. Clinical Citations find references to research in clinical alerting services, clinical guidelines, systematic reviews and clinical trials. You can use these Clinical Citations to understand the impact of your clinical research, to apply for funding, and more.



Monday, September 5th:
Facultad de Ciencias del Trabajo
Bibliometrics reviewed: History, institutionalization, and concepts
Stefan Hornbostel, German Centre for Higher Education Research and Science Studies - DZHW, Germany

The emergence of bibliometrics is closely linked to the growth of scientific information in the 20th century and to what de Solla Price called the evolution from “little science” to “big science”. Initially, bibliometrics and its early concepts were oriented towards library access, bibliographic databases, and information services. Since the 1960s, however, other disciplines, especially the sociology of science, have inspired a new and interdisciplinary understanding of bibliometrics. In the 1970s and 1980s, the increasing information needs of science policymakers boosted the institutionalization of bibliometrics as a field of research in its own right, while at the same time this new application context necessitated new concepts. Little by little, a specific bibliometric methodology suitable for today’s applications, such as formula-based funding systems, assessments and evaluations, came into being. The lecture will present this development process and thereby demonstrate common concepts of bibliometrics.


Introduction to Bibliometric Data Sources
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria


This talk addresses the specific requirements that bibliographic data sources must meet with regard to their suitability for bibliometric applications. Furthermore, relevant issues such as coverage, representativeness and selection criteria are considered.
Any appropriate bibliography can potentially serve as a data source for bibliometric purposes; however, comparative studies and large-scale analyses require large, standardized data sources such as bibliographic databases.
After some background information is provided, the main features of bibliographic databases are discussed, with special focus on the question of which of them are useful, essential or even indispensable for bibliometric use (most databases are designed primarily for information retrieval).
In this talk, some basic database features are introduced using examples from different products. A distinction is made between subject-specific and multidisciplinary databases. In particular, the pros and cons of the three major multidisciplinary data sources – Web of Science, Scopus and Google Scholar – are discussed.

In addition, subject-specific databases (e.g. “MathSciNet”, “SciFinder”), patent databases (e.g. “Derwent Innovations Index”, Espacenet (PATSTAT)) and pilot projects for citation indexing on the web (e.g. “BASE”, “CiteSeerX”, all based on open access archives) are presented and examined critically regarding their data enrichment potential in bibliometric analyses.

Scientometric Indicators in Use: an Overview
Sybille Hinze, German Centre for Higher Education Research and Science Studies - DZHW, Germany / Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), KU Leuven, Belgium

The use of scientometric indicators dates back to the 1960s and 1970s in the United States, where the first Science Indicators report was published in 1973. Since then, a variety of indicators has emerged, aiming to reflect various aspects of science and technology and their development. The presentation will give an overview of indicators and their use in science policymaking. The specific focus will be on indicators used in the context of research evaluation. In particular, indicators applied to measuring research performance at the various levels of aggregation, i.e. the macro, meso and micro levels, will be introduced. A range of aspects reflecting research performance will be addressed, such as research productivity and its dynamic development, the impact of research, collaboration, and thematic specialization. The options and limitations of the indicators introduced will be discussed.


Bibliometrics from the perspective of a university
Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria

Bibliometrics is an ideal field in which librarians can develop and provide innovative services for both academic and administrative university staff. The Bibliometrics and Publication Strategies Department has been established within the Library and Archive Services of the University of Vienna. It can serve as a role model for other academic librarians who wish to become more engaged in this field or even plan to implement similar services.
This presentation gives an overview of all bibliometric services offered by the department and then focuses on those related to individual-level evaluation, particularly professorial appointments.
The Vienna University bibliometric approach relies on a variety of basic, simple indicators and further control parameters in order to address the multi-dimensionality of the problem and to foster comprehensibility. Our “top counts approach” allows an appointment committee to pick and choose from a portfolio of indicators according to the actual strategic alignment. Furthermore, control and additional data help to understand disciplinary publication habits, to unveil concealed aspects, and to identify the individual publication strategies of the candidates or individual researchers being evaluated. Bibliometrics only shines a light on quantitative aspects and should never be applied irrespective of the given qualitative context.

 

Evaluation and Mapping of Scientific Research and Support for the Individual Researcher
Ton van Raan, CWTS - Centre for Science and Technology Studies, Leiden University, The Netherlands

We present an overview of the latest developments in ‘measuring science’ based on bibliometric methods. Our central topic is the role of citation and concept networks, and their combination, as a natural basis for the construction of both performance indicators and science maps. We present real-life examples of practical applications of advanced bibliometric methods in the evaluation and mapping of universities, departments and institutes. These applications also offer individual scientists instruments to explore their own research fields. We show how cluster-based normalization is used to tackle the problem of the large differences in citation density within fields. The strong strategic potential of science mapping based on new CWTS bibliometric instruments such as VOSviewer and CitNetExplorer is shown by recent work on the study of ‘retarded innovations’. These developments offer new tools for individual researchers to explore their own ‘cognitive environment’. We will also discuss the new version of the Leiden Ranking in comparison with other prominent university rankings.

Policy Use of Bibliometric Evaluation and its Repercussions on the Scientific Community
Koenraad Debackere, Katholieke Universiteit (K.U.) Leuven

Modern science policy firmly relies on bibliometric data and indicators to assess the scientific performance of research institutions, research groups and even individual researchers. In addition, benchmarking the scientific performance of countries and regions is another item on the agenda of evaluative science policy. During the presentation, the repercussions of this policy use of bibliometric evaluation will be dealt with along three lines of thought and reflection. First, recent trends and insights into data and indicator use for evaluative science policy will be highlighted. Second, an overview of current policy frameworks will be presented, taking into account the recent trend to link scientific performance to so-called smart specialization policies. Third, we will reflect upon the multifaceted impact those trends have (or may have) on the scientific community and, ultimately, the behavior of individual scientists.


The application context of research assessment methodologies
Henk F. Moed, Independent researcher and scientific advisor

The lecture distinguishes two roles of research assessment methodologies in research management and policy: an instrumental role within a specific assessment process, and a critical-enlightening role. It is argued that the choice of metrics to be applied in a research assessment process depends upon the unit of assessment, the research dimension to be assessed, and the purposes and policy context of the assessment. An indicator may be highly useful within one assessment process, but less so in another. There is no such thing as a performance measure of uniform validity, applicable in all circumstances. A typical example of a critical-enlightening study is a “meta-analysis” of the units under assessment, in which metrics are used not as tools to evaluate individual units but to reach policy inferences regarding the objectives and general setup of an assessment process.
Focusing on their instrumental role, six examples illustrate how indicators proposed during the past decades fit into the wider context of their developers and align with specific policy objectives: the journal impact factor and related measures developed by Eugene Garfield; a relative indicator of national citation impact proposed by the ISSRU team in Budapest; a trend analysis of a research group’s short-term citation impact developed at CWTS in Leiden; the Hirsch index of an individual’s research output; altmetrics derived from social media; and a bibliometric model of the development of a national research system.
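
By way of illustration of the simplest of these: the Hirsch index is the largest number h such that h of a researcher's publications have each received at least h citations. A minimal sketch in Python (the citation counts are invented for illustration):

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with invented citation counts: four of them have >= 4 citations.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4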

It is argued that as-yet unexplicated, unreflected assumptions and hidden interests may influence indicator development. Such assumptions and interests are not necessarily “bad” or distorting influences as long as they are made explicit. They may emerge from the fact that developers of assessment methodologies may themselves be subject to the performance measurements they develop, and that indicators are increasingly becoming marketing tools of research organisations and the information industry. It is important to analyze such influences without calling the integrity of developers or users into question. The lecture presents a series of examples and considerations that further clarify this issue.

 

Designing Effective Queries
Stephan Gauch, German Centre for Higher Education Research and Science Studies - DZHW & Humboldt University of Berlin, Germany

The quality of bibliometric approaches, both explorative and evaluative, is strongly influenced by the way search queries to bibliometric databases are constructed. This becomes apparent when beginning scholars and practitioners of bibliometrics learn, often to their surprise, that the scientific field or topic they thought could be covered by a simple search term is far better covered by pages and pages of carefully selected and intricate combinations of search terms, journal sets and classifications. In this session we will explore good-practice examples of designing “effective queries”. Participants will be shown how to get the most from expert knowledge, how to iteratively optimize queries, how to use term truncation carefully to cover more ground, and how to avoid pitfalls such as over-optimization or queries that are “too fuzzy around the edges”.
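
By way of illustration, compare the naive one-term query TS=(bibliometrics) with a still toy-sized delineation in Web of Science advanced-search syntax, combining truncated topic terms with a journal set (the terms and journal titles below are illustrative placeholders, not a vetted field delineation):

    (TS=("bibliometric*" OR "scientometric*" OR "citation analys*"
         OR "research evaluat*")
     OR SO=("SCIENTOMETRICS" OR "JOURNAL OF INFORMETRICS"))
    NOT TS=(biometric*)

Note that the truncations are kept deliberately tight: a broader stem such as biblio* would also sweep in “bibliography” and “bibliophile”, yielding exactly the kind of query that is “too fuzzy around the edges”.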



Tuesday, September 6th:
Facultad de Ciencias del Trabajo

Data Cleaning and Processing
Christine Rimmert, Institute for Interdisciplinary Studies of Science / AG Bibliometrie, Bielefeld University, Germany

The quality of bibliometric analyses depends heavily on appropriate handling of the relevant raw data fields. Depending on the level of aggregation and the target objects under study, various accuracy issues can arise with citation links and several data elements (document type, author, institution, country, journal, field and discipline). We will take a close look at the relevant data fields in modern citation databases like Web of Science or Scopus to see whether they are "ready to use" for all kinds of bibliometric studies. The main data quality problems will be shown, and the major types of errors and their consequences will be discussed. Standardisation, verification and the introduction of identifiers can help to overcome data quality problems.
The data processing approaches of the German Competence Centre for Bibliometrics will be demonstrated.
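
To illustrate what such standardisation can involve, here is a deliberately minimal Python sketch (the variant table is invented, and this is not the competence centre's actual pipeline) that maps raw institution strings to a unified form before aggregation:

    import re

    # Invented variant table mapping raw forms to a standardized institution name.
    VARIANTS = {
        "univ vienna": "University of Vienna",
        "universitat wien": "University of Vienna",
        "ku leuven": "KU Leuven",
        "katholieke univ leuven": "KU Leuven",
    }

    def normalize(raw_name):
        """Lower-case, strip punctuation and extra spaces, then map known variants."""
        key = re.sub(r"[^\w\s]", " ", raw_name.lower())
        key = re.sub(r"\s+", " ", key).strip()
        return VARIANTS.get(key, raw_name)

    print(normalize("Univ. Vienna"))            # -> University of Vienna
    print(normalize("Katholieke Univ Leuven"))  # -> KU Leuven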

 

Author Identification
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

The seminar lecture focuses on the identification of authors and the disambiguation of their names, which has become a key prerequisite for individual-level bibliometrics. The identification of author self-citations likewise requires the correct assignment of names to authors. Although Thomson Reuters and Elsevier offer Researcher/Author IDs, thorough author identification and name disambiguation is only partially feasible on the basis of these IDs. Typical problems in dealing with these IDs will be discussed.
In the course of the lecture it will be shown how the standardisation of names and initials, in combination with institutional assignment, IDs and external sources, can be used to identify authors in the Web of Science and Scopus databases.

Computerised techniques based, for instance, on N-grams can substantially facilitate the matching of external sources such as author publication lists or CVs with bibliographic databases. This approach is briefly described in the lecture. Because of possible type I and type II errors and the sensitivity of the matter, final manual correction of the results of such automated processes remains indispensable.
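
To illustrate the principle, a minimal Python sketch (not a production matching pipeline): two name variants are compared via the overlap of their character bigrams, using the Dice coefficient:

    def ngrams(s, n=2):
        """Set of character n-grams of a lower-cased string, spaces removed."""
        s = s.lower().replace(" ", "")
        return {s[i:i + n] for i in range(len(s) - n + 1)}

    def similarity(a, b, n=2):
        """Dice coefficient over character n-gram sets (0 = disjoint, 1 = identical)."""
        ga, gb = ngrams(a, n), ngrams(b, n)
        if not ga or not gb:
            return 0.0
        return 2 * len(ga & gb) / (len(ga) + len(gb))

    # Two spelling variants of the same (transliterated) author name.
    print(round(similarity("Glänzel, W", "Glanzel, Wolfgang"), 2))

Candidate pairs scoring above a chosen threshold would then be passed on to manual verification, in line with the final manual corrections the lecture describes as indispensable.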

Subject Normalization for Citation Analysis
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

Subject normalisation is a fundamental requirement for citation analysis in a multidisciplinary environment. At present, two basic approaches exist: the so-called citing-side and cited-side normalisation or, using another terminology, a priori and a posteriori normalisation. Both methods will be introduced and described. Although a priori normalisation represents the more advanced methodology, its application is reserved for a rather small group of users, because it requires access to and processing of the complete database (Web of Science or Scopus): in this approach, citations have to be normalised before they are counted. Knowledge of this normalisation technique is nonetheless important, because this future-oriented methodology is already applied by the larger bibliometric centres.
The second method is rather conservative, but it can be applied by any user with access to the online version of the Web of Science or Scopus. The main characteristic of a posteriori normalisation is that citation counts are normalised after counting, on the basis of appropriate reference values.
The advantages and disadvantages of both methods are discussed, and examples of the second approach are calculated.
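
To illustrate the a posteriori logic with one widely used indicator of this type: the mean normalised citation score of a publication set is obtained by dividing each paper's citation count by its expected citation rate and averaging,

    \mathrm{MNCS} \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_{f(i)}}

where c_i is the number of citations received by publication i and e_{f(i)} is the mean citation rate of all publications of the same field, publication year and document type; values above 1 indicate citation impact above the subject standard.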



Journal Impact Measures
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), KU Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria

The seminar on impact measures will first shed light on the best-known and most controversial indicator, namely Garfield’s Journal Impact Factor. Its strengths and weaknesses, as well as its correct use, will be discussed thoroughly. Moreover, the corresponding analytical tool, Thomson Reuters’ Journal Citation Reports, will be demonstrated.
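
For reference, the classical two-year Journal Impact Factor of a journal in year y is

    \mathrm{JIF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{P(y-1) + P(y-2)}

where C_y(t) is the number of citations received in year y by the items the journal published in year t, and P(t) is the number of citable items it published in year t. For example, a journal with 100 citable items across 2014 and 2015 that receives 300 citations to those items in 2016 has a 2016 impact factor of 300/100 = 3.0.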

Alternative impact measures like the Eigenfactor metrics, SJR and SNIP have been introduced in recent years and will be presented to complete the picture. The theoretically imparted knowledge will finally be consolidated in practical exercises.


Research Management Solutions – InCites, an Objective Analysis of People, Programs, and Peers
Massimiliano Carloni, Global Solutions Support Specialist, Thomson Reuters

With customized citation data and metrics, and multidimensional profiles of the leading research institutions and journals, InCites supports research evaluation and assessment, collaboration activities, and competitive analysis using the industry’s most trusted content, tools, and services. In this presentation we will examine InCites’ robust visualization and reporting tools, designed to help identify new trends, evaluate new and existing collaborations, and analyze performance.



Wednesday, September 7th:
Facultad de Ciencias del Trabajo
The Application of Network Analysis in Science Studies: Common Theoretical Background for Broad Applications
Bart Thijs, Centre for R&D Monitoring (ECOOM), Dept MSI, Katholieke Universiteit Leuven, Belgium

Network analysis in scientometrics provides a powerful set of tools and techniques to uncover the relations, structure and development among the different actors in science. It is often referred to as Mapping of Science and can be applied to all entities associated with science, such as disciplines, journals, institutions and researchers. This lecture will focus mainly on different measures of relations between entities, addressing both classical approaches and new techniques of network analysis in an application-oriented manner within a solid theoretical framework.
Relations based on citations and references include bibliographic coupling, co-citation and cross-citation. Other direct links between entities include co-authorship, institutional collaboration and international collaboration. Lexical approaches such as co-word analysis and text mining will also be covered.
Each of these measures has its own properties, which can have strong implications for the applicability of the analytical techniques. In order to improve the distinctive capabilities of these measures, new hybrid approaches have been proposed.
The lecture will also deal with several analytical tools and visualization techniques that are suitable for capturing the underlying structure. Clustering techniques like k-means or Ward’s hierarchical clustering are proven means of classifying the entities; modularity-based clustering has become a popular alternative.
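
To make one of these relational measures concrete, here is a minimal Python sketch (toy, invented data) of bibliographic coupling, in which the edge weight between two papers is the number of cited references they share:

    from itertools import combinations

    # Toy corpus: paper id -> set of cited references (invented).
    papers = {
        "p1": {"r1", "r2", "r3"},
        "p2": {"r2", "r3", "r4"},
        "p3": {"r5", "r6"},
    }

    # Bibliographic coupling: edge weight = number of shared references.
    edges = {}
    for (a, ra), (b, rb) in combinations(papers.items(), 2):
        shared = len(ra & rb)
        if shared:
            edges[(a, b)] = shared

    print(edges)  # {('p1', 'p2'): 2}; p3 shares no references, hence no edge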


Research Collaboration Measured by Co-Authorship
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

Co-authorship can be used as a proxy for research collaboration at higher levels of aggregation, e.g., in the case of institutional or international collaboration. But even at the level of research teams and individual scientists, co-authorship patterns reveal important information about the main actors and their roles in the network of scholarly communication.
In the first part of the lecture, the analysis of co-authorship networks at the micro, meso and macro levels is described. The strength of co-authorship links among individual scientists, institutions or countries is best determined using appropriate similarity measures. Co-authorship networks can readily be visualised using suitable software that is freely available for non-commercial use.
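
One similarity measure commonly used for this purpose is Salton's cosine measure: for two units i and j (authors, institutions or countries) with p_i and p_j publications, of which c_{ij} are joint publications,

    r_{ij} \;=\; \frac{c_{ij}}{\sqrt{p_i \, p_j}}

which ranges from 0 (no joint papers) to 1 (complete overlap).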

In the second part, bibliometric indicators for the analysis of research collaboration at the meso and macro levels will be introduced. It will be shown how indicators and similarity measures can be calculated using the “Analyze Results” and “Citation Report” tools in the online version of the Web of Science.

 

How to evaluate and benchmark your institution's research output with SciVal
Tomaso Benedet, Solutions Sales Manager - Research Intelligence, Elsevier

Nowadays, having the right tool to evaluate academic output at your institution is key to designing an informed strategic plan. With SciVal you will be able to see at a glance all the details about research at your institution and dive deep into the details of each single publication. You will also be able to properly benchmark your authors, institution or even country against peer entities in order to understand which direction to take or where to invest in collaboration. This service will help your research office, but it will also be a fundamental tool for your researchers, who will be able to evaluate their profiles and analyze hot topics in their own areas of interest.



Thursday, September 8th:
Facultad de Ciencias del Trabajo
The dawn of a new metrics era
Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria

It is impossible to ignore the omnipresent and substantial influence of social media on our daily life activities, which also increasingly calls for changes in scientific communication processes. This is certainly a new challenge for scientists, librarians and research administrators alike.
This talk provides general background information, a historical overview, and a critical discussion of the relevance of the new metrics in comparison to traditional citation metrics. It points out the diversity and heterogeneity as well as the shortcomings of these new metrics, and addresses the lack of standardization, which is a critical issue for their acceptance within the scientific community.
Furthermore, selected results, comparisons and models will be presented and discussed, such as the potential usefulness of the new metrics for assessing research “impact” in the humanities.

Finally, it will be debated how scientists can cope with this new development, in particular to what degree they are obliged to participate actively in the self-promotion game and how this would change their traditional role as scientists.

 

Altmetrics
Rodrigo Costas, CWTS - Centre for Science and Technology Studies, Leiden University, The Netherlands

The recent ‘explosion’ of tracking tools accompanying the surge of web-based information instruments has also opened up the possibility of measuring how new research publications are ‘read’, tweeted, shared, commented on, discussed, rated, liked, etc. in an online and open environment. All these online events leave ‘traces’ around publications, thus allowing for the calculation of new metrics, which has given rise to the recent term ‘altmetrics’. These new metrics are expected to serve as evidence of impact that can inform research evaluation and strategic decisions in science policy. However, their actual meaning, validity and usefulness are still open questions. A review of the most important empirical research on altmetrics will be discussed in order to better understand their main characteristics and features. A more conceptual discussion framing these new metrics will be presented in order to provide hints on how they could be considered for practical purposes.

 

Societal Impact
Nicolas Robinson Garcia, INGENIO (UPV-CSIC), Universitat Politècnica de València, Spain / Daniel Torres-Salinas, Universidad de Navarra and Universidad de Granada (EC3metrics & Medialab UGR), Spain

Recently there has been increasing pressure to develop indicators and methodologies that can offer evidence of the societal impact of researchers’ activity. This presentation will offer a comprehensive overview of the definition of societal impact, the types of impact, and the attribution problem encountered when searching for potential indicators. Special attention will be given to altmetric indicators and their potential role in tracing social engagement and its relation to societal impact. Examples of potential uses and current lines of work will be presented.

 

Bibliometrics and (e)valuation of research(er): On ethics and responsibility in "numbering" research(er)
Ulrike Felt, Professor of Science and Technology Studies and Dean of the Faculty of Social Sciences, University of Vienna, Austria

“In rankings we trust” is the title of a recent paper addressing the puzzle of why internet users in contemporary society trust distributed forms of assessment and ranking practices. In academia, mappings, rankings and indicators are supposed to allow for a constant comparative observation of how institutions, knowledge and people develop. Discussing publication patterns, how they are mapped by bibliometric studies, and how that in turn relates to academic assessment and, increasingly, to the self-understanding of (young) researchers, this presentation will explore two key questions:
1) How can we understand the relation between the technological possibilities of (self-)observation through bibliometric analysis and other assessment-supporting practices, on the one hand, and publication patterns and practices in the social sciences and humanities, on the other?
2) Why, when and to what degree do we develop trust in such measures, i.e. trust in numbers, maps and other ordering devices?
The presentation will address change in the research system and its impact on disciplinary orders, its relation to publication practices and, finally, how bibliometrics is becoming an increasingly powerful intermediary between research practices and their valuation within academic institutions, careers and beyond.




Friday, September 9th:
Facultad de Ciencias del Trabajo

Google Scholar: journal-, article- and author-level metrics
Isidro F. Aguillo, Cybermetrics Lab-Scimago (IPP-CSIC), Spain

The presentation reviews the metric characteristics of the Google Scholar (GS) database, with special attention to the GS Metrics (journals) and GS Citations (profiles) products. The topics analyzed include coverage in comparison with other citation databases, a description and analysis of the indicators provided directly by GS or indirectly by the 'Publish or Perish' software, and the ranking capabilities of the results. The main shortcomings of the different services are described, and suggestions for avoiding or limiting their impact in bibliometric studies are introduced. A case study will be presented using institutional profiles from GS Citations for Spain, deriving bibliometric indicators from author-level metrics.

 

Bibliometrics and Open Access
Eric Archambault, 1science & Science-Metrix, Canada

The Open Access (OA) model for scientific publications has been examined for years by academics, who have argued that it presents advantages in increasing accessibility and, consequently, the impact of papers.

It has been noted that OA availability has increased steadily over the years. However, current measurements have seriously underestimated the proportion of OA peer-reviewed articles. It is therefore necessary to develop new measurement methods. One challenge is to distinguish more clearly between Gold OA journals, hybrid OA in non-fully-Gold journals, and self-archiving (‘Green OA’).

This presentation examines the results of recent studies assessing the free availability of scholarly publications during different time periods, and the proportion of open access papers published in peer-reviewed journals at different levels. Different types of growth in freely available papers have been identified and analysed.

In conclusion, best practices for institutional repository management will be mentioned, and the opportunities and challenges faced by the OA model will be examined.