Abstracts



September 8th:
Tutorial Day

Bibliometric Crash Course
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria / Christian Gumpenberger, Bibliometrics Department, University of Vienna, Austria / Stefan Hornbostel, Institute for Research Information and Quality Assurance (iFQ), Germany / Sybille Hinze, Institute for Research Information and Quality Assurance (iFQ), Germany

An introduction to basic bibliometric terminology, concepts and data sources for participants with little prior experience in the field.


Navigation, Search and Analysis Features of the Web of Knowledge Platform
Tihomir Tsenkulovski, Customer Education Specialist, Thomson Reuters

This Thomson Reuters workshop on the Web of Knowledge platform will consist of three parts: navigation, customization and new search and analysis features of WOK 5.10 (released on 28 April 2013, with a focus on the Web of Science and the BIOSIS Citation Index); use and features of EndNote Web and ResearcherID; and search, interpretation and export of data from Journal Citation Reports.
In the first part of the workshop, we will explore multiple search techniques in the Web of Knowledge. Exercises will highlight the analysis features of each database, the possibilities of various author searches, the creation of proper citation reports, and the use of the cited reference search in the Web of Science. We will also review export options and look at the connection between the Web of Science and ResearcherID, as well as between the WOK platform and EndNote Web.

The second part will present the bibliographic software EndNote Web. In a few exercises we will learn how to import data into an EndNote Web library and how to format, organize, share and manage bibliographies. We will also insert citations into texts using the Cite-While-You-Write function and format Word documents in specific bibliographic styles.

Scopus: a Tool for Bibliometricians
Arthur Eger, Elsevier

Introduction to Scopus and what is new in the latest update;
Performing a document search;
Searching beyond Scopus: expansion into the scientific web, including half a billion websites, five patent offices and 80 selected sources;
Performing an author search and an affiliation search;
Creating a 'Citation tracker';
Working with 'Journal analytics';
Displaying and creating an 'h-index' (a minimal computation sketch follows this list);
Triple A: Alerts, Advanced search and APIs.
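As a minimal illustration of the h-index agenda point above, the sketch below computes the index from a plain list of citation counts; it is a generic Python example, independent of Scopus and not based on any Scopus API or export format.

def h_index(citations):
    # The h-index is the largest h such that at least h papers
    # have received at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts yield an h-index of 3.
print(h_index([10, 6, 3, 2, 1]))  # -> 3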



September 9th:
Introduction to Scientometrics: Theoretical and Practical Aspects
History and Institutionalization of Scientometrics
Stefan Hornbostel, Institute for Research Information and Quality Assurance (iFQ), Germany / Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

This lecture describes the context from which the field of scientometrics/bibliometrics has emerged. The discipline of scientometrics is characterised as a research field at the intersection of information science and science studies. Its emergence is closely linked to the growth of scientific information in the 20th century and the evolution from science to what de Solla Price called ‘big science’. The thematic and geographic diffusion of scientometrics since the 1960s, its present structure and the growing number of contemporary applications are discussed. Special attention is paid to the institutionalization process of the field; important milestones in its development and institutionalization are presented. Finally, the consequences of the ‘perspective shift’ in bibliometrics through science-policy use, economic interests and utilisation within the scientific reputation system, as well as the enormous acceleration of the field’s development caused by the IT revolution of the last fifteen years, are discussed.


Scientometric Indicators in Use: an Overview
Sybille Hinze, Institute for Research Information and Quality Assurance (iFQ), Germany

The use of scientometric indicators dates back to the 1960s and 1970s in the United States, where the first Science Indicators report was published in 1973. Since then a variety of indicators has emerged, aimed at reflecting various aspects of science and technology and their development. The presentation will give an overview of indicators and their use in science policy making, with a specific focus on indicators used in the context of research evaluation. In particular, indicators applied to measuring research performance at the various levels of aggregation, i.e. the macro, meso and micro levels, will be introduced. A range of aspects reflecting research performance will be addressed, such as research productivity and its dynamic development, the impact of research, collaboration, and thematic specialization. Options and limitations of the indicators introduced will be discussed.


Mathematical Foundation of Scientometrics
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

Scientometrics, like all metrics fields, relies on mathematical models, notably on mathematical statistics. The lecture briefly describes the mathematical foundation and basic postulates of bibliometrics, explains what publications and citations stand for, and how observations have to be assigned to the actual units of analysis. Although straightforward deterministic models can be used to describe many phenomena analysed in bibliometrics, the probabilistic approach provides the groundwork for more sophisticated models and indicators based on stochastic methods. The lecture introduces, in particular, models for publication and citation processes and shows how scientometric indicators can be derived from these models. Special attention is paid to the typical “fat tail” of scientometric distributions. Another important issue arising from the stochastic approach is statistical reliability, notably the asymptotic unbiasedness and consistency of estimators and the construction of confidence intervals for indicators.
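As a simple illustration of the last point (a sketch added here, not part of the lecture outline): for the citation counts X_1, ..., X_n of a unit's n publications, with sample mean and sample standard deviation s, an approximate confidence interval for the expected citation rate can be written in LaTeX as

\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \bar{X} \;\pm\; z_{1-\alpha/2}\,\frac{s}{\sqrt{n}},

where z_{1-\alpha/2} is the corresponding standard normal quantile; the heavy tail of citation distributions means that n must be large for this asymptotic approximation to be reliable.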


Journal-Level Classifications - Current State of the Art
Éric Archambault, Science-Metrix, Canada

Journal-level classifications play an important role for researchers who want to send a paper to a journal in a field of research that is relevant to the manuscript's content. Importantly, journal-level classifications have also been used to produce statistics on scientific production. We will briefly examine the origin of journal classifications in bibliometrics, with the pioneering work of CHI Research in the 1970s. Current classifications will be examined, as well as the various techniques that can be brought to bear when building a classification, including clustering techniques and the use of human expertise.

Bibliometric Methods for Subject Delineation
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Bart Thijs, Centre for R&D Monitoring (ECOOM), Dept MSI, Katholieke Universiteit Leuven, Belgium

Subject delineation has become a central issue in so-called “domain studies”. Science policy addresses new emerging or complex interdisciplinary topics whose delineation is particularly difficult. The delineation of these topics or domains is, on the one hand, strongly related to information retrieval, since rather traditional “search strategies” using core journals, keywords and phrases can often be applied; on the other hand, the goals and methods of advanced subject delineation differ essentially from those of usual retrieval.
Proper subject delineation is also necessary to find correct reference standards for benchmarking the research performance of the actors in the topic under study.
The first part of the lecture will focus on traditional techniques that can easily be developed for and used in the online versions of bibliographic databases.
The second part will introduce “bibliometrics-aided” retrieval. One of the main methodological characteristics of bibliometrics-aided retrieval is that bibliometrics allows including ‘metric’ components in search strategies. In the course of the lecture it will be shown how lexical and citation-based components can be used to gradually extend the original core (or seed) of surely relevant documents previously obtained from traditional literature searches.

The Web of Science offers the option of related records (based on bibliographic coupling) while Scopus uses keywords. Results can be filtered by their relevance and additional related documents can be added to the core set using thresholds. The application of direct citation links or more advanced textual similarities is again reserved for a rather small group of users with access to custom data. In this case, too, thresholds can be set to filter noise and to control precision and granularity.
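A minimal sketch of such a threshold-based core extension, assuming hypothetical in-memory data (a seed set of document IDs and a mapping from candidate documents to the documents they are linked to by citation or lexical similarity); it is illustrative only and not tied to the Web of Science or Scopus.

def extend_core(seed, links, threshold=2):
    # Add every candidate document that is linked to at least
    # `threshold` documents of the current core set.
    core = set(seed)
    added = {}
    for doc, linked in links.items():
        if doc in core:
            continue
        hits = len(core & set(linked))
        if hits >= threshold:
            added[doc] = hits
    core.update(added)
    return core, added

# Toy example: document D is linked to two seed documents and is added.
seed = {"A", "B", "C"}
links = {"D": {"A", "B"}, "E": {"C"}}
print(extend_core(seed, links, threshold=2))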


The Application of Network Analysis in Science Studies: Common Theoretical Background for Broad Applications
Bart Thijs, Centre for R&D Monitoring (ECOOM), Dept MSI, Katholieke Universiteit Leuven, Belgium

Network analysis in scientometrics provides a powerful set of tools and techniques to uncover the relations among, and the structure and development of, different actors in science. It is often referred to as mapping of science and can be applied to all entities associated with science, such as disciplines, journals, institutions and researchers. This lecture will focus mainly on different measures of relations between entities, addressing both classical approaches and newer techniques of network analysis in an application-oriented manner within a solid theoretical framework.
Relations based on citations and references include bibliographic coupling, co-citation and cross-citation. Other direct links between entities include co-authorship, institutional collaboration and international collaboration. Lexical approaches such as co-word analysis and text mining will also be addressed.
Each of these measures has its own properties, which can have strong implications for the applicability of the analytical techniques. In order to improve the distinctive capabilities of these measures, new hybrid approaches have been proposed.
The lecture will also deal with several analytical tools and visualization techniques that are suitable for capturing the underlying structure. Clustering techniques like k-means or Ward’s hierarchical clustering are proven methods for classifying the entities; modularity-based clustering has become a popular alternative.
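As an illustrative sketch with toy data (not part of the lecture material): the bibliographic coupling strength of two documents is the number of references they share, which can be normalised, for example, with Salton's cosine measure.

import math

def bibliographic_coupling(refs_a, refs_b):
    # Raw coupling strength = number of shared references;
    # Salton's cosine normalises it by the sizes of the reference lists.
    shared = len(refs_a & refs_b)
    if not refs_a or not refs_b:
        return shared, 0.0
    return shared, shared / math.sqrt(len(refs_a) * len(refs_b))

# Toy example: two documents sharing two of their references.
print(bibliographic_coupling({"r1", "r2", "r3", "r4"}, {"r2", "r3", "r5"}))
# -> (2, 0.577...)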


Policy Use of Bibliometric Evaluation and its Repercussions on the Scientific Community
Koenraad Debackere, Katholieke Universiteit (K.U.) Leuven

Modern science policy firmly relies on bibliometric data & indicators to assess the scientific performance of research institutions, research groups and even individual researchers. In addition, benchmarking the scientific performance of countries and regions is another item on the agenda of evaluative science policy. During the presentation, the repercussions of this policy use of bibliometric evaluation will be dealt with along three lines of thought and reflection. First, recent trends and insights into data and indicator use for evaluative science policy will be highlighted. Second, an overview of current policy frameworks will be presented, taking into account the recent trend to link scientific performance to so-called smart specialization policies. Third, we will reflect upon the multifaceted impact those trends have (or may have) on the scientific community and (in the limit) the behavior of individual scientists.


September 10th:
Individual and Institutional Evaluation

Evaluating Research-Performing People and Institutions: Bibliometrics in Context
Erik Arnold, Technopolis, Germany

As bibliometric techniques have become cheaper and easier to apply at the micro level, they have become increasingly useful in evaluations of organisations and programmes, as well as in their earlier role of comparing research performance within disciplines and across countries. This presentation describes trends in R&D evaluation studies over the past 20-30 years, the evolution of evaluation tools and the growing role of bibliometrics. It provides examples of organisational evaluations at the level of research funders, national performance-based funding systems and programme evaluations. It also demonstrates opportunities to use bibliometrics in planning and evaluating international research cooperation.



Advanced Bibliometric Methods for Evaluation, Ranking and Mapping of Scientific Research and its Institutions
Anthony van Raan, CWTS - Centre for Science and Technology Studies, Leiden University, The Netherlands

We present an overview of the latest developments in ‘measuring science’ based on bibliometric methods. Advanced bibliometric methods are an indispensable element next to peer review in research evaluation procedures at the level of research groups, university departments, institutes, and research programs of research councils and charities.
Our central topic is the role of citation and concept networks as a natural basis for both the construction of performance indicators and the construction of science maps. In this context we discuss the empirical behavior and functionality of indicators; the definition of research fields; proper normalization procedures, particularly in view of the large differences in citation density between fields (MNCS, SNIP); the consequences of the skewed distribution of citation impact within fields and within journals; and the potential and limitations of bibliometric indicators for engineering, the social sciences and the humanities. We show the potential of bibliometric science mapping as a unique instrument to discover patterns in the structure of scientific fields, to identify processes of knowledge dissemination, to visualize the dynamics of scientific developments, and to map research related to important socio-economic themes. A special focus will be on the ranking and benchmarking of universities, particularly the Leiden Ranking 2013 Version in comparison with the THE and Shanghai rankings.
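For reference, the mean normalised citation score (MNCS) mentioned above is commonly defined as follows (the standard formulation, given here only as a pointer):

\mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i}

where c_i is the number of citations of publication i and e_i the expected (average) number of citations of publications of the same field, publication year and document type; MNCS = 1 corresponds to world-average impact.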


New Developments in Bibliometrics and Research Assessment
Henk Moed, Senior Scientific Advisor, Elsevier


This presentation consists of two parts. The first part is an introduction to the use of bibliometric indicators in research assessment, aimed at showing the boundaries of the playing field and highlighting important rules of the game. It underlines the potential value of bibliometrics in consolidating academic freedom. It stresses the relevance of assessing the societal impact of research, but emphasizes at the same time that one must be cautious with the actual application of such indicators in a policy context.

The second part identifies major trends in the field of bibliometrics and focuses on the creation of large, compound databases by combining different datasets. Typical examples are the integration of citation indexes with patent databases and with "usage" data on the number of times articles are downloaded in full text from publication archives; the analysis of full texts to characterize the context of citations; and the combination of bibliometric indicators with statistics obtained from national surveys. Significant outcomes of studies based on such compound databases are presented, and their technical and conceptual difficulties and limitations are discussed.


An Exploration of German Research University Collaboration and Comparative Analysis of Publication Output and Impact: Collaboration between German Universities, German-European Collaboration and German-International (outside Europe)
Jeff Clovis, Senior Director, Customer Education & Sales Support, Scientific & Scholarly Research, Thomson Reuters

Research does not stop at national boundaries. International collaboration has become a necessary part of research, especially for securing funding internally and externally. The European Union’s publication output continues to grow exponentially. This presentation will look at German research university collaboration, focusing on the top 10 German research universities in several major subject categories and in different periods of time. The publication output, impact and collaboration network will be explored, as well as a comparison of impact and relative impact.


Research Evaluation at the University of Zurich
Hans-Dieter Daniel, University of Zurich, Switzerland

The Evaluation Office of the University of Zurich is mandated to organize and supervise evaluations of research, teaching, management and administration (cf. http://www.evaluation.uzh.ch/index_en.html). For the evaluation of research in the sciences, life sciences and medicine, a bibliometric analysis of the publications of the institute or department under evaluation is carried out, based on the list of publications provided by the members of the institute. All publications are considered, regardless of the person’s employment at the date of publication – that is, a ‘current potential analysis’ is carried out. The following bibliometric indicators are used: (1) Number of Citations, (2) Journal-based Reference Citation Value, (3) Average Journal Impact Factor, (4) Weighted Average Journal Impact Factor for all Subject Categories, (5) Journal-based Relative Citation Eminence, (6) Publication Strategy Index, and (7) Subject-category-based Relative Citation Eminence. Statistical outliers (cf. Bornmann et al., 2008) are included in the calculation of the bibliometric indicators. If statistical outliers have noticeable effects on the overall results of the citation analysis, this is mentioned in the bibliometric report. The data sources used for citation analysis at the Evaluation Office of the University of Zurich are: Web of Knowledge, Scopus, Chemical Abstracts, INSPIRE, MathSciNet, and Google Scholar (cf. Neuhaus & Daniel, 2008; Daniel et al., 2009).

References

Bornmann, L., Marx, W., Schier, H., Rahm, E., Thor, A., & Daniel, H.-D. (2009). Convergent validity of bibliometric Google Scholar data in the field of chemistry - citation counts for papers that were accepted by Angewandte Chemie International Edition or rejected but published elsewhere, using Google Scholar, Science Citation Index, Scopus, and Chemical Abstracts. Journal of Informetrics, 3(1), 27-35.

Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8(1), 93-102. DOI: 10.3354/esep00084

Neuhaus, C. & Daniel, H.-D. (2008). Data sources for performing citation analysis: an overview. Journal of Documentation, 64(2), 193-210. DOI: 10.1108/00220410810858010


Off to New Horizons: the Crucial Role of Libraries in Bibliometric Analyses
Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria / Christian Gumpenberger, Bibliometrics Department, University of Vienna, Austria

Bibliometrics is an ideal field for librarians to develop and provide innovative services for both academic and administrative university staff. In doing so they actively participate in the development of new strategies and in fostering innovation. Peer review is increasingly complemented by quantitative methods such as bibliometrics, and librarians are predestined to fill this role and strengthen their on-campus position. Furthermore, bibliometrics is an emerging field in information science; librarians should therefore make use of the experience gained from the bibliometric services they provide or the projects they engage in, and disseminate their findings in the scientific community.
The Bibliometrics Department in Vienna has been established within the Library and Archive Services of the University of Vienna. It can serve as a role model for other academic librarians who wish to become more engaged in this field or even plan to implement corresponding services.


The Role of Indicators in Informed Peer Review: Practical Observations
Rainer Lange, German Council of Science and Humanities (Wissenschaftsrat), Germany

Professional scientometric analysis is customarily presented with a cautionary note: indicators, it is stated, do not speak for themselves but need to be interpreted by experts. Drawing on several years of research assessment experience gained while organizing the pilot phase of a German research rating, I will discuss both the added value of indicators for peer review and the corrective function of peer review for scientometric analysis. It is claimed that embedding scientometric analysis in peer review processes severely constrains the scope and depth of technical analyses.


September 11th:
Seminars Day 1
Data Cleaning and Processing
Matthias Winterhager, Bielefeld University, Institute of Science and Technology Studies (IWT), Germany

The quality of bibliometric analyses depends heavily on the appropriate handling of the relevant raw data fields. Depending on the level of aggregation and the target objects under study, various accuracy issues can arise with citation links and with several data elements (document type, author, institution, country, journal, field and discipline).

We will take a close look at the relevant data fields in modern citation databases such as the Web of Science or Scopus to see whether they are “ready to use” for all kinds of bibliometric studies. The main data quality problems will be shown, and the major types of errors and their consequences will be discussed.
Standardisation, verification and the introduction of identifiers can help to overcome problems of data quality. Data processing approaches of the German competence centre for bibliometrics will be demonstrated.


Author Identification
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

The seminar lecture focuses on the identification of authors and the disambiguation of their names. This has become a key prerequisite for individual-level bibliometrics. The identification of author self-citations also requires the correct assignment of names to authors. Although Thomson Reuters and Elsevier offer Researcher/Author IDs, thorough author identification and name disambiguation is only partially feasible on the basis of these IDs. Typical problems in dealing with these IDs will be discussed.
In the course of the lecture it will be shown how the standardisation of names and initials, in combination with institutional assignment, IDs and external sources, can be used to identify authors in the Web of Science and Scopus databases.

Computerised techniques based, for instance, on N-grams can considerably facilitate the matching of external sources such as author publication lists or CVs with bibliographic databases. This approach is briefly described in the lecture. Because of possible type I and type II errors and the sensitivity of the matter, final manual correction of the results of such automated processes remains indispensable.
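A minimal sketch of N-gram-based name matching (illustrative only; the trigram size, the Dice coefficient and the normalisation are assumptions, and any real workflow would still require manual verification, as noted above).

def ngrams(name, n=3):
    # Character n-grams of a lower-cased, whitespace-normalised name.
    name = " ".join(name.lower().split())
    return {name[i:i + n] for i in range(max(len(name) - n + 1, 0))}

def name_similarity(name_a, name_b, n=3):
    # Dice coefficient of the two names' character n-gram sets.
    a, b = ngrams(name_a, n), ngrams(name_b, n)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Example: comparing a CV entry with a database author string.
print(name_similarity("Glanzel, Wolfgang", "Glanzel, W"))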

Validating Publication Data: Objectives, Tools, Processes and Results
Marion Schmidt, Institute for Research Information and Quality Assurance (iFQ), Germany

A publication corpus for a bibliometric analysis of an institution or research group is commonly delineated by searching for person names in combination with institutional addresses. In order to ensure the highest possible data quality, scientists may be asked to validate the data retrieved for them. In this paper, a tool designed for this purpose, the validation process itself and the results are presented, using the example of a bibliometric assessment of chemistry and physics institutes at three German universities.


Impact Measures - Hands-on Session
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium / Juan Gorraiz, Bibliometrics Department, University of Vienna, Austria

The seminar on impact measures will first shed light on the best known and most controversial indicator, namely Garfield’s journal impact factor. Its strengths and weaknesses, as well as its correct use, will be discussed. Moreover, the corresponding analytical tool, Thomson Reuters’ Journal Citation Reports, will be demonstrated.
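For reference, Garfield’s two-year journal impact factor for a year Y is the number of citations received in Y by the items the journal published in the two preceding years, divided by the number of citable items it published in those two years:

\mathrm{JIF}_{Y} = \frac{C_{Y}(P_{Y-1}) + C_{Y}(P_{Y-2})}{N_{Y-1} + N_{Y-2}}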

Alternative impact measures such as the Eigenfactor metrics, SJR and SNIP have been introduced in recent years and will be presented to complete the picture.

The theoretical knowledge imparted will finally be consolidated in practical exercises.



September 12th:
Seminars Day 2
Subject Normalization for Citation Analysis
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

Subject normalisation is a fundamental requirement for citation analysis in a multidisciplinary environment. Currently two fundamental approaches exist, the so-called source- and citing-side normalisation or, in another terminology, a priori and a posteriori normalisation. Both methods will be introduced and described. Although the a priori normalisation represents the more advanced methodology, its application is reserved for a rather small group of users, because it requires access to and processing of the complete database (Web of Science or Scopus), since in this approach citations have to be normalised before they are counted. Knowledge about this normalisation technique is nevertheless important because this future-oriented methodology is already applied by the larger bibliometric centres.
The second method is rather conservative, but can be applied by any user who has access to the online version of the Web of Science or SCOPUS. The main characteristic of a posteriori normalisation is that citation counts are normalised after counting on the basis of proper reference values.
Advantages and disadvantages of both methods are discussed and examples for the second approach are calculated.
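A small illustrative calculation of the a posteriori approach with toy numbers (not taken from any real dataset): each paper's citation count is divided by the reference value of its field, year and document type, and the ratios are then averaged.

# (observed citations, expected field/year reference value) per paper
papers = [(12, 8.0), (3, 6.0), (0, 2.5)]

normalised = [obs / exp for obs, exp in papers]        # 1.5, 0.5, 0.0
mean_normalised = sum(normalised) / len(normalised)    # ~0.67
print(normalised, mean_normalised)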



The Funding Acknowledgements in the Thomson Reuters Database: Potentials and Problems of a New Bibliometric Data Source
Daniel Sirtes, Institute for Research Information and Quality Assurance (iFQ), Germany

Since August 2008 the Web of Science database (WoS) has included funding acknowledgements: information on the agency or organization that provided financial support for the research underlying the published article. Furthermore, if available, the acknowledgements include the specific program or even the specific grant of that agency. With this kind of information at hand, new kinds of inquiry into the science system become possible. Following an overview of the structure, the coverage and the particular problems arising from non-unified funding organization entries (and how to solve them), two examples of such new analyses for the German Research Foundation are provided.


Research Collaboration Measured by Co-Authorship
Wolfgang Glänzel, Centre for R&D Monitoring (ECOOM), Katholieke Universiteit Leuven, Belgium

Co-authorship can be used as a proxy for research collaboration at higher levels of aggregation, e.g., in the case of institutional or international collaboration. But even at the level of research teams and individual scientists, co-authorship patterns reveal important information about main actors and their role in the network of scholarly communication. 
In the first part of the lecture, the analysis of co-authorship networks at the micro, meso and macro level is described. The strength of co-authorship links among individual scientists, institutions or countries is preferably determined using appropriate similarity measures. Co-authorship networks can readily be visualised using suitable software that is freely available for non-commercial use; in this lecture “Pajek” will be used.

In the second part, bibliometric indicators for the analysis of research collaboration at the meso and macro level will be introduced. It will be shown how indicators and similarity measures can be calculated using the “analyse results” and “citation report” tools in the online version of the Web of Science.
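One widely used similarity measure for such collaboration links is Salton’s cosine (shown here only for orientation): for two units i and j with p_i and p_j publications and c_ij joint publications,

r_{ij} = \frac{c_{ij}}{\sqrt{p_i \, p_j}}

so that the link strength is normalised by the publication volumes of the two units.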



Mapping Science (on the Basis of Bibexcel Software)
Olle Persson, Sociology Department, Umeå universitet, Sweden

Purpose: To introduce the basic skills needed to produce maps with special reference to bibliometric data.
Learning outcomes:
The students learn how to:
(1) Prepare data including converting downloaded records, extracting and editing data,
(2) Calculate measures of relatedness, including citations, co-citations and shared references, and keyword analysis,
(3) Make maps using Pajek and similar drawing software.
Teaching method: Short lectures with exercises.
Students should download the latest versions of Bibexcel and Pajek from the Internet.

Study material will be made available in advance of the course start.


September 13th:
Seminars Day 3
Workshop: Contribution to Research Evaluation 1 & 2