Preliminary Conference Agenda

Session: Papers 5: Measuring and Tracking Scientific Literature
Time: Monday, 01/Apr/2019, 1:30pm - 3:00pm

Session Chair: Peter Darch, University of Illinois at Urbana-Champaign
Location: 2110/2111/2112

Presentations

Dead science: most resources linked in biomedical articles disappear in eight years

T. Zeng1,2, A. Shema1, D. Acuna1

1School of Information Studies, Syracuse University, United States of America; 2School of Information Management, Nanjing University, China

Scientific progress critically depends on disseminating the analytic pipelines and datasets that make results reproducible and replicable. Increasingly, researchers make these resources available for wider reuse and embed links to them in their published manuscripts. Previous research has shown that such resources become unavailable over time, but the extent and causes of this problem in open access publications have not been well explored. Using 1.9 million articles from PubMed Open Access, we estimate that half of all linked resources become unavailable within eight years. We find that the number of times a resource has been used, the international (int) and organization (org) domain suffixes, and the number of author affiliations are positively related to a resource remaining available. In contrast, the length of the URL, the Indian (in), European Union (eu), and Chinese (cn) domain suffixes, and abstract length are negatively related to resource availability. Our results contribute to our understanding of resource sharing in science and offer guidance for mitigating resource decay.
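
A minimal sketch of the kind of survival analysis this estimate implies: a Kaplan-Meier curve over URL lifetimes, whose median corresponds to the "half of all resources unavailable within eight years" figure. The lifelines dependency, the record layout, and the toy values here are illustrative assumptions, not the authors' pipeline.

    # Sketch: estimate the median lifetime of linked URLs with a
    # Kaplan-Meier survival curve. Hypothetical input: one record per
    # URL with years observed and whether its disappearance was seen.
    from lifelines import KaplanMeierFitter  # assumed dependency

    records = [
        # (years_from_publication_to_last_check, went_dead)
        (3.0, True),    # resource disappeared after 3 years
        (8.5, False),   # still alive at last check (right-censored)
        (6.2, True),
        (10.0, False),
    ]

    durations = [t for t, _ in records]
    observed = [dead for _, dead in records]

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=observed)

    # A median near 8 would mirror the paper's headline estimate.
    print("Estimated median URL lifetime (years):",
          kmf.median_survival_time_)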



Are papers with open data more credible? An analysis of open data availability in retracted PLoS articles

M. Lesk1, J. Bially Mattern2, H. Moulaison Sandy3

1Rutgers, United States of America; 2Villanova, United States of America; 3University of Missouri, United States of America

Open data has been hailed as an important corrective for the credibility crisis in science. This paper makes an initial attempt to measure the relationship between open data and credible research by analyzing the number of retracted articles with attached or open data in an open access science journal. Using Retraction Watch, we identify retracted papers published in PLoS between 2014 and 2018. Of the 152 retracted papers, fewer than 15% attached their data. Since about half of all published articles have open data, and so few of the retracted ones do, we put forth the preliminary notion that open data, especially high-quality and well-curated data, might signal scientific credibility.
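
As a back-of-the-envelope check on the abstract's reasoning, a one-sided binomial test can ask whether a sub-15% data-attachment rate among 152 retracted papers is plausible under the roughly 50% baseline for published articles. The counts below are read off the abstract, and the test choice is an assumption for illustration, not the authors' analysis.

    # Sketch: compare the data-sharing rate among retracted papers
    # against the ~50% baseline reported for published PLoS articles.
    from scipy.stats import binomtest

    n_retracted = 152
    n_with_data = 22     # roughly 15% of 152, per the abstract
    baseline = 0.5       # approximate share of articles with open data

    result = binomtest(n_with_data, n_retracted, p=baseline,
                       alternative="less")
    print(f"Observed rate: {n_with_data / n_retracted:.1%}")
    print(f"One-sided p-value vs. 50% baseline: {result.pvalue:.2e}")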



The Spread and Mutation of Science Misinformation

A. Korsunska

Temple University, Philadelphia, PA, USA

As the media environment has shifted toward digitization, the roles of creating, curating, and correcting information have moved from professional “gatekeeper” journalists to a broader media industry and the general public. This shift has led to the spread of misinformation. Though political “fake news” is currently a popular area of study, this study investigates another, related phenomenon: science misinformation. Consistent exposure to science misinformation has been shown to cultivate false beliefs about the risks, causes, and prevalence of illnesses and to discourage the public from making healthy lifestyle changes. Despite the need for more research, studies of how science misinformation disseminates are scarce. Through a case study that traces the spread of information about one specific article through hyperlink citations, this study adds valuable insights into the inner workings of media networks, conceptualizations of misinformation spread, and methodological approaches to multi-platform misinformation tracing. The case study illustrates the over-reliance of media sources on secondary information and the novel phenomenon of constantly mutating online content. The original misinformant can remove the misleading information; as a result, all subsequent articles end up attributing the misinformation to a source in which it no longer exists. This ability to update content online breaks the information flow process: news stories no longer represent a snapshot in time but are instead living, mutating organisms, making any study of media networks increasingly complex.
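
A minimal sketch of the multi-platform hyperlink tracing the case study describes: crawl outward from a seed article, build a directed citation graph, and snapshot each page's text with a timestamp, since sources can later mutate or vanish. The seed URL, crawl depth, and libraries are assumptions for illustration, not the paper's actual toolchain.

    # Sketch: trace hyperlink citations from a seed article and keep a
    # timestamped snapshot of each page, because online content can be
    # edited or removed after the fact.
    import datetime

    import networkx as nx
    import requests
    from bs4 import BeautifulSoup

    def trace(seed_url, depth=2):
        graph = nx.DiGraph()
        frontier = [(seed_url, 0)]
        seen = set()
        while frontier:
            url, level = frontier.pop()
            if url in seen or level > depth:
                continue
            seen.add(url)
            try:
                html = requests.get(url, timeout=10).text
            except requests.RequestException:
                graph.add_node(url, dead=True)  # link rot: source gone
                continue
            soup = BeautifulSoup(html, "html.parser")
            # Snapshot now; a later edit to the source would otherwise
            # erase the trail the misinformation followed.
            graph.add_node(url,
                           snapshot=soup.get_text(" ", strip=True),
                           fetched=datetime.datetime.utcnow().isoformat())
            for a in soup.find_all("a", href=True):
                if a["href"].startswith("http"):
                    graph.add_edge(url, a["href"])
                    frontier.append((a["href"], level + 1))
        return graph

    g = trace("https://example.com/science-article")  # hypothetical seed
    print(g.number_of_nodes(), "pages;", g.number_of_edges(), "links")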



Exploring Scholarly Impact Metrics in Receipt of Highly Prestigious Awards

D. J. Lee1, K. Mutya2, B. E. Herbert1, E. V. Mejia1

1University Libraries, Texas A&M University, United States of America; 2Industrial and Systems Engineering Department, Texas A&M University, United States of America

The authoritative data that underlies research information management (RIM) systems supports fine-grained analyses of faculty members’ research practices and output, data-driven decision making, and organizational research management. Administrators at Texas A&M University (TAMU) asked the University Libraries to develop institutional reports that describe faculty research practices and identify their research strengths. The library runs Scholars@TAMU (https://scholars.library.tamu.edu/), based on VIVO, a member-supported, open-source, semantic-web software program, as the university’s RIM system. This paper explores the scholarly impact and collaboration practices of senior faculty members in the College of Engineering at TAMU and identifies relationships between their impact metrics and collaboration practices. Full professors were divided into three groups: (1) those who received highly prestigious (HP) awards, (2) those who received prestigious (P) awards, and (3) those who received no award categorized as HP or P by the National Research Council. The study’s results showed that faculty members with HP awards had significantly higher mean ranks for their total citation count, the citation count of their top-cited article, their h-index, and their total number of publications than did faculty members without any HP or P award. These findings can inform researchers, university administrators, and the bibliometrics community about the use of awards as an indicator that corresponds to other research performance indicators. Researchers could also use the study’s results to develop a machine-learning model that identifies faculty who are on track to win HP awards.
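
The comparison of “mean ranks” across three groups suggests a rank-based test such as Kruskal-Wallis; a minimal sketch follows, with hypothetical h-index samples standing in for the study’s data. Both the test choice and the values are assumptions for illustration.

    # Sketch: compare an impact metric (here, h-index) across the three
    # award groups by mean ranks, as a Kruskal-Wallis test would.
    from scipy.stats import kruskal

    h_index = {
        "HP awards": [45, 52, 38, 61, 47],
        "P awards": [33, 29, 41, 36, 30],
        "no HP/P awards": [22, 27, 19, 31, 25],
    }

    stat, p = kruskal(*h_index.values())
    print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
    # A small p says at least one group's mean rank differs; pairwise
    # follow-ups (e.g., Mann-Whitney U) would locate which.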