NISO Project Brings Scientific Evaluation Into the 21st Century With Altmetrics
by Nancy K. Herther
Posted On June 25, 2013
The National Information Standards Organization (NISO) announced a new, two-part project on June 20 “to study, propose, and develop community-based standards or recommended practices in the field of alternative metrics.”

The two-part project is funded by a $207,500 grant from the Alfred P. Sloan Foundation and is expected to take 2 years to complete. “Citation analysis lacks ways to measure the newer and more prevalent ways that articles generate impact such as through social networking tools like Twitter, Facebook, or blogs,” according to Nettie Lagace, NISO’s associate director for programs, who explains some details in the announcement. “Additionally, new forms of scholarly outputs, such as datasets, software tools, algorithms, or molecular structures are now commonplace, but they are not easily—if at all—assessed by traditional citation metrics. These are two among the many concerns the growing movement around altmetrics is trying to address.” The Sloan Foundation’s Joshua M. Greenberg notes in the release that “[w]ith its long history of crafting just such standards, NISO is uniquely positioned to help take altmetrics to the next level.”

Metrics and Altmetrics

For the past 50 years, the primary gauge of research quality has been the Impact Factor and the related indicators originally developed by Eugene Garfield, an information scientist, founder of the Institute for Scientific Information, and one of the founders of bibliometrics and scientometrics. These measures, as interesting and valuable as they may be for their intended use, are clearly inadequate for assessing the value and impact of the entire research enterprise. Still, giving Garfield his due, he was far ahead of his time in trying to rationalize journal quality and in identifying factors that could distinguish quality journals. Today's measures and efforts still build on the groundwork Garfield first laid a half-century ago.
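For readers who haven't encountered it, the journal Impact Factor that grew out of Garfield's work is a simple two-year citation ratio. This standard formulation is offered here for context (it is not part of the NISO announcement):

    \mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}

Here, C_y(t) is the number of citations received in year y to items the journal published in year t, and N_t is the number of citable items published in year t. The formula's limits are visible on inspection: it scores whole journals rather than individual articles, and it counts only formal citations, which is precisely the gap that article-level metrics and altmetrics set out to fill.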

Research and academe are changing just as radically as journalism, book publishing, and other fields: the rise of for-profit institutions, massive open online courses (MOOCs), social media, global-scale collaboration, open access (OA) publishing, independent online reviews of individual faculty and classes, etc. Public perception of the value and outputs of research and academe is also constantly in the news. With so much communication now happening online (e.g., in tweets, blogs, and Facebook posts), research itself is moving to the internet.

Article-level metrics (ALMs) are one effort to sharpen the measurement of impact by providing "much-needed new checks and balances, greater speed of feedback, and superior relationship mapping and influence tracking, none of which can be replicated by the traditional impact factor. They can form the basis of recommendation and collaborative filtering systems able to power navigation and discovery of articles synchronized to the needs of the researcher, publisher, institutional decision-maker, or funder," according to the ALM website. The Public Library of Science (PLOS), which sponsors ALM summits, has taken leadership in this area. At the same time, the scholarly, peer-reviewed article is no longer the sole (or perhaps even the key) indicator of research excellence and value.
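To make the idea concrete, here is a minimal sketch in Python of how an ALM service might fold several signals into a single relative score. Everything in it (the sources, the weights, the log damping, the DOIs) is a hypothetical illustration, not PLOS's actual method:

    # Illustrative sketch only: how an article-level metrics (ALM) service
    # might combine signals into one relative score. Sources, weights, and
    # normalization here are hypothetical, not PLOS's actual method.
    from math import log1p

    # Hypothetical per-article event counts gathered from different sources
    articles = {
        "10.1371/example.0001": {"views": 5400, "downloads": 310, "tweets": 42, "bookmarks": 18, "citations": 3},
        "10.1371/example.0002": {"views": 900, "downloads": 65, "tweets": 2, "bookmarks": 1, "citations": 7},
    }

    # Hypothetical weights: usage signals vs. attention signals vs. citations
    WEIGHTS = {"views": 0.5, "downloads": 1.0, "tweets": 2.0, "bookmarks": 2.0, "citations": 5.0}

    def composite_score(counts: dict) -> float:
        # log1p damps the effect of a viral outlier in any single channel
        return sum(WEIGHTS[k] * log1p(v) for k, v in counts.items())

    for doi, counts in articles.items():
        print(doi, round(composite_score(counts), 2))

The one design choice worth noting is the damping: without it, a single fast-moving channel (tweets, say) would swamp slower but arguably weightier signals such as citations.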

“When I coined the term ‘altmetrics,’ I certainly had no idea it’d resonate the way it has,” says Jason Priem, University of North Carolina–Chapel Hill doctoral student. “I like to think the work I’ve done since has played some part in this, but to be honest it really has much less to do with me, and much more to do with the broader zeitgeist. More specifically, I think altmetrics interest is tapping into (at least), first, the growth of interest in open and web-native scholarship, and second a cultural trend toward more big-data-driven, evidence-based decision-making in everything from campaign strategy to managing professional sports teams.”

Priem sees the NISO effort as "very well timed, given the growing interest in using altmetrics to inform policy and evaluation decisions. I think these uses of altmetrics are potentially very powerful—but they're not without dangers. We mustn't forget that it took many years to build a research base and practical infrastructure to support decision-making around citation metrics. Similarly, there's a lot of work that needs to be done before we'll realize the potential of altmetrics to reward open, diverse, and engaged scholarship. Inclusive, open, and spirited conversations around best practices and standards are a big part of that. So we're very pleased at this announcement, and really looking forward to participating in these conversations."

Altmetrics is still in its formative stages, and much of the discussion remains focused on whether altmetrics complements or replaces traditional citation-based methods, or whether it is better seen as a set of tools for measuring various forms of communication as indicators of reach, value, and impact. Judy Luther explains in a recent blog post that "altmetrics attempts to close that gap by providing more timely measures that are also more pertinent to the researcher and their article. Usage metrics from downloads and blogs, and attention metrics such as tweets and bookmarks, can provide immediate indicators of interest. Although metrics associated with these activities are in the developmental stage, there is growing investment in the broader landscape to produce more current metrics that serve the researcher, their communities, and funding agencies."

Many people question whether this is really an "alternative" or just a natural development in the evaluation process. Traditional citation analysis has been, by definition, a retrospective process focused on the most highly cited papers and authors, which has made it easy to vet candidates for the Nobel Prize, for example. But few researchers win a Nobel Prize. With more and better tools, growing stores of Big Data, and the spread of ongoing evaluation and assessment across the research enterprise, these efforts promise to move beyond the inherent limits of citation analysis and measure the contributions of anyone in the research environment. As such, the term "altmetrics" may be a misnomer, since what is really at stake is the ability to move from a narrow focus on the tip of the research iceberg to a more democratic, inclusive understanding of the performance and requirements of quality research in all fields.

A Two-Year Plan

There are already studies that validate this direction. For example, a Canadian researcher studied tweets sent about new research publications and found that “tweets can predict highly cited articles within the first three days of article publication.” Another recent study found that blogging about press releases for new scholarly journal articles to journalists and bloggers “generates a positive impact on the number of citations that publicized journal articles receive.” A study of Twitter mentions of preprints from arXiv found “the volume of Twitter mentions is statistically correlated with arXiv downloads and early citations just months after the publication of a preprint, with a possible bias that favors highly mentioned articles.” Clearly, this is an area that deserves more attention.
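The statistical machinery behind findings like these is typically a rank correlation between early attention counts and later citation counts. The sketch below shows the general shape of such an analysis on made-up numbers; the data are hypothetical, and the method (Spearman's rho via SciPy) is a common choice, not necessarily the one used in the cited studies:

    # Illustrative only: correlating early tweet counts with later citations
    # for a hypothetical sample of articles.
    from scipy.stats import spearmanr

    tweets_first_3_days = [0, 1, 2, 4, 7, 12, 25, 60]   # hypothetical counts
    citations_after_2y = [1, 0, 3, 2, 5, 9, 14, 22]     # hypothetical counts

    rho, p_value = spearmanr(tweets_first_3_days, citations_after_2y)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

A high rho with a small p-value would suggest that early attention tracks later impact, though correlation alone cannot rule out the bias toward highly mentioned articles that the arXiv study flags.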

“For altmetrics to move out of its current pilot and proof-of-concept phase, the community must begin coalescing around a suite of commonly understood definitions, calculations, and data sharing practices,” according to a statement from Todd Carpenter, NISO executive director. “We must agree on what gets measured, what the criteria are for assessing the quality of the measures, at what granularity these metrics are compiled and analyzed, how long a period the altmetrics should cover, the role of social media in altmetrics, the technical infrastructure necessary to exchange this data, and which new altmetrics will prove most valuable. The creation of altmetrics standards and best practices will facilitate the community trust in altmetrics, which will be a requirement for any broad-based acceptance, and will ensure that these altmetrics can be accurately compared and exchanged across publishers and platforms.”
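Carpenter's list of open questions maps naturally onto the shape of a data-exchange record. As a purely hypothetical sketch (none of these field names come from NISO), a standardized altmetrics record exchanged between providers might need to carry at least the following:

    # Hypothetical sketch of an interchange record implied by Carpenter's
    # list: what was measured, from which source, at what granularity, and
    # over what period. Field names are illustrative, not from NISO.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AltmetricEvent:
        work_id: str       # what is measured, e.g., a DOI
        source: str        # where the signal came from, e.g., "twitter"
        event_type: str    # kind of signal, e.g., "mention" or "bookmark"
        count: int         # events observed in the reporting window
        window_days: int   # the period the count covers
        collected_at: str  # ISO 8601 timestamp, for cross-provider comparison

    event = AltmetricEvent(
        work_id="doi:10.1000/example.123",  # hypothetical identifier
        source="twitter",
        event_type="mention",
        count=42,
        window_days=30,
        collected_at=datetime.now(timezone.utc).isoformat(),
    )
    print(event)

Agreeing on fields like these, and on how their values are computed, is what would let metrics "be accurately compared and exchanged across publishers and platforms," as Carpenter puts it.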

The first phase of the project will be to bring together “two groups of invited experts in altmetrics research, traditional publishing, bibliometrics, and faculty assessment for in-person discussions with the goal of identifying key altmetrics issues and those that can best be addressed through standards or recommended practices,” which will be followed by public hearings, according to the NISO announcement. In Phase 2, the “report summarizing this input will identify the specific areas where NISO should develop standards or recommended practices, which will be undertaken by a working group” assembled at that point. The process will be open and updates will be available through the NISO website or its Newsline newsletter.

Early Reactions From the Experts

At this early stage, gauging future outcomes is difficult, but here are some thoughts and reactions from a few of the key players in today’s evolving metrics movement.

"Altmetrics will enrich the way we understand and build reputation—as distinguished from quality, incidentally," according to Jean-Claude Guédon, scholarly communication proponent at the Université de Montréal. "This is a very positive development, as impact factors play a very perverse role in our present research system. Impact factors generate the conditions for competition among journals that have been selected in a certain way by a self-selecting group of people (in this case, the company Thomson Reuters). This competition has been unduly extended to individuals, and now to institutions and even countries. Furthermore, this competition really yields nothing more than the x best people in a group (assuming you accept the procedure), but does nothing to improve the quality of the whole group (which is what good management of scientific research should look for). Because it generates an intense competitive context, impact factors induce a tendency toward shortcuts and even cheating. Recent studies showing a positive correlation between the impact factors of journals and article retractions are very worrisome in this regard. Finally, impact factors are presented with unrealistic expectations as to their precision, have been ambivalently criticized by Garfield himself, and lead to an image of scientific objectivity that is really a fraud."

Finbar Galligan, Serials Review columnist on altmetrics and Swets' market specialist, believes that existing "research conducted around altmetrics has already shown close correlations with citations, which validates their value to the scholarly landscape, their potential to highlight new layers of impact that have previously been ignored, and their ability to actively contribute to the success of research in the future. This announcement is great news for the wider academic and scholarly community, who may be aware of altmetrics but are not yet active in using them, in that backing from NISO and the Sloan Foundation gives more credibility to the metrics and their potential. Standardization is likely to increase the accessibility and understanding of altmetrics, and will hasten their transition from being 'alternative' to 'established.'"

NISO’s Carpenter says that “the field of alternative metrics has been growing at a rapid pace for a few years now with a variety of established players now providing services in this space. However, each provider is gathering different data from different sources and calculating metrics in their own unique way. If the community is going to adopt these new metrics, administrators, granting organizations and others who rely on performance assessment need accepted, trustworthy and verifiable information. The best way to foster that culture of trust and thereby adoption is to have community consensus standards that ground at least a suite of agreed-upon metrics,” he says.

"Some have made the comment that it is too early for standardization efforts and that the community is still experimenting in this area," says Carpenter. "While that is true in some regards, realistically this project will take about two and a half years to complete. If alternative metrics haven't coalesced around some core infrastructure, definitions, and collection methods by the time this project is completed, their long-term adoption will be significantly compromised. NISO is a global organization with nearly 20% of our membership based outside of the U.S.," according to Carpenter. "We absolutely will be working with international partners and developers from the outset. It is conceivable that some of this work might be advanced in international communities such as ISO, as NISO serves as the conduit for the ISO technical subcommittee (ISO TC 46/SC 8) developing international standards in this space, although it is far too soon to be headed down that path."

PLOS’ Cameron Neylon, who was involved in early discussions on altmetrics with NISO and advised the organization on its grant proposal, notes that this initiative is “a positive one and will help us take both the research field and the infrastructure of altmetrics and article level metrics forward. It is always difficult to pick the moment when it is time to start thinking about developing standards in a nascent area such as this. We can’t wait until we need standards to start work on them, so I feel it is valuable to start the process of identifying the questions that we need to ask, thinking about the community we need to build around that, and identifying how best to take things forward together.”

Dario Taraborelli from the Wikimedia Foundation collaborated with Jason Priem on the development of altmetrics and sees the NISO effort as “a unique opportunity for different players in the field to lay the foundation of an open science of impact measurement and I am delighted to see NISO and the Sloan Foundation drive this effort. To achieve its vision of a broader, more granular and diverse measurement of impact, the altmetrics community needs to make transparency and openness its core values,” he says. “Without standards to define and measure ‘reuse’ or ‘informal citation’ we can hardly expect new impact metrics to be implemented by funding and evaluation organizations or to complement well-established citation-based indicators. We also need to create a future where research evaluation does not fall prey to proprietary indicators and where impact metrics can themselves be assessed for their merit.”

Martin Fenner, Hannover Medical School oncologist and technical lead for the PLOS Article-Level Metrics project, reports that "while we have standards and best practices for citations and usage stats, we have just started to think about these issues in the field of altmetrics. There is a lot of interest in altmetrics, and we have moved from the initial stage, where they were new and exciting, to the next stage, which is about integrating them into existing services and workflows and getting a better understanding of what they can and can't do, along with best practices." For Fenner, "The NISO initiative will be very important not only in developing best practices but also as a community-building effort for this still-small community to work together on important issues. Some people may feel that it is a bit early to talk about standardization, but they shouldn't forget that this initiative is intended to start the process, which we all know from similar projects will take several years before becoming anything like a formal standard."

Still, there are concerns about the emerging state of the field. Paul Groth, VU University Amsterdam computer scientist, advocates “approaches for dealing with large amounts of diverse contextualized knowledge with a particular focus on the web and e-Science applications.” He notes that “it is exciting to see the continued interest in altmetrics and their further development. However, altmetrics is still relatively new and I wonder if this call for standardization is too early. In its current state, experimentation and research still seem necessary. That said, I look forward to seeing how NISO proceeds.”

Stewart Wills, editorial director for web and new media at the American Association for the Advancement of Science's journal Science, explains that "as the information environment has changed, and as new forms of scientific output beyond traditional research papers have become important, there's been a drive to find new ways to measure the 'impact' of scientific output and articles beyond journal-centric citation metrics, to include measurements ranging from article usage on journal websites to downstream conversation in social media channels such as Twitter. But how do you actually assess and compare those different indices? How do you quantify the importance of something as ephemeral as a tweet relative to something like a citation? These aren't really easy questions, or even, as I see it, particularly well-posed ones."

Wills thinks “it’s great that NISO has grasped the nettle here and is going to work on this issue. NISO really has a great track record in developing standards for the publishing and information industries—cases in point being their recent work with NLM [the National Library of Medicine] to make the Journal Article Tag Suite (JATS) a NISO standard, and its work with NFAIS [the National Federation of Advanced Information Services] on recommendations for journal supplemental materials. The fact that NISO is looking at crafting some standards here seems like a major step forward for altmetrics, and toward the goal of making them no longer ‘alt’, if that makes any sense,” he says.

The Only Constant Is Change

If you haven't been to college for a while or don't work in a research environment, you may not be aware of all the changes taking place. In the past 2 decades, the number of nonprofit and for-profit institutions of higher education has grown dramatically. According to 2013 figures from the Department of Education, there are now 32,750 accredited postsecondary institutions and programs in the U.S. Twenty years ago, 75% of faculty and researchers held tenure-track positions. Today, due to economic factors, some estimate that 75% of faculty and researchers are on annual appointments, an arrangement that makes securing research funding and grants nearly impossible and that typically brings fewer benefits and less job security. For these people, a resume that documents their individual accomplishments, value, and performance is of growing importance.

“The true measure of scholarly communication is its impact on the scholarly community,” says Jeffrey Beall, University of Colorado Denver’s scholarly initiatives librarian, “not on society as a whole. I hope the standards to be developed take this into account. Otherwise, I envision a scenario where an article granting legitimacy to a pseudo-science like astrology could earn very high ALMs. Such a scenario would be counter-productive and harmful to science in the long term. The scientific community must not let the measurement of scientific impact deteriorate into a popularity contest.”

Efforts to understand the organization and operation of the scientific enterprise are not new. In The Structure of Scientific Revolutions (1962), Thomas Kuhn wrote about the nature of scientific communities, noting that they are complex systems, not just complicated networks, with many components (which need not be present in every organization) that change over time. To Kuhn, science was more a family of models, notable for both "formal and informal communication."

John Desmond Bernal, in The Social Function of Science (1939), stressed the social nature of science as an international community of networked professionals in which rewards and other aspects were shared and the community was held together by mutual trust and tradition. Today, with social media metrics, we may be able to better understand forms of interaction, sharing, and exploration that have been impossible to observe until now. Being able to study the research enterprise more closely is valuable not only for justifying impact but also for identifying benchmarks for excellence and for using that information to improve the work and effectiveness of research organizations.

Even when standardized concepts and methods for measuring research impact are in place, it won't be the end of the story. In looking at new measures of journalism's impact, Ben Colmery recently noted that "there is tech-based impact measurement already happening in the form of Web analytics, social mentions and related metrics. There are dashboards to let us watch it all in one place. But these measure impact on attention. The model and technology are still very nascent for measuring impact on behavior and policy beyond the website, the TV program, the mobile app." However, NISO's efforts represent an important step in the right direction.

Publishing consultant Joseph Esposito notes that "this is a good initiative," and that Todd Carpenter and NISO are in the right place for this discussion to take place. "The key thing in any investigation will be to define the terms," says Esposito. "What precisely is being measured? How granular should the measurements be? The complicating factor is that there is and will be a tendency for some participants in this discussion to try to define the terms such that they imply certain outcomes. Another issue is how dynamic the field is. There will need to be an allowance for changes in media and the way we measure it. Certainly, the world of new media cannot be frozen at this time. Ultimately, the real issue will be how altmetrics not only measures content and usage that is already being created but how the feedback loop works. How will these new measurements influence the kinds of content that get created? This is the kind of project that begins with measurement but winds up in the editorial department." Time will tell.


Nancy K. Herther is a research consultant and writer who recently retired from a 30-year career in academic libraries.
