Abstract
The increase in the availability of data about how research is discussed, used, rated, recommended, saved and read online has allowed researchers to reconsider the mechanisms by which scholarship is evaluated. It is now possible to better track the influence of research beyond academia, though the measures by which we can do so are not yet mature enough to stand on their own. In this article, we examine a new class of data (commonly called “altmetrics”) and describe its benefits, limitations and recommendations for its use and interpretation in the context of research assessment. This article is published as part of a collection on the future of research assessment.
Introduction
To date, academia’s traditional framework for understanding influential scholarship has paid little attention to “real world” impacts. In the sciences, supplementing peer review by considering researchers’ expert-granted awards and their publications’ citation-based metrics has meant that their influence has only been measured among other researchers. In the social sciences, arts and humanities, research assessment has been similarly limited to tracking impact within academia, using the prestige of one’s publisher as a proxy for the importance of one’s work (again, as a supplement to peer review).
What of influence beyond the academy?
In recent years, the increased use of the social web by scholars and civilians alike to discuss research has meant that it is now possible to broaden our understanding of what makes for “influential” scholarship. By mining this scholarly Big Data, described popularly as “altmetrics”, we can start to understand influence beyond what has traditionally been recognized, seeing researchers’ marks on culture, policy, the economy and education.
The rise in available, diverse impact data has been met by an increased demand upon researchers to prove that the work they are pursuing is of relevance to “the real world”. Funding agencies, governments, and even university administrators are now tasking researchers with showcasing the value of what they do beyond the academy. While this trend has been decried as “injurious neoliberalism” by some (Gill, 2009), others have welcomed a change that better rewards them for pursuing research that has a direct effect upon society, and for doing outreach to the public (Terras, 2012; Howard, 2013; Piwowar, 2013).
In this article, I will examine the use of altmetrics in evaluating research.
Altmetrics complement the dominant understanding of influence
Mapping researchers’ influence on the Internet has been of concern since at least the late 1990s (Cronin et al., 1998). Altmetrics as a concept, however, is much younger, having been first articulated in 2010 by a group of scientists in the Altmetrics Manifesto (Priem et al., 2010). The authors point out that in using data from the social web, we can start to track and quantify interactions with scholarship that were previously invisible:
… that dog-eared (but uncited) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero—where we can see and count it. That hallway conversation about a recent finding has moved to blogs and social networks—now, we can listen in. The local genomics dataset has moved to an online repository—now, we can track it. This diverse group of activities forms a composite trace of impact far richer than any available before. We call the elements of this trace altmetrics (Priem et al., 2010).
No widely accepted formal definition for altmetrics exists. Thelwall and Kousha (2015a) have characterized altmetrics as being “derived from social media (for example, social bookmarks, comments, ratings, tweets)” and distinct from “web citations in digitised scholarly documents (for example, eprints, books, science blogs or clinical guidelines)”—that is, references to research within online sources that are formally “cited” in the manner that references appear in formally published, peer-reviewed literature. Holmberg (2014) has similarly defined altmetrics as pertaining specifically to social media. Moed (2015), on the other hand, casts a much larger net, defining altmetrics simply as “traces of the computerization of the research process”; the NISO Altmetrics Initiative (2016) has likewise offered a broad, all-encompassing definition in a recent report. At least one definition lies between these two extremes: Haustein (2016) sees altmetrics as overlapping with various types of informetrics (scientometrics, webometrics and bibliometrics, to name a few), sharing characteristics with each while remaining distinct from all of them.
There are a number of characteristics that apply across the board to altmetrics. Altmetrics are noted for being quick to accumulate, available for any research output format (that is, not just journal articles or books, but also datasets, software and presentations), and useful for understanding the use and uptake of scholarship among many audiences (Priem et al., 2010; Sud and Thelwall, 2013; Kousha and Thelwall, 2015a, b).
Altmetrics are also, by their very nature, diverse and ever-changing: as Moed (2015) points out, anything that can be text-mined from the Web is potentially a type of altmetric; as such, there is no canonical list of websites or data sources that comprise “proper” or “real” altmetrics. To date, researchers have studied the following data sources under the banner of altmetrics:
- Social media: Twitter, Facebook, Sina Weibo, LinkedIn, YouTube, Vimeo, Reddit, Pinterest, Wikipedia and Google+
- Expert reviews and recommendations: Faculty of 1,000 Prime, PubPeer and Publons
- Social bookmarking sites (scholarly and general interest): Mendeley, CiteULike, Connotea and Delicious
- Grey literature citations: references to research that appear in public policy documents, technical handbooks and clinical care guidelines
- Mainstream media
- Research and general interest blogs
However, as we examine below, the diversity of data represented under the nebulous umbrella term “altmetrics” means that various types of altmetrics data can mean very different things depending on who is using research, what they are doing with it, and what implications their use has for understanding that research’s influence upon the world.
Patterns exist in how research is used online, which can expose so-called “flavors of impact” for scholarship (Piwowar, 2012; Priem et al., 2012). One study (Priem et al., 2012) found that at least four distinct “flavors” exist for scientific publications: items that had been “Read, bookmarked, and shared” online; items that had been “Read and cited”; “expert picks”, which have been recommended on Faculty of 1,000 Prime, bookmarked on Mendeley, and otherwise used by scholars; and “popular hits”, which have been read often and shared on social media, but have not seen much attention from scholarly social networks. Other researchers have pointed to the ability to text-mine syllabi (Kousha and Thelwall, 2015b), book reviews (Zuccala et al., 2015; Kousha and Thelwall, 2015c) and Mendeley bookmarks (Mohammadi et al., 2016) as ways to track the “flavors” of educational impact, public popularity, and scholarly readership and intent to cite, respectively.
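One way such “flavors” can surface is by clustering research outputs according to their profiles across several indicators. The sketch below illustrates only that general idea; it does not reproduce the actual method of Priem et al. (2012), and the indicator choices, per-article counts and use of scikit-learn’s k-means implementation are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-article indicator profiles:
# columns = citations, Mendeley readers, tweets, F1000 recommendations
profiles = np.array([
    [40, 120,   5, 2],   # intended as "read and cited"
    [35, 140,   2, 3],
    [ 2, 300,  10, 0],   # intended as "read, bookmarked and shared"
    [ 1, 250,  15, 0],
    [ 5,  90,   4, 4],   # intended as "expert picks"
    [ 3, 110,   6, 5],
    [ 1,  20, 400, 0],   # intended as "popular hits"
    [ 0,  15, 350, 0],
])

# Scale each indicator so that no single metric dominates, then group
# the articles into four clusters, which play the role of "flavors"
scaled = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # one cluster assignment per article
```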
For altmetrics related to books and journal articles, a commonly asked question is, “Does this indicator correlate with citations?” Mostly, these metrics do not. Researchers have found only moderate correlations between citations and Mendeley readership (Li and Thelwall, 2012; Priem et al., 2012), Faculty of 1,000 Prime ratings (Priem et al., 2012; Waltman and Costas, 2014), and mentions of research in scholarly blogs (Thelwall et al., 2013; Shema et al., 2014). Weak and even negative correlations exist for indicators like tweets (Thelwall et al., 2013; Bornmann, 2015), Facebook posts (Priem et al., 2012; Ringelhan et al., 2015), and LinkedIn mentions (Thelwall et al., 2013).
This lack of correlation between most altmetrics and citations is a significant finding, but not for the reason that most assume. Some researchers have suggested that the lack of strong correlation shows us that altmetrics can help us uncover new “flavors of impact”, beyond the scholarly impact that we have traditionally been preoccupied with (Priem et al., 2012; Thelwall et al., 2013). However, other researchers have cautioned that far more research (such as source content analysis or creator interviews) is needed to fully understand the nature of attention and impact that various altmetrics represent (Sud and Thelwall, 2013; Bornmann, 2016).
The broad net cast by altmetrics also allows the contributions of software developers, data curators and other collaborators on the average research project to be better credited. Often, these collaborators are crucial to a research project but do not contribute to writing related articles or books. Thus, in a system focused on counting citations or reading publication bylines—one that rewards authorship rather than contributorship—these “non-traditional” researcher roles cannot be properly recognized. Were hard evidence of the value of all types of contributions accepted for professional advancement (the number of users of a piece of software (Singh Chawla, 2016), adaptations of a dataset (Peters et al., 2016) and so on), these researchers could get the credit they deserve.
Full of promise, but currently imperfect
The many benefits of altmetrics as a class of complementary impact metrics should not overshadow their current limitations, which have been identified by Wouters and Costas (2012) as including:
- They do not “meet crucial requirements for data quality and indicator construction” … meaning that certain “web based tools may create statistics and indicators on incorrect data, without being possible for the user to detect or correct the data properly”;
- In general, few altmetrics data sources normalize their data, making cross-discipline comparisons difficult; and
- Most tools are not transparent about data coverage (that is, what disciplines are included, what sources are indexed, or other such details about how data is gathered).
Moreover, in many ways altmetrics’ limitations mirror those of other quantitative impact metrics:
- They mean little in isolation.
- They are subject to disciplinary and other biases.
- They can be gamed.
- Though we use the shorthand of “impact metrics” to describe altmetrics, they actually measure attention, not true impact.
Much like citation counts, altmetrics cannot be properly interpreted if they are used in isolation. After all, do 13 Wikipedia mentions for a research article mean that an article is performing well or poorly? One has to use disciplinary and age-based comparisons to truly understand these numbers. Those 13 Wikipedia mentions for a biomedical research article published last year may turn out to be a lot, if the average article in that discipline, published in that same time frame, has only received 4 Wikipedia mentions. Citation-based indicators like the Source Normalized Impact per Paper and Scimago Journal Rank were created for a similar reason: to allow for discipline- and age-appropriate comparisons of research articles and journals (Falagas et al., 2008; Moed, 2011).
To provide the necessary context to altmetrics, several approaches have been used and recommended to date. Hicks et al. (2015) recommend the use of percentiles, in particular, as a means for providing such context. Percentiles are favored by altmetrics services like Altmetric and Impactstory (for example, “This article has a high score compared to outputs of the same age and source (97th percentile)”). Researchers have proposed normalized counts for Mendeley readership (Bornmann and Haunschild, 2016a; Haunschild and Bornmann, 2016) and Twitter mentions (Bornmann and Haunschild, 2016b), allowing cross-discipline and cross-time comparisons to be made. Sentiment analysis of altmetrics like tweets has also been proposed as a means to better understand what is actually being said about a piece of research on a wide scale (Friedrich et al., 2015). The use of “baskets of metrics” has been recommended by groups like the HEFCE Metrics Review panel (Wilsdon et al., 2015) and the Snowball Metrics initiative (Colledge, 2014), encouraging researchers to use many related, appropriate metrics at once to showcase particular “flavors of impact” for their scholarship.
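As a concrete (and deliberately minimal) illustration of the normalization idea, one can divide an output’s count by the mean count for outputs in the same discipline and publication year. This sketch is not the MNRS/MDNRS procedure of Bornmann and Haunschild; the disciplines and Mendeley reader counts below are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (output_id, discipline, year, mendeley_readers)
records = [
    ("A", "biomedicine", 2015, 13),
    ("B", "biomedicine", 2015, 4),
    ("C", "biomedicine", 2015, 7),
    ("D", "history", 2015, 2),
    ("E", "history", 2015, 1),
]

# Build a reference set of reader counts for each (discipline, year) pair
reference = defaultdict(list)
for _, discipline, year, readers in records:
    reference[(discipline, year)].append(readers)

def normalized_score(discipline, year, readers):
    """Reader count divided by the mean count for outputs in the same discipline and year."""
    return readers / mean(reference[(discipline, year)])

# Output A: 13 readers against a biomedicine/2015 mean of 8.0 -> 1.625
print(normalized_score("biomedicine", 2015, 13))
```

A score above 1 indicates more readership than the average output in the same reference set; a score below 1 indicates less.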
Another challenge of altmetrics lies in their disciplinary biases. Research in biomedical science, social science and the humanities has been shown to garner more online attention than scholarship from other disciplines (Haustein et al., 2015), making cross-disciplinary comparisons with raw metrics impossible without the use of percentiles or weighted indicators. Research has also shown that certain altmetrics currently reflect gender biases (Paul-Hus et al., 2015) and regional biases (Alperin, 2015).
Gaming is another concern for altmetrics, though to date there have not been any major cases of purposeful manipulation of altmetrics for personal gain. Perhaps the biggest altmetrics gaming danger lies in benevolent Twitter bots (accounts set up to tweet when new papers are added to a repository or when research on a particular topic is discussed in the media), which one study has shown account for upwards of 9% of all tweets related to papers submitted to arXiv in 2012 (Haustein et al., 2016). Gaming of pageviews and downloads has also been of concern to publishers and repositories (Gordon et al., 2015).
Similarly, legitimate self-promotion can have an effect upon an article’s altmetrics. As Adie (2013) explains, gaming exists on a spectrum along with other activities that can potentially showcase the value of research, both directly and indirectly. These activities break down into four general themes (Fig. 1):
- Legitimate Promotion (intent exists, value added): “Alice has a new paper out. She asks those grad students of hers who blog to write about it”.
- Spam (no intent, no value): Spam networks pick up legitimate posts at random from others and replicate them, hoping to fool content-based analysis systems into thinking that they are real users. This is by far the most common scenario we [Altmetric] see.
- Gaming (intent exists, no value): “Alice has a new paper out. She believes that it contains important information for diabetes patients and so signs up to a “100 retweets for $$$” service”.
- Incidental (no intent, value but not directly related to the article): “Just tried to access paper x but hit the paywall. Retweet if you hate all paywalls!” (Adie, 2013)
Figure 1: An illustration of the various types of gaming that can happen. Source: Adie (2013).
Though gaming is relatively rare, altmetrics services are taking steps to prevent the practice (Adie, 2013; Gordon et al., 2015).
By far, the biggest current limitation of altmetrics is that they are understood to measure attention, not impact. That is, altmetrics tend to comprise indicators that can show whether many people are reading or discussing research, but include few that can show whether research findings are being put to use and having a positive effect upon the world (Sugimoto, 2015). This is another trait that altmetrics have in common with citations, which alone are not always a good indicator of impactful research; after all, citations can occur for many reasons (Cronin, 1984; Bornmann and Daniel, 2008).
However, we cannot rule out the possibility that certain types of altmetrics data may, with further study, be found to be early indicators of “real world” or non-traditional scholarly impact. Beyond the use of altmetrics as signals for “attention” (itself a vague concept), little is known of the motivations that underpin the online actions that result in altmetrics (Sud and Thelwall, 2013; Bornmann, 2016). Bornmann and Haunschild (forthcoming), in exploring the applicability of the Leiden Manifesto principles to altmetrics, have pointed out that altmetrics are, in theory, better suited than citations to “measure performance against the research missions of the institution, group, or researcher”. However, as discussed above, more research in the way of content analyses and other investigative methods is needed to confirm the meaning of such altmetrics and to map those meanings to various impact types.
Though altmetrics currently share many of the limitations of citations—making them a poor quantitative means of understanding true research impact on their own—these drawbacks are not immutable. As these relatively young metrics mature—and as the services that provide them mature, as well—it is possible that we will start to encounter improved altmetrics, with context, clean (not gamed) data, and accurate impact measures baked in from the start.
Recommendations for using altmetrics
Though altmetrics currently share many of the same limitations as citation-based metrics, there are a number of ways that the use of altmetrics can improve upon the use of their bibliometric predecessors. Following are recommendations specifically for researchers on how to keep altmetrics from becoming just another set of numbers that academics need to boost. These recommendations draw upon and overlap with previous recommendations made on the use of altmetrics in evaluation scenarios (Colledge, 2014; Thelwall, 2014; Wilsdon et al., 2015). Researchers should keep these recommendations in mind when using altmetrics to demonstrate the attention to and impact of their work, and administrators and reviewers should also keep them in mind when interpreting research impact metrics.
Recommendation 1: Always use altmetric counts in context
As described above and recommended in the Leiden Manifesto (Hicks et al., 2015), the best way to contextualize any metric is to compare it against averages for research published in the same discipline and year, or even against research by authors of the same gender or nationality (given the biases that exist along all of these dimensions (Konkiel, 2016)). Some altmetrics services (namely, Altmetric and Impactstory) offer precomputed performance percentiles for all the altmetrics they provide, based on year and, in the case of Altmetric, on discipline as well. The Public Library of Science (PLOS) journals offer a similar feature for graphs of page views and downloads (Fig. 2).
Where such pre-calculated percentiles do not already exist, it is possible to collect and calculate these contextual numbers manually (Bornmann and Haunschild, 2016a, b). However, it is recommended that this task be undertaken with the help of a bibliometrics expert such as a librarian.
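For instance, a percentile rank for a single article can be computed against a reference set of outputs from the same discipline and publication year. The following is a minimal sketch using hypothetical Wikipedia mention counts; it is not the specific procedure used by Altmetric, Impactstory or Bornmann and Haunschild, and conventions differ (for example, counting ties at or below the value versus strictly below it).

```python
def percentile_rank(value, reference_counts):
    """Share (0-100) of reference outputs with a count at or below `value`."""
    if not reference_counts:
        raise ValueError("reference set is empty")
    at_or_below = sum(1 for c in reference_counts if c <= value)
    return 100.0 * at_or_below / len(reference_counts)

# Hypothetical Wikipedia mention counts for articles from the same
# discipline and publication year (the comparison set)
same_field_and_year = [0, 0, 1, 2, 2, 3, 4, 4, 5, 7, 9, 15]

# The article of interest has 13 Wikipedia mentions
print(f"Percentile rank: {percentile_rank(13, same_field_and_year):.0f}")  # -> 92
```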
Another important dimension of context is the consideration of the purposes for which altmetrics may be used to document the attention to or influence of research. The use of a metric (or “basket of metrics”) has different implications when used to make funding decisions (Thelwall et al., 2016) as opposed to promotion and tenure decisions (Konkiel et al., 2016) or national evaluation exercises (Thelwall, 2014), for example. Researchers and evaluators should always bear this in mind.
Recommendation 2: Use altmetrics to find compelling impact evidence
Though quantitative altmetrics cannot themselves currently serve as evidence of true impact, some metrics can signal that a lot of attention is being paid to research, and in turn that “pathways to impact” exist. Examples of such pathways can include media coverage for a book, which in turn can lead to downstream cultural impact, enriching the lives of the public; citations to a journal article in public policy documents, which can be read to discover whether governments are enacting laws based on research; or patient advocacy groups sharing a journal article on Twitter, which may help those affected by a disease to improve their health.
Such “pathways to impact” evidence lies in the qualitative data of which altmetrics are comprised. It is up to individuals to find those “gems” of impact evidence by using metrics to discover when attention is being paid to research in the first place.
Recommendation 3: Use “baskets of metrics,” rather than one number in isolation
No single number can summarize the many flavors of impact of research, nor can it even capture the various gradients that exist within a single flavor (Hicks et al., 2015). For example, in showcasing interest from clinicians, a public health researcher might include PubMed Central pageviews, tweets from practitioners, and references to an article in Wikipedia (which over half of all doctors reportedly consult when making diagnoses (Beck, 2014)) to showcase distinct uses of an article among a particular stakeholder group: readership, discussion and use in practice. Such diverse uses cannot be communicated in a single number. As such, it is up to researchers to create their own “baskets of metrics” to communicate impact, comprised of appropriate indicators of attention and influence among a specific audience (Wilsdon et al., 2015). Starting places for assembling these “baskets” can be found in the Snowball Metrics Recipe Book (Colledge, 2014) or by creating an Impactstory profile, which offers badges highlighting attention types (Fig. 3).
Recommendation 4: Advocate for altmetrics as opportunity, not evaluation
There is worry among academics that altmetrics may become just another evaluative mechanism: a set of required benchmarks imposed by administrators, another suite of numbers (like citations or the h-index) that one needs to worry about. Some warn that, by requiring altmetrics to be reported in evaluations, “academics and research support offices [will be pushed] towards wasting their time trying to attract tweets etc. to their work” (Thelwall, 2014). However, this does not have to be the case.
It is in the power of faculty councils, department chairs, grant review boards, and hiring and promotion committees—groups led by researchers themselves—to declare that altmetrics should only be used as a voluntary growth mechanism: a means for researchers to understand where they are succeeding and to share that attention and those pathways to impact with others. Thelwall (2014) has suggested that altmetrics can be “particularly valuable for social impact case studies but can also be useful to demonstrate educational impacts for research”. By insisting that altmetrics remain an option, not a requirement, in promotion and tenure dossier preparation guidelines, job applications, grant proposals and other professional advancement opportunities, researchers can retain control over the appropriate use of these metrics.
Recommendation 5: Evaluators should use and interpret altmetrics carefully
In cases where researchers find themselves in the evaluator’s chair—whether on grant review panels, search committees, or other such scenarios—there is an important principle to keep in mind with regard to interpreting altmetrics. Experts have recommended that metrics should “supplement, not supplant, expert judgement” (Hicks et al., 2015; Wilsdon et al., 2015). Thelwall (2014) adds, “[A]ssessors should use the alternative metrics to guide them to a starting position about the impact of the research but should make their own final judgement, taking into account the limitations of alternative metrics”.
Conclusion
Altmetrics are a new class of research impact and attention data that can help researchers understand their influence and share it with others, for a variety of purposes. Though altmetrics currently have limitations in their formulation and use, these relatively young metrics are still evolving and may soon be more accurate measures of true research impact than their bibliometric predecessors. Until that day, researchers considering the use of altmetrics should follow a number of recommendations that support their proper use and prevent abuse.
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Additional information
How to cite this article: Konkiel S (2016) Altmetrics: diversifying the understanding of influential scholarship. Palgrave Communications 2:16057 doi: 10.1057/palcomms.2016.57.
References
Adie E (2013) Gaming altmetrics, Altmetric.com Blog, http://www.altmetric.com/blog/gaming-altmetrics/, accessed 15 April 2016.
Alperin JP (2015) Geographic variation in social media metrics: An analysis of Latin American journal articles. Aslib Journal of Information Management; 67 (3): 289–304.
Beck J (2014) Doctors’ #1 source for healthcare information: Wikipedia, The Atlantic, http://www.theatlantic.com/health/archive/2014/03/doctors-1-source-for-healthcare-information-wikipedia/284206/, accessed 29 April 2016.
Bornmann L (2015) Alternative metrics in scientometrics: A meta-analysis of research into three altmetrics. Scientometrics; 103 (3): 1123–1144.
Bornmann L (2016) What do altmetrics counts mean? A plea for content analyses. Journal of the Association for Information Science and Technology; 67 (4): 1016–1017.
Bornmann L and Daniel H-D (2008) What do citation counts measure? A review of studies on citing behavior. Journal of Documentation; 64 (1): 45–80.
Bornmann L and Haunschild R (2016a) Normalization of Mendeley reader impact on the reader-and paper-side: A comparison of the mean discipline normalized reader score (MDNRS) with the mean normalized reader score (MNRS) and bare reader counts. Journal of Informetrics; 10 (3): 776–788.
Bornmann L and Haunschild R (2016b) How to normalize Twitter counts? A first attempt based on journals in the Twitter Index. Scientometrics; 107 (3): 1405–1422.
Bornmann L and Haunschild R (forthcoming) To what extent does the Leiden Manifesto also apply to altmetrics? A discussion of the manifesto against the background of research into altmetrics. Online Information Review. Preprint accessed via Figshare, http://doi.org/10.6084/m9.figshare.1464981.v2, accessed 15 April 2016.
Colledge L (2014) Snowball Metrics Recipe Book. Snowball Metrics Program Partners: Amsterdam, the Netherlands.
Cronin B (1984) The Citation Process: The Role and Significance of Citations in Scientific Communication. Taylor Graham: London, pp 1–103.
Cronin B, Snyder HW, Rosenbaum H, Martinson A and Callahan E (1998) Invoked on the web. Journal of the American Society for Information Science; 49 (14): 1319–1328.
Falagas ME, Kouranos VD, Arencibia-Jorge R and Karageorgopoulos DE (2008) Comparison of SCImago journal rank indicator with journal impact factor. FASEB Journal: Official Publication of the Federation of American Societies for Experimental Biology; 22 (8): 2623–2628.
Friedrich N, Bowman TD, Stock WG and Haustein S (2015) Adapting sentiment analysis for tweets linking to scientific papers. Paper presented at the 15th International Society of Scientometrics and Informetrics Conference (ISSI 2015), http://arxiv.org/abs/1507.01967, accessed 15 April 2016.
Gill R (2009) Breaking the silence: The hidden injuries of neo-liberal academia. In: Gill R and Ryan-Flood R (eds). Secrecy and Silence in the Research Process: Feminist Reflections. Routledge: New York.
Gordon G, Lin J, Cave R and Dandrea R (2015) The question of data integrity in article-level metrics. PLoS Biology; 13 (8): e1002161.
Haunschild R and Bornmann L (2016) Normalization of Mendeley reader counts for impact assessment. Journal of Informetrics; 10 (1): 62–73.
Haustein S (2016) Exploring the meaning of altmetrics. Paper presented at the Force11 conference, http://doi.org/10.6084/m9.figshare.3180367.v1, accessed 18 April 2016.
Haustein S, Bowman TD, Holmberg K, Tsou A, Sugimoto CR and Larivière V (2016) Tweets as impact indicators: Examining the implications of automated ‘bot’ accounts on Twitter. Journal of the Association for Information Science and Technology; 67 (1): 232–238.
Haustein S, Costas R and Larivière V (2015) Characterizing social media metrics of scholarly papers: The effect of document properties and collaboration patterns. PloS One; 10 (3): e0120495.
Hicks D, Wouters P, Waltman L, de Rijcke S and Rafols I (2015) Bibliometrics: The Leiden Manifesto for research metrics. Nature; 520 (7548): 429–431.
Holmberg K (2014) “The meaning of altmetrics”, Proceedings of the IATUL Conferences, http://docs.lib.purdue.edu/iatul/2014/altmetrics/1/, accessed 28 June 2016.
Howard J (2013) Rise of “altmetrics” revives questions about how to measure impact of research, The Chronicle of Higher Education, 3 June.
Konkiel S (2016) Research evaluation’s gender problem—and some suggestions for fixing It, Digital Science—Perspectives, 7 June.
Konkiel S, Sugimoto CR and Williams S (2016) The use of Altmetrics in promotion and tenure, Educause Review, March/April, pp 54–55.
Kousha K and Thelwall M (2015a) Web indicators for research evaluation. Part 3: Books and non standard outputs. El Profesional de La Información; 24 (6): 724–736.
Kousha K and Thelwall M (2015b) An automatic method for assessing the teaching impact of books from online academic syllabi, Journal of the Association for Information Science and Technology, http://doi.org/10.1002/asi.23542.
Kousha K and Thelwall M (2015c) Alternative metrics for book impact assessment: Can Choice reviews be a useful source? Proceedings of 15th international conference on scientometrics and informetrics, pp 59–70.
Li X and Thelwall M (2012) F1000, Mendeley and traditional bibliometric indicators. Proceedings of 17th International Conference On Science and Technology Indicators; pp 542–551.
Moed HF (2011) The source normalized impact per paper is a valid and sophisticated indicator of journal citation impact. Journal of the American Society for Information Science and Technology; 62 (1): 211–213.
Moed HF (2015) Altmetrics as traces of the computerization of the research process. arXiv [cs.DL], 17 October, http://arxiv.org/abs/1510.05131, accessed 15 April 2016.
Mohammadi E, Thelwall M and Kousha K (2016) Can Mendeley bookmarks reflect readership? A survey of user motivations. Journal of the Association for Information Science and Technology; 67 (5): 1198–1209.
NISO Altmetrics Initiative Working Group A. (2016) Altmetrics definitions and use cases, draft for public comment, National Information Standards Organization (NISO), http://www.niso.org/apps/group_public/download.php/16268/NISO%20RP-25-201x-1,%20Altmetrics%20Definitions%20and%20Use%20Cases%20-%20draft%20for%20public%20comment.pdf.
Paul-Hus A, Sugimoto CR, Haustein S and Larivière V (2015) Is there a gender gap in social media metrics? Proceedings of ISSI 2015-15th International conference of the international society for scientometrics and informetrics, pp 37–45.
Peters I, Kraker P, Lex E, Gumpenberger C and Gorraiz J (2016) Research data explored: An extended analysis of citations and altmetrics. Scientometrics; 107 (2): 723–744.
Piwowar H (2012) 31 Flavors of research impact through #altmetrics. Research Remix, 31 January.
Piwowar HA (2013) No more waiting! Tools that work today to reveal dataset use, Research Data Access & Preservation Summit, Baltimore, MD, http://www.slideshare.net/asist_org/rdap13-piwowar-tools-that-work-today-to-reveal-dataset-use, accessed 15 April 2016.
Priem J, Piwowar HA and Hemminger BM (2012) Altmetrics in the wild: Using social media to explore scholarly impact. Digital Libraries, 20 March, http://arxiv.org/abs/1203.4745, accessed 15 April 2016.
Priem J, Taraborelli D, Groth P and Neylon C (2010) Altmetrics: a manifesto, http://altmetrics.org/manifesto/, accessed 15 April 2016.
Ringelhan S, Wollersheim J and Welpe IM (2015) I like, I cite? Do facebook likes predict the impact of scientific work? PloS One; 10 (8): e0134389.
Shema H, Bar-Ilan J and Thelwall M (2014) Do blog citations correlate with a higher number of future citations? Research blogs as a potential source for alternative metrics. Journal of the Association for Information Science and Technology; 65 (5): 1018–1027.
Singh Chawla D (2016) The unsung heroes of scientific software. Nature; 529 (7584): 115–116.
Sud P and Thelwall M (2013) Evaluating altmetrics. Scientometrics; 98 (2): 1131–1143.
Sugimoto C (2015) Attention is not impact and other challenges for altmetrics, Wiley Exchange, 24 June, https://hub.wiley.com/community/exchanges/discover/blog/2015/06/23/attention-is-not-impact-and-other-challenges-for-altmetrics.
Terras M (2012) Is blogging and tweeting about research papers worth it? The Verdict, Melissa Terras—Adventures in Digital Cultural Heritage, 3 April, https://melissaterras.org/2012/04/03/is-blogging-and-tweeting-about-research-papers-worth-it-the-verdict/, accessed 29 April 2016.
Thelwall M (2014) Alternative metrics in the future UK Research Excellence Framework, Altmetrics for Evaluations, http://altmetrics.blogspot.com/2014/08/alternative-metrics-in-future-uk.html, accessed 28 June 2016.
Thelwall M, Haustein S, Larivière V and Sugimoto CR (2013) Do altmetrics work? Twitter and ten other social web services. PloS One; 8 (5): e64841.
Thelwall M and Kousha K (2015a) Web indicators for research evaluation. Part 1: Citations and links to academic articles from the Web. El Profesional de La Información; EPI SCP 24 (5): 587–606.
Thelwall M and Kousha K (2015b) Web indicators for research evaluation. Part 2: Social media metrics. El Profesional de La Información; 24 (5): 607–620.
Thelwall M, Kousha K, Dinsmore A and Dolby K (2016) Alternative metric indicators for funding scheme evaluations. Aslib Journal of Information Management; 68 (1): 2–18.
Waltman L and Costas R (2014) F1000 recommendations as a potential new data source for research evaluation: A comparison with citations. Journal of the Association for Information Science and Technology; 65 (3): 433–445.
Wilsdon J et al. (2015) The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. HEFCE: London, p 176.
Wouters P and Costas R (2012) Users, Narcissism and Control—Tracking the Impact of Scholarly Publications in the 21st Century. Proceedings of the 17th International Conference on Science and Technology Indicators, Montreal, Quebec, Canada, pp 487–497.
Zuccala AA, Verleysen F, Cornacchia R and Engels T (2015) Altmetrics for the humanities: Comparing Goodreads reader ratings with citations to history books. Aslib Journal of Information Management; 67 (3): 320–336.
Ethics declarations
Competing interests
The author is an employee of Altmetric LLP and has been employed at Impactstory within the last 5 years.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/