What a difference a word makes. For all new grant applications from 14 January, the US National Science Foundation (NSF) asks a principal investigator to list his or her research “products” rather than “publications” in the biographical sketch section. This means that, according to the NSF, a scientist's worth is not dependent solely on publications. Data sets, software and other non-traditional research products will count too.

There are more diverse research products now than ever before. Scientists are developing and releasing better tools to document their workflow, check each other's work and share information, from data repositories to post-publication discussion systems. As it gets easier to publish a wide variety of material online, it should also become easy to recognize the breadth of a scientist's intellectual contributions.

But one must evaluate whether each product has made an impact on its field — from a data set on beetle growth, for instance, to the solution to a colleague's research problem posted on a question-and-answer website. So scientists are developing and assessing alternative metrics, or 'altmetrics' — new ways to measure engagement with research output.

The NSF policy change comes at a time when around 1 in 40 scholars is active on Twitter1, more than 2 million researchers use the online reference-sharing tool Mendeley (see go.nature.com/x63cwe), and more than 25,000 blog entries have been written about peer-reviewed research papers and indexed on the Research Blogging platform2.

In the next five years, I believe that it will become routine to track — and to value — citations to an online lab notebook, contributions to a software library and bookmarks to data sets on content-sharing sites such as Pinterest and Delicious. In other words, it will become routine to value a wider range of metrics that suggest a research product has made a difference. For example, my colleagues and I have estimated that the data sets added to the US National Center for Biotechnology Information's Gene Expression Omnibus in 2007 have contributed to more than 1,000 papers3,4. Such attributions continue to accumulate for several years after data sets are first made publicly available.

In the long run, the NSF policy change will do much more than just reward an investigator who has authored a popular statistics package, for instance. It will change the game, because it will alter how scientists assess research impact.

The new NSF policy states: “Acceptable products must be citable and accessible including but not limited to publications, data sets, software, patents, and copyrights.” By contrast, previous policies allowed only “patents, copyrights and software systems” in addition to research publications in the biography section of a proposal, and considered their inclusion to be a substitute for the main task of listing research papers.

Still, the status quo is largely unchanged. Some types of NSF grant-renewal applications continue to request papers alone. Indeed, several funders — including the US National Institutes of Health, the Howard Hughes Medical Institute and the UK Medical Research Council — still explicitly ask for a list of research papers rather than products.

Even when applicants are allowed to include alternative products in grant applications, how will reviewers know whether they should be impressed? They might have a little time to watch a short YouTube video demonstrating a wet-lab technique, or to read a Google Plus post describing a computational algorithm. But what if the technique takes longer to review, or lies outside the reviewer's expertise? Existing evaluation mechanisms often fail for alternative products: a YouTube video, for example, has no journal title to use as a proxy for anticipated impact. It will, however, accumulate views, 'likes' on Facebook, Pinterest bookmarks and discussion in blogs.

Tracking trends

Many altmetrics have already been gathered for a range of research products. For example, the data repositories Dryad and figshare track download statistics (figshare is supported by Digital Science, which is owned by the same parent company as Nature). Some repositories, such as the Inter-university Consortium for Political and Social Research, provide anonymous demographic breakdowns of usage.

Specific tools have been built to aggregate altmetrics across a wide variety of content. Altmetric.com (also supported by Digital Science) reveals the impact of anything with a digital object identifier (DOI) or other standard identifier. It can find mentions of a data set in blog posts, tweets and mainstream media (see go.nature.com/yche8g). The non-profit organization ImpactStory (http://impactstory.org), of which I am a co-founder, tracks the impact of articles, data sets, software, blog posts, posters and lab websites by monitoring citations, blogs, tweets, download statistics and attributions in research articles, such as mentions within methods and acknowledgements5. For example, a data set on an outbreak of Escherichia coli has received 43 'stars' in the GitHub software repository, 18 tweets and two mentions in peer-reviewed articles (see go.nature.com/dnhdgh).
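To illustrate the kind of aggregation these services perform, here is a minimal sketch, in Python, of looking up the attention record that Altmetric.com attaches to a DOI through its public v1 API. The endpoint form, the JSON field names and the example DOI are assumptions on my part rather than details drawn from this article, and the service may require registration or an API key for anything beyond light use.

```python
# Minimal sketch: query Altmetric.com's public v1 API for a DOI.
# Endpoint and field names are assumptions; inspect the returned
# record to see what is actually available for a given DOI.
import json
import urllib.error
import urllib.request

def altmetric_summary(doi):
    """Return a small summary of Altmetric attention data, or None if untracked."""
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as resp:
            record = json.load(resp)
    except urllib.error.HTTPError:
        return None  # a 404 typically means Altmetric has no record for this DOI
    return {
        "score": record.get("score"),
        "tweets": record.get("cited_by_tweeters_count", 0),
        "blog_posts": record.get("cited_by_feeds_count", 0),
    }

if __name__ == "__main__":
    # Hypothetical example DOI; substitute any DOI of interest.
    print(altmetric_summary("10.1038/493159a"))
```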

Such altmetrics give a fuller picture of how research products have influenced conversation, thought and behaviour. Tracking them is likely to motivate more people to release alternative products — scientists say that the most important condition for sharing their data is ensuring that they receive proper credit for it6.

The shift to valuing broad research impact will be more rapid and smooth if more funders and institutions explicitly welcome evidence of impact. Scientists can speed the shift by publishing diverse research products in their natural form, rather than shoehorning everything into an article format, and by tracking and reporting their products' impact. When we, as scientists, build and use tools and infrastructure that support open dissemination of actionable, accessible and auditable metrics, we will be on our way to a more useful and nimble scholarly communication system.