Do researchers need extra incentives? The idea should seem strange to any young scientist — and a good many older ones too. Think of the theoretical physicist who will happily sit down at the weekend, not to read the newspapers but to play around with equations. Think of the cell biologist willing to put up with the burden of running and re-running painstaking experiments because it’s so difficult to make the set-up deliver — but whose instincts suggest that there is a nugget of insight at the end.

Now think of the head of an institute of, say, plant science. He or she will have no shortage of talented researchers wanting to understand fundamentals such as the precise mechanisms by which plant hormones act at various stages of plant development. But what criteria will be used to assure the university or management board that the head has delivered? And what incentives will encourage the broadening of research in directions that scientific insight left to its own devices might not prioritize but that might, nevertheless, serve humanity well?

Young scientists and lab heads alike live in a world in which the social contract for science is changing all around them. This is happening in ways they can influence, if they are both lucky and astute, and from which they could and should benefit. Astute people, of course, often make their own good luck — finding themselves in the right place at the right time by being alert to the way the world is moving and engaging more broadly with interests around their disciplines than less adventurous academics might.

Challenging criteria

National governments — important drivers of the social contract — have the power to do more than steer the direction of science through broad funding priorities. They can, through their funding agencies, seek to ensure at least two other outcomes: an appropriate assessment and reward of research achievement, and an appropriate degree of trust in the robustness of that achievement.

For the former, there is plenty of action, but plenty of debate too. The head of the plant institute will know that it will be all too easy for assessors to focus excessively on the number of papers published in high-impact journals. (Nature, proud of its own high- and low-cited papers, has long challenged that inclination.) It is ever more important that researcher assessments recognize work focused on key societal challenges. Examples across the spectrum could include exploring how established techniques can enhance plant resistance to disease; work on climate-change adaptation; studies to enhance access to fresh water; and the development of psychological treatments for post-traumatic stress. Such work may at times be scientifically incremental, but it has every bit as much claim to recognition.

Some major universities are recognizing the importance of such challenges by establishing their own programmes that may include the natural and social sciences and humanities (see page 7). For example, the National University of Singapore has a cluster of research groups working on the future of high-density cities; the University of Cambridge, UK, has several departments collaborating in public health; and Monash University in Melbourne, Australia, has established a programme on aspects of fresh water.


Such collaborations are not easy to make effective, and it is therefore doubly important that ‘the system’ finds new ways to recognize and reward their outputs. In this respect, an interesting case to follow in 2014 is the United Kingdom’s Higher Education Funding Council for England (HEFCE) and its dependent institutions. Its national research assessments have evolved over the years in ways that other agencies around the world have examined — although few, if any, have imitated the extreme extent to which the outcomes directly influence subsequent funding. But this year, for the first time, we shall see what the HEFCE has made of the thousands of statements of research impacts submitted last year by universities.

Researchers have often expressed alarm at the perceived tendency in such exercises to move from one extreme focus of assessment — journal impact factors — to another, in particular the anticipation of contributions to economic growth. This year will show whether the HEFCE can demonstrate the appropriate degree of breadth, nuance and critical assessment of such statements, while giving due recognition to the socially valuable work advocated above.

And what about research robustness — in other words, the trust that taxpayers can have that the appropriate standards of technical integrity, in short professionalism, are being followed in laboratories? Journals, not least Nature, are recognizing that they and their referees have a part to play in ensuring better standards of quantitative analysis, and of data and protocol transparency. Universities have a key role too — in providing more researcher training, and better lab stewardship of the quality of outputs, than is often the case. That is a tough challenge, because vice-chancellors have so little power over how their academics behave, and those academics are all too often engaged in a rat race for funds.

But 2014 should be a year in which funding agencies make clear their intentions in promoting rigorous lab standards, and there should be a concomitant pressure on universities and institutes to demonstrate quality assurance of lab practices and culture.

What is essential is that the motivation of young scientists to make a difference with their research is more broadly encouraged. They need strong mentoring and good exemplars, both in doing robust work and in contributing to trust in their research community. Those who want to follow strong creative imaginations in discovering how the world works should be given full rein. So, too, should those more interested in using their creativity directly to make the world a better place. Those who have the luck to be able to do both, and to be recognized for both, will be in a sweet spot indeed.