
Defining and measuring success in some professions, especially those based on rankings of some sort, can be a relatively uncomplicated task. In sport the equation is simple: the team or individual with the most trophies or medals is the most successful; few remember those who finish second. In music the assessment is perhaps not quite as clear-cut, but it isn't difficult to make: although not always the case, critical success (awards and community recognition) and commercial success (sales of singles, albums and concert tickets) often go hand in hand. Science may be a long way from either sport or music, but it too has its own leagues, leaderboards, charts and awards. How useful are they, though, for helping us to determine scientific success?

The basic currency of scientific communication is the journal article, and so it seems sensible to use this as a starting point for evaluating success in a given area. At first glance, this is a particularly attractive approach because we can boil down an individual's publication record to cold hard numbers. For example, we can count how many papers someone has to their name and we can also count the number of times a specific article has been cited — or indeed how much an individual's complete body of work has been cited. Moreover, the rise of the internet has made finding these numbers a fairly trivial task. But can we make meaningful comparisons?

Take two scientists who have been working in the same field for similar lengths of time: is researcher A, with 500 papers, more successful than researcher B, who has 50 papers? Can we make such a decision based on just these figures, especially when this bare-bones analysis takes no account of the journals in which these papers were published and their perceived reputations, most commonly judged, rightly or wrongly, by their 'impact factors'? Furthermore, these numbers tell us nothing about how many times these articles have been cited by other scientists, who we presume will have diligently read the papers and used the information contained within them to help their own research. So, now imagine that researcher A's 500 papers have been cited a total of 5,000 times, but that researcher B's smaller output of 50 papers has been cited 10,000 times. It could be argued, based on this set of numbers, that researcher B's body of work, although smaller, has had a greater impact on other scientists, and so is more successful.
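As a minimal sketch of this comparison (the figures are the invented ones from the thought experiment above, and the data structure is mine, not drawn from any real citation database), the per-paper arithmetic looks like this:

```python
# Hypothetical publication records from the thought experiment above.
researchers = {
    "A": {"papers": 500, "citations": 5_000},
    "B": {"papers": 50, "citations": 10_000},
}

for name, record in researchers.items():
    per_paper = record["citations"] / record["papers"]
    print(f"Researcher {name}: {record['papers']} papers, "
          f"{record['citations']} citations, "
          f"{per_paper:.0f} citations per paper")

# Researcher A averages 10 citations per paper; researcher B averages 200,
# which is the sense in which B's smaller body of work has had more impact.
```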

There are various metrics used to try to quantify the scientific productivity and impact of a researcher based on their publication and citation records, the best known being the h-index proposed by Jorge E. Hirsch. With any of these measures, however, some caution must be exercised in reducing large bodies of work produced over many years to just a few numbers and then simply looking at which is the larger (or smaller). Although such a metric offers a starting point when comparing how successful scientists are, it should be no more than just that: a starting point.
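Hirsch's h-index is defined as the largest number h such that h of a researcher's papers have each been cited at least h times. As a minimal sketch of the calculation (the citation counts below are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts: a short list of highly cited papers can
# outscore a much longer list of rarely cited ones.
print(h_index([100, 80, 60, 10, 5, 5, 1]))  # 5
print(h_index([2] * 100))                   # 2
```

Note that two very different publication records can share the same h-index, which is one reason why such single numbers should remain only a starting point.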

The journal article often represents the end product of a particular line of research, but a lot of resources are needed to get to that point — typically laboratory equipment and, most importantly, students and postdocs. For this, you need money, and the more of it you have, the more students and postdocs you can fund and the better equipment you can buy. Therefore, perhaps another measure of success is the amount of funding that a researcher brings into a department. Along with publication record, teaching and departmental service, this is a crucial part of tenure decisions in the US academic system. It's hard to do research — and therefore be successful — with little or no money. That being said, it's all well and good bringing in a lot of money, but surely it's what you do with it that counts.

Perhaps one of the most obvious measures of individual success in science is recognition by your peers. This is typically formalized either by the awarding of prizes, or by admittance to a select organization such as a national academy. Such accolades are not without controversy, however, especially when the criteria by which selections are made are not transparent. But these decisions will always be subjective because they are not based on a cold hard metric; there isn't one! Occasionally, there is also the suspicion that other forces, such as political ones, may be involved.

There are certainly other measures of success, but they are harder to quantify. Perhaps one of the most important of these is education, the foundation on which scientific progress is made. Each new generation of scientists must be trained by the previous one, and the contributions an individual makes in preparing students to strike out into the academic world and forge their own careers are vital ones. If it weren't for our scientific 'parents' and 'grandparents' showing us the way, the road would probably be a much harder one to travel.

It is hard to measure how successful the individuals who devote their lives to science are, and such judgements are often part of a bureaucratic exercise for dividing up financial resources or awarding medals and scrolls. The success of science itself as a field of human endeavour is unquestionable, and surely that's the most important point.