When it comes to apportioning credit, science could learn from the movies. Since 1934, the Academy of Motion Picture Arts and Sciences, which awards the Oscars, has maintained an index to film credits, now called the Motion Picture Credits Database. For every film being considered for an award, the contribution of each person who worked on it, from hair stylists to the lead actress, is documented. Science, for all its focus on assessing and quantifying, has nothing like this.

The main currency in the world of science is authorship. Authorships enable scientists to accumulate citations, which have become entrenched as the 'true' measure of successful and important science. Authorships are key to getting grants and winning promotions, and are also the foundation for a recently proposed algorithm for predicting scientists' success — their future h-index (D. E. Acuna et al. Nature 489, 201–202; 2012).

Yet authorship is a dubious indicator. Sophisticated readers know that decoding an author list in terms of actual contribution to the scientific paper is close to impossible, unless you are involved yourself. Students find that they have to contribute much more to be listed as an author than more senior researchers do. Initially, students can feel exploited, but they may get used to the system. Eventually, if they choose an academic career, they may (more or less willingly) adopt the behaviour themselves. Political and personal relations also determine whether a contributor appears in the list of authors or under 'Acknowledgements'.

Some disciplines follow conventions in which list order reflects the relative importance or type of contribution. However, only experienced readers understand the conventions, which differ between disciplines and can be difficult to apply to papers with many authors.

For example, a six-page paper on the discovery of the Higgs boson at the Large Hadron Collider at CERN, Europe's particle-physics laboratory near Geneva, Switzerland, was followed by seven pages of author names, but readers without a deep understanding of the experiment's organization have no way of deducing the roles of the authors. They could not all have written the manuscript, and skilled people not listed probably also contributed to the experiments. Simplifying each person's contribution to a co-authorship results in an incomplete picture of their role in the work.

Scientific publishers are aware of the problems of authorship, and many journals (such as Nature) publish descriptions — often in broad terms — of the contributions of each author. However, these accounts are mainly intended to keep author lists free of people who have made no real contribution to the work. They provide little detail on what each author did. And they leave out people whose activities — fund-raising or data acquisition, for example — are not enough to qualify for authorship, but are still pivotal.

A better system could soon emerge. Already, databases of scientific authors such as ResearcherID.com, BioMedExperts.com and ResearchGate.net allow authors to interact, form interest groups and rate each other's publications. ResearchGate has even introduced a new metric based on a scientist's contributions and activity in the network.

As a natural next step, these online networks could give the authors of a paper an opportunity to publish detailed accounts of each person's contribution. With a mouse click, each contributor could verify what others contributed, resulting in merit lists validated by all participants. A similar, but paper-based, system has existed for decades in Denmark, where co-author declarations document what work PhD students contributed to the papers included in their theses. The rigorous rules for authorship, requiring substantial contributions to the design of the work, data gathering and writing, would no longer be relevant, because each contribution could be described and valued. Everyone who made the science possible would be properly acknowledged, and the word 'author' could regain its true meaning.
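The mutual-verification workflow described above can be sketched as a simple data structure. This is a hypothetical illustration, not any existing platform's design: the class and field names are invented, and the validation rule assumed here — a contribution counts as validated once every other listed participant has confirmed it — is one plausible reading of "merit lists validated by all participants":

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    contributor: str
    role: str                       # e.g. "data acquisition", "fund-raising"
    verified_by: set = field(default_factory=set)

@dataclass
class CreditList:
    paper: str
    contributions: list = field(default_factory=list)

    def add(self, contributor, role):
        """Record what one person did on the paper."""
        self.contributions.append(Contribution(contributor, role))

    def verify(self, verifier, contributor):
        """One participant confirms, 'with a mouse click', another's contribution."""
        for c in self.contributions:
            if c.contributor == contributor:
                c.verified_by.add(verifier)

    def validated(self):
        """Contributions confirmed by every other listed participant."""
        everyone = {c.contributor for c in self.contributions}
        return [c for c in self.contributions
                if (everyone - {c.contributor}) <= c.verified_by]
```

Under this sketch, a credit entry only surfaces once cross-validated — for a two-person paper, each entry needs the other person's confirmation before it appears in the validated merit list.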

In the highly competitive film industry, credit lists are the basis for the CVs of the diverse professionals needed to produce a blockbuster. A full contribution database would offer similar benefits to scientists. Contributors could generate updated, verified and detailed accounts of their scientific merits with a few clicks. Journals, funding agencies and employers would all come to rely on the system. Journals might replace the authors and acknowledgements sections in papers with online credit lists, funding agencies could use the system to generate validated CVs for the review process and employers could find the person with just the right skills and experience for the job.

The technical platform for such a database could emerge from the world of music, software development, film, journalism or any other creative industry. With the right technology, writing and approving entries in these networks could become everyday routine, just as 'liking' something on Facebook is today. Validated web-based information networks will allow documented skills — not just numbers of papers or citation-based indices — to count towards advancement in academic life. The transparency and distributed nature of such networks would create a strong incentive for honesty, and fundamentally improve the morale and culture of science.