In June, Nature Neuroscience introduced a new web-based manuscript tracking system that allows authors to submit their manuscripts online. This system, which will be adopted by all the Nature journals over the next few months, will replace our previous database and should lead to substantial improvements to the review process.

Under the new system, when authors submit a paper or when referees agree to review, they receive emails with encrypted links, allowing them to log into the system and view the relevant manuscripts. Once inside the system, they can set their own passwords and can return to the site (http://www.nature.com/neuro/esubmission/index.html) at any time via a link from the Nature Neuroscience homepage. Manuscripts can be submitted in a variety of text and graphic formats, all of which are converted into a single PDF. To ensure that files have been converted correctly, authors can view their PDFs before approving submission. Referees can view either the PDF or the original files, and can upload their comments for authors and editors to the same site, where they will be automatically appended to the manuscript record.

We expect the new system to bring many benefits. Most obviously, it will eliminate delivery time and mailroom delays, as well as the cost and inconvenience of printing and shipping multiple copies of each manuscript. Authors will be able to check the status of their manuscripts as often as they wish, and referees will be able to access manuscripts from any computer with a web connection.

Behind the submission and review modules lies an internal tracking system, which we hope will increase the efficiency of our editorial procedures. It generates task lists for each editor, allowing us to prioritize daily workloads—not a trivial consideration for a journal receiving 150 or more submissions each month. The system also assists editors in identifying related manuscripts and appropriate referees. We hope and expect that these changes will translate into better service for our authors and referees, and we welcome any comments on how the system can be improved.
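As a rough illustration of the kind of bookkeeping such a system automates, the short Python sketch below ranks a hypothetical task list by how long each item has been waiting and scores candidate referees by keyword overlap with a manuscript. Every name, data structure and number in it is invented for the example; it is not a description of the actual system.

    # Illustrative sketch only; all identifiers and data are hypothetical.

    tasks = [
        # (manuscript id, pending task, days waiting)
        ("NN-0412", "assign referees", 6),
        ("NN-0398", "make decision", 14),
        ("NN-0420", "initial screening", 2),
    ]

    # A simple priority list: oldest tasks first.
    for ms_id, task, days in sorted(tasks, key=lambda t: t[2], reverse=True):
        print(f"{ms_id}: {task} (waiting {days} days)")

    # Hypothetical referee expertise profiles, e.g. built from past assignments.
    referees = {
        "Referee A": {"synaptic plasticity", "hippocampus", "LTP"},
        "Referee B": {"visual cortex", "receptive fields"},
        "Referee C": {"hippocampus", "spatial memory"},
    }
    manuscript_keywords = {"hippocampus", "LTP", "learning"}

    # Score candidate referees by keyword overlap with the manuscript.
    scores = {name: len(kw & manuscript_keywords) for name, kw in referees.items()}
    for name, score in sorted(scores.items(), key=lambda x: x[1], reverse=True):
        print(f"{name}: {score} shared keywords")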

Many journals are now introducing electronic tracking systems, and the data they accumulate open up new opportunities for data mining. Some of the questions that such data can answer have immediate implications for editorial practice. For example, certain referees have reputations (at least among editors) for being fast or slow, stringent or lenient, and editors will now be able to verify these subjective impressions with objective data. Speed of response is important, of course, and we tend to avoid using referees whom we have found to be chronically slow in returning their reviews. It may also be useful to see 'voting records'; editors often give more weight to a positive recommendation if it comes from a referee who normally tends to be negative (or vice versa), and such judgments should become more reliable once a referee's full recommendation history can be examined. For new and untested referees, our normal policy is to 'calibrate' them against experienced referees wherever possible; the new database should allow us to do this more systematically, perhaps even identifying referees whose recommendations tend to be correlated (positively or negatively!) with others in the same field.
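By way of illustration, the sketch below computes the sort of statistics described above from a handful of invented review records: each referee's mean turnaround time, the fraction of favourable recommendations (a crude 'voting record'), and the agreement between two referees on manuscripts they both reviewed. The records and the simple favourable/unfavourable coding are assumptions made for the example.

    # Illustrative sketch only: referee statistics from hypothetical records.
    from collections import defaultdict
    from statistics import mean

    # (referee, manuscript id, days to return report, recommendation: 1 = favourable, 0 = unfavourable)
    reviews = [
        ("Referee A", "NN-0301", 12, 1),
        ("Referee A", "NN-0317", 30, 0),
        ("Referee B", "NN-0301", 45, 1),
        ("Referee B", "NN-0322", 50, 0),
        ("Referee C", "NN-0317", 10, 0),
        ("Referee C", "NN-0322", 14, 0),
    ]

    by_referee = defaultdict(list)
    for referee, ms, days, rec in reviews:
        by_referee[referee].append((ms, days, rec))

    # Per-referee speed and 'voting record'.
    for referee, recs in by_referee.items():
        print(referee,
              "mean days:", mean(d for _, d, _ in recs),
              "fraction favourable:", mean(r for _, _, r in recs))

    # Agreement between two referees on manuscripts they both reviewed -
    # a crude proxy for correlated recommendations.
    def agreement(r1, r2):
        votes1 = {ms: rec for ms, _, rec in by_referee[r1]}
        votes2 = {ms: rec for ms, _, rec in by_referee[r2]}
        shared = votes1.keys() & votes2.keys()
        return mean(votes1[ms] == votes2[ms] for ms in shared) if shared else None

    print("A vs C agreement:", agreement("Referee A", "Referee C"))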

In the longer term, it may be possible to use retrospective analysis to compare our own editorial decisions with later citation statistics. To the extent that the number of citations to a given paper reflects its importance to the field, it would be interesting to know, for instance, how well each decision (the initial screening that determines which papers are formally reviewed, and the final decision after external review) predicts subsequent citations. It might even be possible to identify referees (or indeed editors) with a track record of predicting 'winners'—in other words, people who tend to recommend acceptance of papers that subsequently turn out to be highly cited, and rejection of those that do not.
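A toy version of such a retrospective analysis might look like the sketch below, which compares mean citation counts across decisions and asks how often one referee's favourable recommendations picked papers that ended up above the median citation count. The outcome records, the citation window and the 'above the median' criterion are all invented for illustration; in practice, rejected papers would have to be matched to their eventual publication elsewhere.

    # Illustrative sketch only: decisions versus later citations, hypothetical data.
    from statistics import mean, median

    # (manuscript id, decision, referee recommendation: 1 = favourable, citations after 3 years)
    outcomes = [
        ("NN-0101", "accept", 1, 85),
        ("NN-0102", "accept", 1, 40),
        ("NN-0103", "reject", 1, 60),   # published elsewhere
        ("NN-0104", "reject", 0, 12),
        ("NN-0105", "reject", 0, 5),
    ]

    # Do accepted papers attract more citations than rejected ones?
    for decision in ("accept", "reject"):
        cites = [c for _, d, _, c in outcomes if d == decision]
        print(decision, "mean citations:", mean(cites))

    # A crude 'track record' for one referee: how often a favourable
    # recommendation picked a paper that ended up above the median citation count.
    med = median(c for _, _, _, c in outcomes)
    hits = [c > med for _, _, rec, c in outcomes if rec == 1]
    print("fraction of recommended papers above median citations:",
          mean(hits) if hits else None)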

Collecting such data will facilitate scientific evaluation of the peer review system itself, a subject that is still in its infancy. Publication in peer-reviewed journals is of course central to the modern scientific process, but although most researchers have strong opinions on the subject, there is surprisingly little quantitative information about how the review process works. This is now changing, however, and in recent years, peer review has itself become the subject of an emerging scientific literature1. As quantitative analysis of their performance becomes increasingly feasible, journals can expect to be held more accountable for the service they provide to the community. We welcome this trend, and would be pleased to receive any suggestions for ways in which our own new database might be harnessed for the cause of editorial self-improvement.