My first experience of computing came in the early 1980s, when my brother bought a Sinclair ZX81, a primitive home computer that came equipped with 1 kilobyte of onboard memory and a flimsy touch-sensitive membrane keyboard that soon succumbed to our heavy-handed typing efforts. I found that by meticulously copying out a long series of commands in the BASIC programming language, I could create a very crude approximation of the popular arcade game Snake on our television screen. Even better, the program could be stored on a cassette tape and reloaded—a process that took only slightly longer than typing the whole thing out again, given that the machine would frequently crash under the sheer weight of information.

...human input remains essential to harness the benefit of these powerful tools...

Thirty years later, my typing technique has, sadly, seen little in the way of improvement. Computer technology, by contrast, has advanced beyond all recognition. Thanks to user-friendly interfaces such as the Mac and Windows operating systems, most computer users no longer have to concern themselves with the minutiae of software programming, and search engines such as Google provide instant access to information on an unprecedented scale. Nevertheless, advances in research areas such as brain imaging and genomics have led to the generation of massive data sets that push the current technology to its limits, and computational infrastructures must constantly evolve to keep pace.

The field of neuroinformatics was originally conceived as the application of information technology to the compilation, integration and analysis of large data sets in basic neuroscience research. In this special issue of Nature Reviews Neurology, experts in three diverse areas—brain imaging in neurodegenerative disease, glioma genomics, and neurointensive care—explore how the principles of neuroinformatics are now being extended to research and clinical practice in neurology.

Large-scale initiatives such as the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the NIH Pediatric Database are giving researchers—regardless of their geographical location—open access to clinical and imaging data from a vast range of sources. As well as providing a repository for such data, sophisticated 'e-infrastructures' such as the Laboratory of Neuro Imaging, CBRAIN and neuGRID offer an array of tools for visualization, image processing and statistical analysis. According to a Review by Giovanni Frisoni and colleagues in this focus issue, the ultimate aim of these initiatives is to create 'global virtual imaging laboratories' for the identification of imaging biomarkers in neurodegenerative disease.

As Gregory Riddick and Howard Fine discuss in another Review article, neuroinformatics approaches are also being applied to the study of brain tumor biology at genetic and molecular levels. Genomic data on glioma are being compiled in the Repository for Molecular Brain Neoplasia Data—the bioinformatics component of the Glioma Molecular Diagnostic Initiative (GMDI)—and The Cancer Genome Atlas, both of which also provide researchers with computational tools for data analysis. The hope is that such information might eventually be used to tailor treatment regimens to individual patients on the basis of the molecular signature of their tumor.

The findings of initiatives such as ADNI and GMDI have yet to be translated into routine clinical practice, but a Review by Claude Hemphill and colleagues provides an intriguing glimpse of how neuroinformatics might, in the near future, be directly applied at the bedside of patients with brain injury. The new field of neurocritical care bioinformatics entails multimodal monitoring of parameters such as brain oxygenation, cerebral blood flow and cerebral metabolism; when combined with bioinformatic analysis of the resulting data, this monitoring enables the clinician to build up a complete picture of the patient's overall physiological status. The information gleaned from this approach should contribute to the prevention and management of secondary brain injury—a major cause of morbidity and mortality after brain insults.

In their article, Hemphill et al. present a scenario that raises an important issue. They point out that when traditional paper charts are used for patient monitoring, obvious artifacts in the data—caused, for example, by moving a piece of monitoring equipment or opening a tap to drain cerebrospinal fluid—are usually 'cleaned' from the patient record by nursing staff. By contrast, an automated data-acquisition system might fail to recognize these values as artifactual, thereby providing a misleading impression of the patient's condition.
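
To illustrate the point, the short Python sketch below (my own, purely hypothetical example rather than any algorithm described by Hemphill and colleagues; the thresholds, units and names are invented) flags values that fall outside a plausible physiological range or that jump implausibly between consecutive samples, which is exactly the kind of excursion a nurse would strike from a paper chart; anything subtler would slip through unnoticed.

    # Purely illustrative: a naive artifact flag for a monitored signal.
    # Thresholds, units and variable names are hypothetical, not taken
    # from Hemphill et al.
    def flag_artifacts(samples, valid_range=(0.0, 60.0), max_step=20.0):
        """Mark samples that fall outside a plausible range or jump
        implausibly far from the last accepted value."""
        flags, last_good = [], None
        for value in samples:
            out_of_range = not (valid_range[0] <= value <= valid_range[1])
            sudden_jump = last_good is not None and abs(value - last_good) > max_step
            artifact = out_of_range or sudden_jump
            flags.append(artifact)
            if not artifact:
                last_good = value
        return flags

    # A transient spike (e.g. from handling the transducer) is flagged;
    # a gradual, genuine change is not.
    icp_mmHg = [12.0, 13.0, 14.0, 95.0, 14.5, 15.0]
    print(flag_artifacts(icp_mmHg))  # [False, False, False, True, False, False]

Even this crude filter embodies a human judgement about what counts as 'implausible', underscoring that such rules must be specified, and questioned, by the clinician rather than left to the machine.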

As computational methodology becomes incorporated into an increasing number of scientific and clinical disciplines, we must be careful not to overlook the importance of human involvement in data analysis. If investigators are to take full advantage of the enormous possibilities opened up by large-scale neuroinformatics initiatives, they must also be aware of the potential limitations of these systems, and ensure that they are asking the right research questions in the first place. Despite the enormous advances in computing and informatics since the rudimentary systems of just a few decades ago, active human input remains essential to harness the benefit of these powerful tools at the bench and the bedside.