The term proteome, coined in 1994 as a linguistic equivalent to the concept of genome, is used to describe the complete set of proteins that is expressed, and modified following expression, by the entire genome in the lifetime of a cell. It is also used in a less universal sense to describe the complement of proteins expressed by a cell at any one time.
Automated chip-based technologies for analysing thousands of proteins simultaneously, analogous to the cDNA chip-based technologies that have facilitated transcriptomics, could provide a leap forward for proteomics research, whose progress is limited by the cumbersome multi-step methods currently available.
One of the first funding agencies to recognize the potential of protein analysis was the US National Science Foundation. In 1989 it agreed to support the start of a ten-year programme at the University of Washington in Seattle to create a centre in molecular biotechnology, specializing in the development of proteomics tools.
The two key steps in classical proteomics are the separation of the proteins in a sample derived from cells or tissues, and their subsequent identification. The most widely used separation method is two-dimensional (2D) gel electrophoresis, in which a carefully prepared mixture of proteins extracted from cells or tissues is applied to a polyacrylamide gel and resolved into discrete spots, each ideally containing a single protein species.
Enormous amounts of data are being amassed in fields as diverse as genomics and astronomy. If this information is to be used effectively to speed the pace of discovery, scientists need new ways of working. This requires investment in computers, new statistical tools, and a liberal approach to data sharing.