Microarrays have set the stage for an explosion of large-scale expression data, driven by a diversity of genome sequencing projects. The technology has already demonstrated its applications in the analysis of model systems, such as the response of mammalian fibroblasts to serum and sporulation in yeast. Comparing data between multiple experiments run as a time series or under different conditions is not a trivial task. Although the analysis is challenging, it has the potential to answer some of the most interesting questions in the mining of gene expression patterns and function. To address these questions we have investigated standardization methods over multiple expression analysis experiments, covering systems from high-density microarrays (40,000 individual gene transcripts) to membrane-based applications (500 individual gene transcripts). Under the assumption that global changes in gene activity are negligible, we show that normalization over the entire set of gene expression values in a given profile (provided that profile is not biased towards a particular system) is more statistically robust than normalization against housekeeping gene expression values. We also show that there is no significant benefit to normalizing with a reduced subset of genes over a given range of expression. We have compared expression data derived from two different technological systems (glass slide and filter based); both systems have an intra-experimental distribution close to log-normal. We therefore normalize by mapping logged expression values within each experiment to a standard distribution with zero mean and unit variance. This transformation effectively minimizes both intra- and inter-experimental variance when analysing replicate experiment data.
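The normalization described above (log-transform each profile, then map it to zero mean and unit variance) can be sketched as follows. This is an illustrative implementation under the stated assumptions, not the study's actual code; the function name and example values are hypothetical.

```python
import numpy as np

def standardize_profile(raw_intensities):
    """Log-transform one experiment's expression values, then map them
    to zero mean and unit variance (a sketch of the normalization
    described above; names and data are illustrative only)."""
    logged = np.log(np.asarray(raw_intensities, dtype=float))
    return (logged - logged.mean()) / logged.std()

# Two hypothetical replicate profiles differing only by a global scale
# factor (e.g. labelling or exposure differences between experiments):
a = standardize_profile([120.0, 450.0, 80.0, 2300.0, 610.0])
b = standardize_profile([240.0, 900.0, 160.0, 4600.0, 1220.0])  # doubled

# A global scale factor only shifts the log values, so after
# standardization the two profiles coincide:
print(np.allclose(a, b))  # True
```

Because a multiplicative change in overall intensity becomes an additive shift after the log transform, subtracting the mean removes it; dividing by the standard deviation then places all experiments on a common scale, which is what makes profiles from different technologies directly comparable.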
These methods are currently being applied in the statistical analysis of differential expression among patient groups and in the analysis of model organisms subject to certain conditions.