Peak learning of mass spectrometry imaging data using artificial neural networks

Mass spectrometry imaging (MSI) is an emerging technology that holds potential for improving biomarker discovery, metabolomics research, pharmaceutical applications and clinical diagnosis. Despite many solutions being developed, the large data size and high-dimensional nature of MSI, especially 3D datasets, still pose computational and memory complexities that hinder accurate identification of biologically relevant molecular patterns. Moreover, the subjectivity in the selection of parameters for conventional pre-processing approaches can lead to bias. Therefore, we assess if a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. The resulting msiPL method learns and visualizes the underlying non-linear spectral manifold, revealing biologically relevant clusters of tissue anatomy in a mouse kidney and tumor heterogeneity in human prostatectomy tissue, colorectal carcinoma, and a glioblastoma mouse model, with identification of underlying m/z peaks. The method is applied for the analysis of MSI datasets ranging from 3.3 to 78.9 GB, without prior pre-processing and peak picking, and acquired using different mass spectrometers at different centers.
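The core of msiPL is a fully connected variational autoencoder: an encoder maps each spectrum to the mean and log-variance of a latent Gaussian, a sample is drawn via the reparameterization trick, and a decoder reconstructs the spectrum. The following is a minimal, hypothetical numpy sketch of that forward pass, not the authors' implementation; all dimensions, weights, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Hypothetical linear encoder: maps each spectrum to the mean and
    # log-variance of a Gaussian over the latent space.
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); this keeps the
    # sampling step differentiable with respect to mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, w_dec):
    # Hypothetical linear decoder: reconstructs the spectrum from z.
    return z @ w_dec

# Toy dimensions: 8 spectra, 100 m/z bins, 5 latent variables.
x = rng.random((8, 100))
w_mu = rng.standard_normal((100, 5)) * 0.01
w_logvar = rng.standard_normal((100, 5)) * 0.01
w_dec = rng.standard_normal((5, 100)) * 0.01

mu, logvar = encode(x, w_mu, w_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, w_dec)
print(x_hat.shape)  # (8, 100)
```

In the actual method the encoder and decoder are multi-layer networks trained on the full m/z dimension; this sketch only illustrates the shape of the computation.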


Statistics
For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section.
n/a | Confirmed
- The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement
- A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly
- The statistical test(s) used AND whether they are one- or two-sided. Only common tests should be described solely by name; describe more complex techniques in the Methods section.
- A description of all covariates tested
- A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons
- A full description of the statistical parameters including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals)
- For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted

Data analysis
The machine learning model was implemented using the open-source platform of Python (3.6.4) with keras (2.1.5-tf), tensorflow (1.8.0), numpy (1.14.2), sklearn (0.19.1), scipy (1.0.0), matplotlib (3.0.2), kneed (0.6.0), and h5py (2.7.1). Data analysis was performed on our PC workstation (Intel Xeon 3.3 GHz, 512 GB RAM, 64-bit Windows, 2 NVIDIA TITAN Xp GPUs). Peak picking was performed using SCiLS Lab (2020a, Bruker Daltonics, Germany). The source code is available on GitHub via this link: https://github.com/wabdelmoula/msiPL.git

For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Research guidelines for submitting code & software for further information.
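The environment above could be pinned in a requirements file along the following lines; this is a sketch inferred from the versions listed, not a file shipped with the repository. The PyPI names (e.g. `scikit-learn` for sklearn, `kneed` for Kneed) are assumptions, and keras 2.1.5-tf is the tf.keras build bundled with tensorflow 1.8, so it may not need a separate pin.

```text
# requirements.txt (sketch; versions as reported above)
tensorflow==1.8.0        # includes tf.keras 2.1.5-tf (assumption)
numpy==1.14.2
scikit-learn==0.19.1
scipy==1.0.0
matplotlib==3.0.2
kneed==0.6.0
h5py==2.7.1
```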

Data
Policy information about availability of data
All manuscripts must include a data availability statement. This statement should provide the following information, where applicable:
- Accession codes, unique identifiers, or web links for publicly available datasets
- A list of figures that have associated raw data
- A description of any restrictions on data availability

Results of Figures 5-7

Field-specific reporting
Please select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.

Life sciences Behavioural & social sciences Ecological, evolutionary & environmental sciences
For a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf

Life sciences study design
All studies must disclose on these points even when the disclosure is negative.

Sample size
The computational model was trained on thousands of high-dimensional data points (i.e. spectra) sampled from an entire 2D tissue surface, and the sample sizes of the MSI data are as follows: 2D prostate data (12,716 spectra and 730,403 m/z bins), 3D PDX data (training: 3,570 spectra; testing: 11,263 spectra; 661,402 m/z bins), 3D mouse kidney data (training: 18,536 spectra; testing: 1,342,294 spectra; 7,671 m/z bins), 3D colorectal adenocarcinoma data (training: 5,694 spectra; testing: 142,350 spectra; 8,073 m/z bins), and 3D oral squamous cell carcinoma data (training: 12,875 spectra; testing: 815,683 spectra; 76,665 m/z bins). These sample sizes were sufficient to show learning stability and model convergence, which resulted in comparable performance in manifold learning and in minimizing the reconstruction error on both training and testing data (e.g. see the distribution of model convergence in Figures 2.a and 3.a, as well as in Supplementary Figures S2.a, S4.a, and S7.a).
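The training/testing partition described above for 3D data (train on the spectra of one 2D section, test on the withheld remainder of the 3D stack) can be sketched as follows. The array shapes, section labels, and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def split_by_section(spectra, section_ids, train_section=0):
    """Hold out the spectra of one 2D section for training and use
    the remaining sections of the 3D stack for testing (sketch)."""
    train_mask = section_ids == train_section
    return spectra[train_mask], spectra[~train_mask]

# Toy 3D dataset: 10 spectra spread over 3 serial sections, 6 m/z bins.
spectra = np.arange(60, dtype=float).reshape(10, 6)
section_ids = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])

x_train, x_test = split_by_section(spectra, section_ids, train_section=0)
print(x_train.shape, x_test.shape)  # (3, 6) (7, 6)
```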
Data exclusions No data were excluded.

Replication
The robustness of the computational model was tested on MSI data acquired at different centers, from different biological systems, and using different mass spectrometers equipped with different ionization methods and mass analyzers. The model stability was also tested using cross-validation analysis as explained in the main text.
Randomization The model was optimized on randomly shuffled spectral batches of 128 spectra/batch. In the case of 3D MSI data, the model was trained on a set of spectra acquired from a full 2D tissue section (the first section in the 3D stack was arbitrarily chosen) and testing was done on the withheld 3D MSI data.
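The random shuffling into batches of 128 spectra described above could look like the following sketch. The batch size matches the text; the generator function, seed, and toy data are illustrative assumptions.

```python
import numpy as np

def shuffled_batches(spectra, batch_size=128, seed=0):
    # Shuffle the spectra once, then yield fixed-size batches in the
    # shuffled order; a final smaller batch covers any remainder.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(spectra))
    for start in range(0, len(spectra), batch_size):
        yield spectra[order[start:start + batch_size]]

# Toy example: 300 spectra with 10 m/z bins each.
spectra = np.zeros((300, 10))
sizes = [len(batch) for batch in shuffled_batches(spectra)]
print(sizes)  # [128, 128, 44]
```

In practice the shuffling would be repeated each epoch during optimization; this sketch shows a single pass.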

Blinding
Not relevant to the machine learning. However, our neural network model is well regularized (batch normalization and Kullback-Leibler divergence) to stabilize the learning performance and avoid overfitting, and this was assessed by its performance on new, unseen data.
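The Kullback-Leibler regularizer mentioned above has a closed form when the encoder outputs a diagonal Gaussian and the prior is the standard normal: KL = -1/2 * sum(1 + log(sigma^2) - mu^2 - sigma^2). A minimal numpy sketch of that term, with toy values rather than the authors' code:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over
    # latent dimensions and averaged over the batch.
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1)
    return float(np.mean(kl))

# The KL term vanishes when the posterior matches the prior exactly...
mu = np.zeros((4, 5))
logvar = np.zeros((4, 5))
print(kl_to_standard_normal(mu, logvar))  # 0.0

# ...and grows as the posterior drifts away from N(0, I):
# with mu = 1 and sigma = 1, each latent dimension contributes 0.5.
mu_shifted = np.ones((4, 5))
print(kl_to_standard_normal(mu_shifted, logvar))  # 2.5
```

Penalizing this term pulls the latent distribution toward the prior, which is the regularizing effect the answer above refers to.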
Reporting for specific materials, systems and methods
We require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response.