Biologically informed deep neural network for prostate cancer discovery

The determination of molecular features that mediate clinically aggressive phenotypes in prostate cancer remains a major biological and clinical challenge [1,2]. Recent advances in interpretability of machine learning models as applied to biomedical problems may enable discovery and prediction in clinical cancer genomics [3–5]. Here we developed P-NET—a biologically informed deep learning model—to stratify patients with prostate cancer by treatment-resistance state and evaluate molecular drivers of treatment resistance for therapeutic targeting through complete model interpretability. We demonstrate that P-NET can predict cancer state using molecular data with a performance that is superior to other modelling approaches. Moreover, the biological interpretability within P-NET revealed established and novel molecularly altered candidates, such as MDM4 and FGFR1, which were implicated in predicting advanced disease and validated in vitro. Broadly, biologically informed fully interpretable neural networks enable preclinical discovery and clinical prediction in prostate cancer and may have general applicability across cancer types.


Statistics
For all statistical analyses, confirm that the following items are present in the figure legend, table legend, main text, or Methods section.
- The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement
- A statement on whether measurements were taken from distinct samples or whether the same sample was measured repeatedly
- The statistical test(s) used AND whether they are one- or two-sided. Only common tests should be described solely by name; describe more complex techniques in the Methods section.
- A description of all covariates tested
- A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons
- A full description of the statistical parameters, including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals)
- For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted

Software and code
Policy information about availability of computer code

Data collection
The data used in the study are available in the public domain. No special software was used to collect the data.

Data analysis
Custom code was developed as part of the analysis reported here. The full code is deposited on the code-sharing site GitHub, and the link is provided in the Methods section of the submitted paper (https://github.com/marakeby/pnet_prostate_paper). The library names and versions used in the implementation are listed in https://github.com/marakeby/pnet_prostate_paper/blob/master/environment.yml. ImageStudioLite was used for quantification of MDM4 depletion. GraphPad Prism 9.1.2 was used to determine IC50 values.

For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Research guidelines for submitting code & software for further information.

Data
Policy information about availability of data
All manuscripts must include a data availability statement. This statement should provide the following information, where applicable.

Field-specific reporting
Please select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.

Life sciences Behavioural & social sciences Ecological, evolutionary & environmental sciences
For a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf

Life sciences study design
All studies must disclose on these points even when the disclosure is negative.

Sample size
Typical sample size and power calculations do not apply to non-linear machine learning models. Machine learning methodologies generally improve as sample sizes increase, which makes prospective power analyses difficult to interpret, given that the underlying mathematical framework is non-linear relative to the parametric approaches used for power calculations. We explicitly studied the effect of training sample size on the performance of the developed model in predicting clinical outcomes on an unseen dataset, and compared this to other machine learning models.
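The training-size study described above can be sketched as a simple learning-curve experiment: train on increasing fractions of a fixed training split and score each model on the same held-out set. This is an illustrative sketch, not the authors' code; the synthetic data, logistic-regression model, and AUC metric stand in for the real molecular features, P-NET, and its evaluation.

```python
# Illustrative learning-curve sketch (not the authors' code): held-out
# performance as a function of training-set size, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the molecular feature matrix and labels.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

aucs = {}
for frac in (0.25, 0.5, 1.0):
    n = int(len(X_train) * frac)  # use only the first n training samples
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    aucs[frac] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(aucs)
```

In practice the same held-out test set is reused across all training sizes so that the scores are directly comparable.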

Replication
All analyses reported here were tested for reproducibility, and best practices of machine learning development were followed. A random seed was set for all experiments, and the source code for reproducing the results is deposited on GitHub. The machine learning training and testing process was repeated 5 times in a randomized 5-fold cross-validation setup. Knockdown experiments were repeated 3 times, with 3 replicates in each experiment. Drug treatment experiments were repeated 3 times.
Randomization
Best practices for randomizing samples for machine learning model development were followed. Samples were randomly assigned to training, testing, and validation groups. The performance metrics of all machine learning models were reported and compared on the testing group. The experiments were repeated in a randomized 5-fold cross-validation setup and the metrics compared across folds.
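The seeded, randomized 5-fold cross-validation setup described above can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic data, logistic-regression model, and ROC-AUC metric are assumptions standing in for the real cohort and P-NET; the point is that a fixed `random_state` makes the randomized fold assignment reproducible.

```python
# Minimal sketch (not the authors' code) of seeded, randomized
# 5-fold cross-validation using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# shuffle=True randomizes fold membership; the fixed seed makes the
# assignment identical on every run, as required for reproducibility.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="roc_auc")
print(scores.mean())
```

Repeating the whole procedure 5 times would simply loop this with five different seeds and compare the resulting score distributions.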

Blinding
Investigators were not blinded. Blinding during data collection was not needed because the data were collected from the public domain.