Bayesian Multi-Plate High-Throughput Screening of Compounds

High-throughput screening of compounds (chemicals) is an essential part of drug discovery, involving thousands to millions of compounds, with the purpose of identifying candidate hits. Most statistical tools, including the industry-standard B-score method, work on individual compound plates and do not exploit cross-plate correlation or statistical strength among plates. We present a new statistical framework for high-throughput screening of compounds based on Bayesian nonparametric modeling. The proposed approach is able to identify candidate hits from multiple plates simultaneously, sharing statistical strength among plates and providing more robust estimates of compound activity. It can flexibly accommodate arbitrary distributions of compound activities and is applicable to any plate geometry. The algorithm provides a principled statistical approach for hit identification and false discovery rate control. Experiments demonstrate significant improvements in hit identification sensitivity and specificity over the B-score and R-score methods, which are highly sensitive to threshold choice. These improvements are maintained at low hit rates. The framework is implemented as an efficient R extension package BHTSpack and is suitable for large-scale data sets.


Introduction
High-throughput screening (HTS) of compounds is a critical step in drug discovery [7]. This typically involves the screening of thousands to millions of candidate compounds (chemicals). The objective is to accurately identify which compounds are candidate active compounds (hits). Those compounds will then undergo a secondary screen. A flow chart of a typical HTS process is shown in Fig. 1. The first step in the process, called primary screening, is a comprehensive scan of tens of thousands of compounds with the objective of identifying primary hits. Most often the compounds are run in singlets, although a more desirable experimental design would involve replicates. Computational and statistical tools involved in the primary screening step need to be accurate and also efficient, due to the large number of compounds to be screened.
Two types of error can occur in the primary screening process, namely false positive (FP) and false negative (FN) errors. While technological improvements and advances in experimental design and accuracy can help mitigate these two types of error, they by themselves cannot sufficiently improve the quality of the HTS process in general and the primary screening step in particular. There is a need for comprehensive statistical and computational data analysis systems that can characterize HTS data accurately and efficiently, incorporating available prior information and borrowing information across plates.

HTS Data Structure
Compounds are evaluated on 96-well or 384-well plates. In the example shown in Fig. 2, four 96-well plates are used to tile a 384-well plate. For each 96-well plate, the first and last columns typically contain only control wells, and thus a 96-well plate only contains 80 test compounds. We assume that each well measures a different compound activity (i.e., no replicates) and has the same concentration of compound. It is also assumed that compounds are distributed randomly within the plate. The control wells are located in the first and last two columns of the 384-well plate. The 384-well plate design depicted in Fig. 2 inherently creates cross-plate correlation among the individual 96-well plates. This type of correlation is not accounted for by simple HTS systems working on individual 96-well plates.
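To make the tiling concrete, the sketch below recovers the four 96-well quadrant plates from a 384-well readout matrix. It assumes the common interleaved quadrant layout (quadrants alternate by row and column parity); the helper name is illustrative and actual screening robots may map quadrants differently.

```python
import numpy as np

def split_quadrants(plate384):
    # Split a 16 x 24 (384-well) matrix into four 8 x 12 (96-well)
    # quadrant plates, assuming interleaved tiling: quadrant (r, c)
    # occupies rows of parity r and columns of parity c.
    assert plate384.shape == (16, 24)
    return [plate384[r::2, c::2] for r in (0, 1) for c in (0, 1)]
```

Any row or column artifact on the 384-well plate then propagates to all four quadrant plates at once, which is exactly the cross-plate correlation described above.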

Fig. 3 provides a more detailed view of a 96-well plate. Ideally, controls should be placed randomly throughout the plate to mitigate edge effects. However, the standard practice is to place the controls in the first and last columns and the compounds in the inner columns.

Methods for High Throughput Screening of Compounds
High-throughput screening statistical practice [2,7] has focused on simple methods such as the B-score, the Z-score and the normalized percent inhibition (NPI) for measuring compound activity and identifying potential candidate hits. These methods transform the compound raw value into a so-called normalized value, which can then be used directly to assess compound activity. Each of these methods has advantages and disadvantages, and they differ in terms of how controls are used. The B-score and the Z-score do not use controls in the normalization process, while the NPI makes use of both positive and negative controls.
The Z-score and the NPI work on a per-compound basis. The NPI, which has a biologically plausible interpretation as the percent activity relative to an established positive control, is defined as NPI = 100 × (z_n − z)/(z_n − z_p), where z is the compound raw value and z_n and z_p are the negative and positive control raw values, respectively. The Z-score is defined as Z = (z − µ_z)/σ_z, where µ_z and σ_z are the mean and standard deviation, respectively, of all compounds in the plate. The B-score works on a per-plate basis in the sense that the plate geometry has an effect on the computed score. The B-score is defined as B = r_z/MAD_z, where r_z is the matrix of residuals obtained after a median polish fitting procedure and MAD_z is its median absolute deviation. The NPI, Z-score and B-score all have significant limitations. The NPI is very sensitive to edge effects, since it uses the negative and positive control wells that are typically located in the outer columns.
The Z-score is susceptible to outliers and assumes normally distributed compound readout values, an implausible assumption in many screening contexts (Fig. 4). Although the B-score takes into account systematic row and column plate effects and is the method of choice [7] in many cases, it requires an arbitrary threshold to identify hits and tends to miss important compounds with minimal or moderate activity. Critically, all these methods treat each plate independently. In some cases, systematic experimental and plate design effects may induce correlation among groups of plates. It is therefore desirable to have a system that works on multiple plates simultaneously.
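For concreteness, the Z-score and B-score computations can be sketched as follows. This is a minimal numpy-only illustration: the fixed median-polish iteration count and the 1.4826 MAD consistency factor are our choices, not prescribed by [7].

```python
import numpy as np

def z_score(plate):
    # Z = (z - mu_z) / sigma_z over all compound wells in the plate
    return (plate - plate.mean()) / plate.std(ddof=1)

def median_polish(plate, n_iter=20):
    # Tukey median polish: alternately sweep out row and column medians,
    # leaving the residual matrix r_z
    r = np.asarray(plate, dtype=float).copy()
    for _ in range(n_iter):
        r -= np.median(r, axis=1, keepdims=True)  # remove row effects
        r -= np.median(r, axis=0, keepdims=True)  # remove column effects
    return r

def b_score(plate):
    # B = r_z / MAD_z; 1.4826 is the usual Gaussian consistency factor
    r = median_polish(plate)
    mad = 1.4826 * np.median(np.abs(r - np.median(r)))
    return r / mad
```

On a plate whose readout is a pure sum of row and column effects, the median polish residuals vanish, which is why the B-score is robust to such systematic artifacts.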
A Bayesian approach for hit selection in RNAi screens was proposed in [17]. The model imposes separate Gaussian priors on active, inactive and inhibition siRNAs. Inference is based on hypothesis testing via posterior distributions. The posterior distributions are then directly used to control the false discovery rate (FDR) [9]. That method is parametric, and although it may be reasonable in some cases to model the siRNAs as normally distributed, many data in practice, and particularly HTS data, are not Gaussian (as shown in Fig. 4). Additionally, the priors of that method incorporate common information that is pooled from all plates. This type of information sharing is fixed and differs from the multi-task sharing mechanism in machine learning, where different groups of data iteratively and selectively share information via a global layer [15]. In this paper we develop a new system for HTS of compounds based on Bayesian statistics. The nonparametric method does not use controls and is capable of characterizing HTS data that are not necessarily Gaussian distributed. It can handle multiple plates simultaneously and is able to selectively share statistical strength among plates. This selective sharing mechanism, which is updated at each sampler iteration, is important for discovering systematic experimental effects that propagate differently among plates. We develop an efficient Markov chain Monte Carlo (MCMC) sampler for estimating the compound readout posteriors. Based on posterior probabilities specifying whether a compound is active or not, it is possible to determine probabilistic significance and control the FDR.

Statistical Model
Dirichlet process Gaussian mixtures (DPGM) [1,3] constitute a powerful class of models that can nonparametrically describe a wide range of distributions encountered in practice. The simplicity of DPGM and their ease of implementation have made them a preferable choice in many applications, as well as building blocks of more complex models and systems. In the DPGM framework, the Dirichlet process (DP) [4,12] models the mixing proportions of the Gaussian components. The hierarchical Dirichlet process (HDP) [15] is particularly suitable for modeling multi-task problems in machine learning. These problems are closely analogous to our multi-plate HTS of compounds scenario.
Our framework deploys two HDPs to characterize the active and inactive components. In what follows we approximate the DP via the finite stick-breaking representation [5]. Let m ∈ {1, . . ., M} denote the plate index, where M is the total number of plates. Let i ∈ {1, . . ., n_m} denote the compound well index within a plate, h ∈ {1, . . ., H} denote the DP mixture cluster index within a plate and k ∈ {1, . . ., K} denote the global DP component index. Let superscripts (1) and (0) refer to active and inactive compounds, respectively. Motivated in part by [6], we propose the following hierarchical fully Bayesian HTS (BHTS) model:

z_mi | b_mi ~ b_mi K(z_mi; θ^(1)_mi) + (1 − b_mi) K(z_mi; θ^(0)_mi),   b_mi ~ Bernoulli(π),   π ~ Beta(a_π, b_π),
θ^(s)_mi ~ G^(s)_m,   G^(s)_m = Σ_{h=1..H} w^(s)_mh δ_{ψ^(s)_mh},   (w^(s)_m1, . . ., w^(s)_mH) ~ DP(α^(s), H),
ψ^(s)_mh ~ G^(s)_0,   G^(s)_0 = Σ_{k=1..K} λ^(s)_k δ_{θ^(s)_k},   (λ^(s)_1, . . ., λ^(s)_K) ~ DP(τ^(s), K),
θ^(1)_k = (µ_1k, σ²_1k),   µ_1k ~ N(µ_10, σ²_10),   σ²_1k ~ IG(a, b),
θ^(0)_k = (µ_0k, σ²_0k),   µ_0k ~ N(µ_00, σ²_00),   σ²_0k ~ IG(a, b),
α^(s) ~ Gamma(a_αs, b_αs),   τ^(s) ~ Gamma(a_τs, b_τs),   s ∈ {0, 1},

where K(•; θ) denotes a Gaussian kernel with parameters θ, and DP(•, •) denotes the finite stick-breaking representation of the DP (see supplementary materials for more details).
A graphical representation of the BHTS model is shown in Fig. 5. The blue circle represents the observed variable, white circles represent hidden (latent) variables and squares represent hyperparameters. Conditional dependence between variables is shown via the directed edges.
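For intuition, here is a stripped-down generative sketch of the stick-breaking construction and the active/inactive mixture. This is a simplification, not the full BHTS model: the per-plate DP layer is collapsed, all helper names are illustrative, and only the global weights and Gaussian atoms are retained.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(concentration, K):
    # Finite stick-breaking approximation to DP mixing weights
    v = rng.beta(1.0, concentration, K)
    v[-1] = 1.0  # truncation: force the K weights to sum to one
    stick_left = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * stick_left

def sample_plate(n, pi, lam1, mu1, sd1, lam0, mu0, sd0):
    # b_mi ~ Bernoulli(pi); z_mi drawn from the active (1) or
    # inactive (0) Gaussian mixture with global weights lam
    b = rng.random(n) < pi
    z = np.empty(n)
    for lam, mu, sd, mask in ((lam1, mu1, sd1, b), (lam0, mu0, sd0, ~b)):
        k = rng.choice(len(lam), size=int(mask.sum()), p=lam)
        z[mask] = rng.normal(np.asarray(mu)[k], np.asarray(sd)[k])
    return z, b
```

In the full model, each plate draws its own weights over the shared global atoms, which is the mechanism that selectively shares statistical strength among plates.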
The compound data mean and variance can be used to specify the model hyperparameters, without help from controls. A compound can be active (a potential candidate hit) or inactive (exhibiting no activity).
Prior information about active and inactive compounds is reflected in the model hyperparameters µ_10 and µ_00, respectively. These hyperparameters specify compound activity level estimates and can be specified using the mean of the compound data. For example, µ_10 and µ_00 can be set somewhat larger and smaller than the compound data mean, respectively.
The compound variability hyperparameters {a, b} are common to both active and inactive compounds, but the model facilitates different active (σ²_1) and inactive (σ²_0) compound variabilities. Let v denote the compound data variance, which can be used to specify {a, b}. An inverse gamma distribution with parameters a and b has mean b/(a − 1) and variance b²/((a − 1)²(a − 2)). Setting the inverse gamma variance to 10^−4, for example, and using v as its mean, the inverse gamma density parameters can be derived as a = 2 + v²/10^−4 and b = v(a − 1). We derive MCMC update equations based on approximate full conditional posteriors of the model parameters and construct a Gibbs sampler that iteratively samples from these update equations. The stick-breaking construction [5] is used in the approximate posterior distributions of the global and local DP weights. The update equations are shown in the supplementary materials.
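This moment-matching step can be verified numerically. The sketch below uses an illustrative data variance v = 0.05 and checks the resulting inverse gamma moments by Monte Carlo (if X ~ Gamma(a, rate = b), then 1/X ~ InvGamma(a, b)).

```python
import numpy as np

def invgamma_hyperparams(v, s=1e-4):
    # Solve mean = b/(a-1) = v and variance = b^2/((a-1)^2 (a-2)) = s
    a = 2.0 + v**2 / s
    b = v * (a - 1.0)
    return a, b

# Monte Carlo check with an illustrative data variance v = 0.05
a, b = invgamma_hyperparams(0.05)
rng = np.random.default_rng(1)
draws = 1.0 / rng.gamma(a, 1.0 / b, 500_000)  # InvGamma(a, b) samples
```

The sample mean of `draws` should be close to 0.05 and the sample variance close to 10^−4, confirming the closed-form choice of {a, b}.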

False Discovery Rate and Multiplicity Correction
Our problem can be formulated as performing Σ_m n_m dependent hypothesis tests of b_mi = 0 versus b_mi = 1. Following [8], an estimate of the FDR for a given threshold r can be computed as

FDR(r) = Σ_{m,i} (1 − π(z_mi)) 1(π(z_mi) > r) / Σ_{m,i} 1(π(z_mi) > r),

where π(z_mi) is the posterior probability estimate of the compound z_mi being a hit (see supplementary) and 1(•) is the indicator function. Since the model is fully Bayesian, multiple comparison is automatically accounted for [11] in the estimated posteriors π(z_mi).
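A direct implementation of this estimator, together with choosing the smallest threshold whose estimated FDR stays below a target level, can be sketched as follows (helper names are illustrative):

```python
import numpy as np

def fdr_estimate(post, r):
    # Estimated FDR at threshold r: mean posterior probability of
    # being inactive among the compounds called as hits
    called = post > r
    if not called.any():
        return 0.0
    return float((1.0 - post[called]).mean())

def threshold_for_fdr(post, alpha):
    # Smallest observed-posterior threshold with estimated FDR <= alpha
    for r in np.sort(np.unique(post)):
        if fdr_estimate(post, r) <= alpha:
            return float(r)
    return 1.0
```

In words: the estimated FDR is simply the average posterior probability of inactivity among the called hits, so raising the threshold r trades fewer calls for a smaller expected fraction of false discoveries.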

Test Data
We assess the performance of the BHTS method using a synthetically generated data set and compare it with the B-score method in terms of receiver operating characteristic (ROC) curves and area under the curve (AUC). The R extension package pROC [10] was employed in the analysis.
We also perform experiments with data sets containing real chemical compounds and controls with low, medium and high activity. These real data set experiments are used to assess the proposed method's capability to identify hits with low and medium activity levels. A comparison with the industry standard B-score method is provided.

Synthetic Compound Data
We constructed synthetic data for the purpose of assessing sensitivity and specificity of the proposed algorithm. We generated a set of 80 × 10^3 compounds consisting of hits and non-hits. The hits were generated from a four-component log-normal mixture model with means {0.20, 0.24, 0.28, 0.32} and variances {0.0020, 0.0022, 0.0024, 0.0026}. Similarly, the non-hits were generated from a four-component log-normal mixture with means {0.10, 0.12, 0.14, 0.16} and variances {0.010, 0.011, 0.012, 0.013}. The compounds were then randomly distributed among 1000 compound plates, with each plate consisting of eight rows and ten columns.
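The generation step can be sketched as follows. We treat the listed means and variances as parameters of the underlying normal, which is one reading of the specification (an assumption on our part), and use a 10% hit rate for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def lognormal_mixture(n, means, variances):
    # Equally weighted four-component log-normal mixture; means/variances
    # parameterize the underlying normal (an assumption on our part)
    k = rng.integers(len(means), size=n)
    return rng.lognormal(np.asarray(means)[k], np.sqrt(np.asarray(variances))[k])

n_total, hit_rate = 80_000, 0.10
n_hits = int(n_total * hit_rate)
hits = lognormal_mixture(n_hits, [0.20, 0.24, 0.28, 0.32],
                         [0.0020, 0.0022, 0.0024, 0.0026])
nonhits = lognormal_mixture(n_total - n_hits, [0.10, 0.12, 0.14, 0.16],
                            [0.010, 0.011, 0.012, 0.013])
# shuffle and tile onto 1000 plates of 8 rows x 10 columns
plates = rng.permutation(np.concatenate([hits, nonhits])).reshape(1000, 8, 10)
```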
We simulated plate effects by generating random noise from the matrix-normal distribution, with a zero location matrix and specific row and column scale matrices. The row and column scale matrices were designed to reflect the structure of within-plate row and column effects encountered in practice. Specifically, we used a real data set of compounds exhibiting the plate design shown in Fig. 2. We excluded all control well columns and computed B-scores based on individual 8 × 10 compound well plates. We then estimated the row-wise and column-wise covariance matrices of the difference between the compound raw values and their B-scores. The estimated covariance matrices were then properly scaled and used as row and column scale matrices (shown in Fig. 6) in generating the plate noise effects. An independently drawn noise plate was added to each of the compound plates. The resulting data plates were used as test data.
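Matrix-normal plate noise can be drawn via Cholesky factors of the two scale matrices. In the sketch below, the AR(1)-style scale matrices are hypothetical stand-ins for the covariance matrices the paper estimates from real B-score residuals.

```python
import numpy as np

rng = np.random.default_rng(3)

def matrix_normal(mean, row_scale, col_scale):
    # X = M + A G B with A A' = U (row scale), B' B = V (column scale),
    # G an i.i.d. standard normal matrix of the same shape as M
    A = np.linalg.cholesky(row_scale)
    B = np.linalg.cholesky(col_scale).T
    return mean + A @ rng.standard_normal(mean.shape) @ B

rows, cols = 8, 10
# hypothetical exponentially decaying row/column correlations
U = 0.01 * 0.5 ** np.abs(np.subtract.outer(np.arange(rows), np.arange(rows)))
V = 0.01 * 0.5 ** np.abs(np.subtract.outer(np.arange(cols), np.arange(cols)))
noise = matrix_normal(np.zeros((rows, cols)), U, V)
```

One such independent draw would then be added to each synthetic compound plate.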
Experiments were performed with data sets containing different proportions of active and inactive compounds. Since a large collection of compounds will typically contain a relatively small number of candidate compound hits of interest, we experimented with data sets containing 40%, 10% and 5% active compounds, respectively. In all synthetic data set experiments, the model hyperparameters were the same. See supplementary for specific hyperparameter values.

Comparison with B-score
Experimental ROC results are shown in Fig. 7. The B-score ROC curve is based on the maximum achievable AUC threshold shown in the same figure. It can be seen that the BHTS method improves upon the B-score method in terms of classification accuracy. The results in Fig. 7 also demonstrate that the B-score is highly sensitive to a particularly chosen optimal threshold, as evidenced by the spike in the AUC curve as a function of the threshold.

Hyperparameter Sensitivity Analysis
In this subsection we assess the sensitivity of the proposed method to the choice of hyperparameter values {µ_10, µ_00} by computing the AUC for a range of values of the difference (µ_10 − µ_00). Experimental results are shown in Fig. 8. It can be seen that the model performs similarly in terms of AUC for a range of (µ_10 − µ_00) values and different data sets.

Real Compound Data
We assess the proposed method's capability to identify potential hits with low and medium activity by performing experiments with 688 96-well plates of real mast cell activated compounds and controls. Each plate contained 80 compound and 16 control wells. The data consisted of negative controls, low, medium and high concentration controls, positive controls, and compounds. A summary table of the data with the number of wells for each well type is shown in Tab. 1. The different control type densities are shown in Fig. 9. In this experiment, only control wells were used in assessing the performance of the proposed framework. Experimental ROC results are shown in Fig. 9. The B-score ROC curve is based on the maximum achievable AUC threshold shown in the same figure. It can be seen that the BHTS method outperforms the B-score method in the classification of wells with low and medium concentration. This experimental comparison suggests that the BHTS approach would be more valuable in identifying compounds with low and medium activity. See supplementary for the specific hyperparameters used in this experiment.

Implementation and Scalability
We implemented the proposed model as an R package BHTSpack [14], with some of the inner routines implemented in C/C++. We experimented on a laptop with an Intel(R) Core(TM) i7-4600M CPU @ 2.90GHz and 8GB of RAM, running the 64-bit Ubuntu Linux operating system. It takes 30 minutes to complete 7 × 10^3 Gibbs sampler iterations, using 10^3 plates with 80 × 10^3 compounds. Additionally, it takes 23 hours to complete the same number of iterations, using 50 × 10^3 plates with 4 × 10^6 compounds. The proposed MCMC algorithm for Bayesian posterior computation had good mixing rates across cases in our experiments.

Conclusions
We developed a new probabilistic framework for primary hit screening of compounds based on Bayesian statistics. The statistical model is capable of simultaneously identifying hits from multiple plates, with possibly different numbers of unique compounds, and without the use of controls. It selectively shares statistical strength among different regions of plates, and is thus able to characterize systematic experimental effects across plate groups. The nonparametric nature of the model makes it suitable for handling real compound data that are not necessarily Gaussian distributed. The probabilistic hit identification rules of the algorithm facilitate principled statistical hit identification and FDR control. Experimental validation with synthetic compound data shows improved sensitivity and specificity over the B-score method, which is shown to be highly sensitive to the choice of an optimal threshold. In addition, experiments with real compound data containing negative and positive controls, as well as controls with low, medium and high concentration, demonstrate significant improvement in classification accuracy over the B-score method, particularly in identifying hits with low and medium activity. An efficient implementation in the form of an R extension package BHTSpack [14] makes the method applicable to large-scale HTS data analysis.

B MCMC Update Equations
This section presents the MCMC update equations based on approximate full conditional posteriors of the model parameters and constructs a Gibbs sampler that iteratively samples from these update equations.

Let the indicator variable I^(1)_mi ∈ {1, . . ., K} specify the active component to which the ith compound from the mth plate belongs. Analogously, I^(0)_mi ∈ {1, . . ., K} specifies the inactive component to which the ith compound from the mth plate belongs. The Gibbs sampler cycles through the following updates:

• updating the compound hit indicator variable b_mi;
• updating the active component parameters θ^(1);
• updating the inactive component parameters θ^(0);
• updating the global mixing weights λ, with the cluster indicator J^(1)_mh = 1 if z_mi belongs to the hth active cluster and 0 otherwise.¹
¹ The subscript mi spans plates and compounds (hence J_mi is a compound indicator), while the subscript mh spans plates and clusters (hence J_mh is a cluster indicator).
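The core of the b_mi update is a Bernoulli draw with probability given by the posterior odds of the active versus inactive kernel. A single-kernel sketch is shown below; the full conditional in the BHTS model sums such terms over the H clusters and K components, so this is a simplification with illustrative names.

```python
import numpy as np

def normal_pdf(z, mu, sd):
    # Gaussian kernel K(z; mu, sd^2)
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def hit_posterior(z, pi, mu1, sd1, mu0, sd0):
    # P(b_mi = 1 | z_mi, ...) for one active and one inactive Gaussian
    # kernel; the full conditional mixes such terms over components
    p1 = pi * normal_pdf(z, mu1, sd1)
    p0 = (1.0 - pi) * normal_pdf(z, mu0, sd0)
    return p1 / (p1 + p0)
```

At each sampler iteration, b_mi would be drawn as Bernoulli(hit_posterior(z_mi, ...)), and averaging these draws across iterations yields the hit probability π(z_mi) used for FDR control.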

C Choice of Hyperparameters
This section describes the choice of hyperparameters for the synthetic and real control data set experiments. In all experiments, the gamma density hyperparameters were fixed as a_α0 = a_α1 = a_τ0 = a_τ1 = 10 and b_α0 = b_α1 = b_τ0 = b_τ1 = 5. The active compound proportion prior hyperparameters were fixed as a_π = b_π = 0.5 Σ_m n_m. The hyperparameters specifying the number of plate specific clusters and global components were fixed as H = 10 and K = 10, respectively. The hyperparameters a and b can be computed based on the variance of the data v, as described in the paper via equations (14) and (15). The hyperparameters µ_10, µ_00, {a, b} used in the experiments are summarized in Table 1.

D Model Convergence and Mixing
To assess convergence and mixing of the Gibbs sampler we provide trace plots of some latent variables. We performed a total of 7000 iterations, discarding the first 3500 samples and using the remaining 3500 samples. We show trace plots of the global parameters µ_1k, µ_0k, σ²_1k, σ²_0k, and the global mixing components λ_k.

Figure 1 :
Figure 1: Block diagram of an HTS process.

Figure 3 :
Figure 3: Example of a 96-well plate with compounds in the middle 80 wells and controls in the first and last column wells. Left panel shows a plate containing compounds, negative and positive controls. Right panel shows a 96-well plate in which positive and negative controls alternate to reduce plate edge effects.

Figure 4 :
Figure 4: Density of mast cell activated compounds, exhibiting a log-normal law with a large positive outlier. Data are under the auspices of NIH contract No. HHSN272201400054C.

Figure 5 :
Figure 5: A graphical representation of the BHTS model. The latent binary variable b_mi specifies an active (1) or inactive (0) compound.

Figure 6 :
Figure 6: Synthetic compound data used in the experiments. Left and middle plots show scale matrices used in generating the synthetic noise plates, indicating predominantly row dependent within-plate effects. Right plot shows the density of the resulting synthetic compound and plate effect data.

Figure 7 :
Figure 7: Top row shows B-score method AUC plots as functions of thresholds. Bottom row shows ROC plots of the B-score and BHTS methods. Data sets contain 40% (left column), 10% (middle column) and 5% (right column) active compounds, respectively.

Figure 9 :
Figure 9: Density of control raw values (left), B-score method AUC as a function of threshold (middle) and ROC (right) plots, based on the real control data. Data are under the auspices of NIH contract No. HHSN272201400054C.

Figure 1:
Figure 1: Trace plots of µ_1k. At each iteration, the means were sorted in increasing order to avoid label switching.

Figure 2:
Figure 2: Trace plots of σ²_1k. At each iteration, the variances were sorted in increasing order to avoid label switching.

Figure 10:
Figure 10: Trace plots of σ²_0k. At each iteration, the variances were sorted in increasing order to avoid label switching.

Table 1:
Table 1: Summary of real compound and control data used in the experiments.

The negative controls comprised the set of wells that are not hits, while the low, medium and high concentration controls, together with the positive controls, comprised the set of hit wells. The rationale behind this scenario is that the data may contain compound hits with low, medium and high activity levels.

Bayesian Multi-Plate High-Throughput Screening of Compounds: Supplementary Material
Ivo D. Shterev*1, David B. Dunson2, Cliburn Chan3, and Gregory D. Sempowski1
1 Duke Human Vaccine Institute, Duke University
2 Department of Statistical Science, Duke University
3 Department of Biostatistics and Bioinformatics, Duke University

Table 1:
Table 1: Summary of data statistics and hyperparameters used in the experiments.
2 Synthetic Data Containing 10% Active Compounds

Trace plots of µ_1k. At each iteration, the means were sorted in increasing order to avoid label switching.