Generalized Empirical Bayes Modeling via Frequentist Goodness of Fit

The two key issues of modern Bayesian statistics are: (i) establishing a principled approach for distilling a statistical prior that is consistent with the given data from an initial believable scientific prior; and (ii) developing a consolidated Bayes-frequentist data analysis workflow that is more effective than either of the two separately. In this paper, we propose the idea of "Bayes via goodness of fit" as a framework for exploring these fundamental questions, in a way that is general enough to embrace almost all of the familiar probability models. Several examples, spanning application areas such as clinical trials, metrology, insurance, medicine, and ecology, show the unique benefit of this new point of view as a practical data science tool.

Bayesians and frequentists have long been ambivalent toward each other 1-3. The concept of "prior" remains the center of this 250-year-old tug-of-war: frequentists view the prior as a weakness that can hamper scientific objectivity and corrupt the final statistical inference, whereas Bayesians view it as a strength for incorporating relevant domain knowledge into the data analysis. The question naturally arises: how can we develop a consolidated Bayes-frequentist data analysis workflow 4-7 that enjoys the best of both worlds? The objective of this paper is to develop one such modeling framework.
We observe samples y = (y_1, …, y_k) from a known probability distribution f(y|θ), where the unobserved parameters θ = (θ_1, …, θ_k) are independent realizations from an unknown π(θ). Given such a model, Bayesian inference typically aims at answering the following two questions:
• MacroInference: How should we combine the k model parameters to come up with an overall, macro-level aggregated statistical behavior of θ_1, …, θ_k?
• MicroInference: Given the observables y_i, how should we simultaneously estimate the individual micro-level parameters θ_i?
Thanks to Bayes' rule, answers to these questions are fairly straightforward and automatic once we have the observed data {y_i}_{i=1}^k and a specific choice for π(θ). A common practice is to choose π as the parametric conjugate prior g(θ; α, β), where the hyper-parameters are either selected based on an investigator's expert input or estimated from the data (current/historical) when little prior information is available.
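As a small worked illustration of this conjugate recipe (the Beta(1.4, 8.6) hyper-parameters and the counts below are made up for demonstration), the beta-binomial update and its shrinkage behavior can be sketched as:

```python
from scipy.stats import beta

# Conjugate beta-binomial update: a Beta(a, b) prior combined with y
# successes in n trials gives the posterior Beta(a + y, b + n - y).
def posterior(a, b, y, n):
    return beta(a + y, b + n - y)

a, b = 1.4, 8.6            # made-up hyper-parameters for g(theta; a, b)
y, n = 4, 14               # made-up micro-level data for one study

post = posterior(a, b, y, n)
prior_mean = a / (a + b)              # macro-level summary of the theta_i's
post_mean = (a + y) / (a + b + n)     # micro-level estimate for this study
mle = y / n
# The posterior mean is a compromise (shrinkage) between the study's MLE
# and the prior mean; the smaller n is, the stronger the shrinkage.
```

The same two-line update applies for any conjugate pair in Table 1; only the family of g changes.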

Motivating Questions
However, an applied Bayesian statistician may find it unsatisfactory to work with an initial believable prior g(θ) at face value, without being able to interrogate its credibility in the light of the observed data 8,9, as this choice unavoidably shapes his or her final inferences and decisions. Good statistical practice thus demands greater transparency to address this trust deficit. What is needed is a justifiable class of prior distributions to answer the following pre-inferential modeling questions: Why should I believe your prior? How can we check its appropriateness (self-diagnosis)? How can we quantify and characterize the uncertainty of the a priori selected g? Can we use that information to "refine" the starting prior (auto-correction), which is then used for subsequent inference? In the end, the question remains: how can we develop a systematic and principled approach to go from a scientific prior to a statistical prior that is consistent with the current data? A resolution of these questions is necessary to develop a "dependable and defensible" Bayesian data analysis workflow, which is the goal of the "Bayes via goodness-of-fit" technology.
To model the departure of the unknown prior π from the elicited g, we can approximate it by projecting onto the orthonormal basis {T_j} satisfying ∫ T_i(θ; G) T_j(θ; G) dG(θ) = δ_ij. We choose T_j(θ; G) to be Leg_j(G(θ)), a member of the LP-class of rank-polynomials 11. The system {T_j} possesses two attractive properties: its members are polynomials of the rank transform G(θ) and thus constitute a robust basis, and they are orthonormal in L²(G), for any arbitrary continuous G. This is not to be confused with the standard Legendre polynomials Leg_j(u), 0 < u < 1, which are orthonormal with respect to the Uniform[0, 1] measure. For more details, see Supplementary Appendix B. The above discussion paves the way for the following definition.
The LP-Fourier coefficients LP[j; G, Π] are the key parameters that help us express mathematically the "gap" between the a priori anticipated G and the true prior Π. When all the expansion coefficients are zero, we automatically recover g.
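The orthonormality claim is easy to check numerically. The sketch below (with an illustrative Beta(1.24, 34.7) choice for G) builds T_j(θ; G) = Leg_j(G(θ)) from shifted, normalized Legendre polynomials and verifies that the Gram matrix under dG is the identity:

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.stats import beta

def leg(j, u):
    # Orthonormal Legendre polynomials on [0, 1]: sqrt(2j+1) * P_j(2u - 1)
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)

# With the substitution u = G(theta), orthonormality of T_j(theta; G)
# with respect to dG reduces to orthonormality of Leg_j under Uniform[0, 1].
G = beta(1.24, 34.7)                       # illustrative conjugate G
u = (np.arange(100_000) + 0.5) / 100_000   # midpoint grid for integration
theta = G.ppf(u)                           # theta-grid mapped through G^{-1}
T = [leg(j, G.cdf(theta)) for j in range(1, 5)]

gram = np.array([[np.mean(Ti * Tj) for Tj in T] for Ti in T])
# gram is (numerically) the 4x4 identity matrix, for this and any
# other continuous choice of G.
```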
We will now spend a few words on the LP-DS(G, m) class of prior models:
• When π(θ) is a member of the DS(G, m) class of priors, the orthogonal LP-transform coefficients (1.2) satisfy LP[j; G, Π] = E_Π{T_j(Θ; G)}. Thus, given a random sample θ_1, …, θ_k from π(θ), we could easily estimate the unknown LP-coefficients, and thus d and π, by computing the sample mean k^{-1} Σ_{i=1}^k T_j(θ_i; G). But unfortunately, the θ_i's are unobserved. Section 2 describes an estimation strategy that deals with the situation at hand. Before introducing this technique, however, we must acquaint the reader with the role played by the U-function d(u; G, Π) in uncertainty quantification and characterization of the initial believable prior g. That is the objective of Section 1.2.
• Under Definition 2, we have DS(G, m = 0) ≡ g(θ; α, β). The truncation point m in (1.2) reflects the concentration of permissible π around a known g. While this class of priors is rich enough to approximate any reasonable prior with the desired accuracy in the large-m limit, one can easily exclude absurdly rough densities and focus on a neighborhood around the domain-knowledge-based g by choosing m not "too big."
• The motivations behind the name 'DS-prior' are twofold. First, our formulation operationalizes I. J. Good's 'successive deepening' idea 12 for Bayesian data analysis: "A hypothesis is formulated, and, if it explains enough, it is judged to be probably approximately correct. The next stage is to try to improve it. The form that this approach often takes in EDA is to examine residuals for patterns, or to treat them as if they were original data" (I. J. Good, 1983, p. 289).
Secondly, our prior has two components: a Scientific g that encodes an expert's knowledge, and a Data-driven d. That is to say, our framework embraces both data and science, in a testable manner 13.
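Returning to the moment estimate of the LP-coefficients: if the θ_i's were observable, the computation would be a one-liner. The sketch below (simulated θ's; hypothetical Beta(2, 10) hyper-parameters) also illustrates the sanity check that the estimated coefficients hover near zero when π = g:

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.stats import beta

def leg(j, u):
    # Orthonormal Legendre polynomials on [0, 1]
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)

rng = np.random.default_rng(7)
a, b, k = 2.0, 10.0, 50_000            # hypothetical hyper-parameters

# Oracle setting: theta_i observed, drawn from pi = g itself.
theta = beta.rvs(a, b, size=k, random_state=rng)
u = beta.cdf(theta, a, b)              # rank transform G(theta)

# Sample-mean estimates of LP[j; G, Pi] = E_Pi{T_j(Theta; G)}
lp = np.array([np.mean(leg(j, u)) for j in range(1, 5)])
# Since pi = g here, every coefficient is ~ 0 up to O(k^{-1/2}) noise.
```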
Exploratory Diagnostics and U-Function. Are your data compatible with the pre-selected g(θ)? If yes, the job is done without getting into the arduous business of nonparametric estimation. If not, we can model the "gap" between the parametric g and the true unknown prior π, which is often far easier than modeling π from scratch (hence, one can learn from a small number of cases)! If the observed y_1, …, y_k look very unexpected given g(θ; α, β), it is completely reasonable to question the sanctity of such a self-selected prior. Here we provide a formal nonparametric exploratory procedure to describe comprehensively the uncertainty about the choice of g. Using the algorithm detailed in the next section, we estimate U-functions for four real data sets. The first three are binomial variates and the last one is normal. The results are shown in Fig. 1.
• The rat tumor data 14 consist of observations of endometrial stromal polyp incidence in k = 70 groups of female rats. For each group, y_i is the number of rats with polyps and n_i is the total number of rats in the experiment.
• The terbinafine data 15 comprise k = 41 studies, which investigate the proportion of patients whose treatment terminated early due to some adverse effect of an oral anti-fungal agent: y_i is the number of terminated treatments and n_i is the total number of patients in the experiment.
• The rolling tacks data 16 involve flipping a common thumbtack 9 times. The data consist of 320 pairs (9, y_i), where y_i represents the number of times the thumbtack landed point up.
• The ulcer data consist of forty randomized trials of a surgical treatment for stomach ulcers conducted between 1980 and 1989 17,18. Each of the 40 trials has an estimated log-odds ratio y_i (with standard error s_i) that measures the rate of occurrence of recurrent bleeding given the surgical treatment.
Throughout, we have used maximum likelihood estimates (MLE) for the initial starting values of the hyperparameters. However, one can use any other reasonable choice, which may involve an expert's judgment. What is important to note is the shape of the estimated U-function: its departure from uniformity indicates that the assumed conjugate prior g(θ; α, β) needs a 'repair' to resolve the prior-data conflict. For example, the flat shape of the estimated d in Fig. 1(b) indicates that our initial selection of g(θ; α, β) is appropriate for the terbinafine and ulcer data. Therefore, one can proceed in turning the "Bayesian crank" with confidence using the parametric beta and normal priors, respectively.
In contrast, Fig. 1(a,c) provide a strong warning against using g = Beta(α, β) for the rat tumor and rolling tacks experiments. The smooth estimated U-functions expose the nature of the discrepancy between g and the observed data through an "extra" mode. Clearly, the answer does not lie in choosing a different (α, β), as this cannot supply the missing bimodality. This brings us to an important point: full Bayesian analysis, by assigning a hyperprior distribution to α and β, is not always a fail-safe strategy and should be practiced with caution (not in a blind, mechanical way). The bottom line is: uncertainty in the prior probability model ≠ uncertainty in α, β.
A foolproof prior uncertainty model, thus, has to allow ignorance in terms of the functional shape around g. The foregoing discussion motivates the following entropy-like measure of uncertainty.
Thus, the proposed measure captures the departure of the U-function from uniformity. The following result connects our qLP statistic with relative entropy.
Theorem 1. The qLP uncertainty quantification statistic, qLP = Σ_j |LP[j; G, Π]|², satisfies the following relation:
KL(Π‖G) ≤ log(1 + qLP),
where KL(Π‖G) is the Kullback-Leibler (KL) divergence between the true prior π and its parametric approximation g.
Proof. Express the KL-information divergence using the U-function by substituting G(θ) = u:
KL(Π‖G) = ∫ π(θ) log{π(θ)/g(θ)} dθ = ∫_0^1 d(u; G, Π) log d(u; G, Π) du.
Since d(u; G, Π) du defines a probability measure on [0, 1], Jensen's inequality gives
∫_0^1 d log d du ≤ log ∫_0^1 d² du = log(1 + Σ_j |LP[j; G, Π]|²),
where the last equality is Parseval's identity. □
We conclude this section with a few additional remarks:
• Our exploratory uncertainty diagnostic tool encourages "interactive" data analysis, similar in spirit to Gelman et al. 19. Subject-matter experts can use this tool to "play" with different hyperparameter choices in order to filter out the reasonable ones. This functionality might be especially valuable when multiple expert opinions are available.
• When the estimated d shows evidence of prior-data conflict, the question remains: what to do next? It is not enough to check the adequacy without giving the user an explanation for the misfit, or identifying the "deeper" structure that is missing in the starting parametric prior. Fortunately, our DS(G, m) model suggests a simple, yet formal, guideline for upgrading: d(u; G, Π) captures the patterns that were not a priori anticipated. Hence our formalism simultaneously addresses the problem of uncertainty quantification and the subsequent model synthesis.
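A numeric sanity check of this entropy connection is straightforward on the u-scale. In the sketch below, the two LP coefficients are made up, and the bound KL(Π‖G) ≤ log(1 + qLP), obtained via Jensen's inequality plus Parseval's identity, is our reading of the theorem:

```python
import numpy as np
from scipy.special import eval_legendre

def leg(j, u):
    # Orthonormal Legendre polynomials on [0, 1]
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)

u = (np.arange(100_000) + 0.5) / 100_000        # midpoint grid on (0, 1)
c = {1: 0.30, 2: -0.20}                          # made-up LP coefficients
d = 1 + sum(cj * leg(j, u) for j, cj in c.items())   # U-function d(u; G, Pi)

q_lp = sum(cj ** 2 for cj in c.values())         # qLP = sum_j LP[j]^2
kl = np.mean(d * np.log(d))                      # KL(Pi||G) = int d log d du
# d integrates to 1, stays positive, and kl respects the log(1 + qLP) bound.
```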

Estimation Method
Theory. In this section, we lay out the key theoretical results that we use for designing our algorithm. Before deriving the general expressions under the LP-DS(G, m) model, it is helpful to start by recalling the results for the basic conjugate model, i.e., Θ ~ DS(G, m = 0) and Y | Θ = θ ~ f(y|θ). Table 1 provides the marginal and posterior distributions. The subscript 'G' in these expressions underscores the fact that they are calculated for the conjugate g-model.
Next, we seek to extend these parametric results to the LP-nonparametric setup in a systematic way. In particular, without deriving analytical expressions for each case separately, we want to establish a more general representation theory that is valid for all of the above and, in fact, extends to any conjugate pair, explicating the underlying unity of our formulation.

Theorem 2. Consider the following model:
Y_i | Θ_i = θ_i ~ f(y|θ_i) and Θ_i ~ π ∈ DS(G, m), with G being the associated conjugate prior. Under this framework, the following holds:
(a) The marginal distribution of y_i is given by
m(y_i) = m_G(y_i) [1 + Σ_{j=1}^m LP[j; G, Π] E_G{T_j(Θ_i; G) | y_i}].   (2.1)
(b) The posterior distribution is given by
π(θ_i | y_i) = π_G(θ_i | y_i) d(G(θ_i); G, Π) / [1 + Σ_{j=1}^m LP[j; G, Π] E_G{T_j(Θ_i; G) | y_i}].   (2.2)
(c) For any general random variable h(Θ_i), the Bayes conditional mean estimator can be expressed as
E{h(Θ_i) | y_i} = [E_G{h(Θ_i) | y_i} + Σ_{j=1}^m LP[j; G, Π] E_G{h(Θ_i) T_j(Θ_i; G) | y_i}] / [1 + Σ_{j=1}^m LP[j; G, Π] E_G{T_j(Θ_i; G) | y_i}].   (2.3)
Proof. The marginal distribution under the DS(G, m)-nonparametric model can be represented as
m(y) = ∫ f(y|θ) g(θ; α, β) d(G(θ); G, Π) dθ.   (2.4)
Expanding the U-function in the LP basis (1.2) yields d(G(θ); G, Π) = 1 + Σ_j LP[j; G, Π] T_j(θ; G). The next step is to recognize that
∫ f(y|θ) g(θ; α, β) T_j(θ; G) dθ = m_G(y) E_G{T_j(Θ; G) | y}.   (2.5)
Substituting (2.5) into the second term of (2.4) leads to
m(y) = m_G(y) [1 + Σ_j LP[j; G, Π] E_G{T_j(Θ; G) | y}].   (2.6)
This completes the proof of part (a); parts (b) and (c) follow by applying Bayes' rule, with (2.6) supplying the normalizing constant.

Table 1. Details on the distributions, their conjugate priors, and the resulting marginal and posterior distributions for four familiar families (two discrete and two continuous): Binomial, Poisson, Normal, and Exponential.
Our LP-Bayes recipe (2.1)-(2.3) admits an interesting overall structure: the usual 'parametric' answer multiplied by a correction factor involving the LP[j; G, Π]'s. This decoupling pays dividends for theoretical interpretation as well as computation.
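This decoupling can be verified numerically. The sketch below uses the beta-binomial pair with made-up LP coefficients; the specific identity m(y) = m_G(y)[1 + Σ_j LP[j] E_G(T_j | y)] is our reading of the marginal representation:

```python
import numpy as np
from scipy.stats import beta, binom
from scipy.special import eval_legendre

def leg(j, u):
    # Orthonormal Legendre polynomials on [0, 1]
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)

a, b, n, y = 2.0, 8.0, 20, 5               # made-up conjugate setup
lp = {1: 0.25, 2: -0.15}                    # made-up DS(G, m=2) coefficients

th = (np.arange(100_000) + 0.5) / 100_000  # theta-grid on (0, 1)
g = beta.pdf(th, a, b)
d = 1 + sum(c * leg(j, beta.cdf(th, a, b)) for j, c in lp.items())
lik = binom.pmf(y, n, th)

m_direct = np.mean(lik * g * d)            # int f(y|th) g(th) d(G(th)) dth
m_G = np.mean(lik * g)                     # parametric marginal m_G(y)
post_G = lik * g / m_G                     # parametric posterior density
corr = 1 + sum(c * np.mean(leg(j, beta.cdf(th, a, b)) * post_G)
               for j, c in lp.items())
m_thm = m_G * corr                         # "parametric answer x correction"
# m_direct and m_thm agree: the identity is exact, only grid error remains.
```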
Algorithm. The critical parameters of our DS(G, m) model are the LP-Fourier coefficients, which, as is evident from (1.3), could be estimated simply by their empirical counterpart k^{-1} Σ_i T_j(θ_i; G). But, as we pointed out earlier, θ_1, …, θ_k are unobservable. How can we then estimate those parameters? While the θ_i's are unseen, it is interesting to note that they have left their footprints in the observables y_1, …, y_k. Following the spirit of the EM algorithm, an obvious proxy for T_j(θ_i; G) is its posterior mean E_G{T_j(Θ_i; G) | y_i}, which also naturally arises in the expression (2.1). This leads to the following 'ghost' LP-estimates:
LP~[j; G, Π] = k^{-1} Σ_{i=1}^k E_G{T_j(Θ_i; G) | y_i},
which target LP[j; G, Π] by virtue of the law of iterated expectations. These estimates can then be refined via iterations. The following algorithm implements this strategy.
We conclude this section with a few remarks on the algorithm:
• Taking inspiration from I. J. Good's type-II maximum likelihood nomenclature 20, we call our algorithm the Type-II Method of Moments (MOM); its computation is remarkably tractable and does not require any numerical optimization routine.
Results. In addition to the rat tumor data (cf. Section 1.2), here we introduce and analyze three additional datasets: two binomial and one Poisson example.
• The surgical node data 22 involve the number of malignant lymph nodes removed during intestinal surgery.
Scientific Reports | (2018) 8:9983 | DOI:10.1038/s41598-018-28130-5
• The insurance data 24, shown in Table 4, provide a single year of claims data for an automobile insurance company in Europe. The counts y_i ~ Poisson(θ_i) represent the total number of people who had i claims in a single year.
The rat tumor data show a prominent bimodal shape, which should not come as a surprise in light of Fig. 1(a). For the surgical data, the DS-prior puts excess mass around 0.4, which concurs with the findings of Efron 22. In the case of the Navy shipyard data, our analysis corrects the starting "U"-shaped Jeffreys prior to make it asymmetric with an extended peak at 0. This is quite justifiable looking at the proportions in the given data: (0/5, 0/5, 0/5, 1/5, 5/5). Finally, for the insurance data, the starting gamma prior requires a second-order (dispersion-parameter) correction to yield a bona fide π (2.13), which makes it slightly wider in the middle with a sharper peak and tail.

Inference
MacroInference. A single study hardly provides adequate evidence for a definitive conclusion due to the limited sample size. Thus, often the scientific interest lies in combining several related but (possibly) heterogeneous studies to come up with an overall macro-level inference that is more accurate and precise than the individual studies. This type of inference is a routine exercise in clinical trials and public policy research.
Terbinafine data analysis. For the terbinafine data, the aim is to combine k = 41 treatment arms with varying event rates and produce a pooled proportion of patients who withdrew from the study because of the adverse effects of oral anti-fungal agents. Recall that our U-function diagnostic in Fig. 1(b) indicated the parametric beta-binomial model with MLE estimates α = 1.24 and β = 34.7 as a justifiable choice for these data. Thus the adverse event probabilities across the k = 41 studies can be summarized by the prior mean α/(α + β) = 0.034. We apply the parametric bootstrap using the DS(G, m)-sampler (see Supplementary Appendix C) with m = 0 to compute the standard error (SE): 0.034 ± 0.006, highlighted in Fig. 3(b). If one assumes a single binomial distribution for all the groups (i.e., under homogeneity), then the 'naive' pooled average Σ_i y_i / Σ_i n_i would lead to an overoptimistic, biased estimate of 0.037 ± 0.0034. In this example, heterogeneity arises due to overdispersion among the exchangeable studies. But it can arise in other ways too; an example is given in the following case study.
Rat tumor and rolling tacks data analysis. Can we always extract a "single" overall number to aptly describe k parallel studies? No, not in general. To appreciate this, let us look at Fig. 3(a,c), which depict the estimated DS-priors for the rat tumor and rolling tacks data. We highlight two key observations: 1. Mixed population. The bimodality indicates the existence of two distinct groups of θ_i's. We call this "structured heterogeneity," which sits between two extremes: homogeneity and complete heterogeneity (where there is no similarity between the θ_i's whatsoever). The presence of two clusters in the rolling tacks data was previously detected by Jun Liu 25. The author further noted, "Clearly, this feature is unexpected and cannot be revealed by a regular parametric hierarchical analysis using the Beta-binomial priors." One plausible explanation for this two-group structure is that the tack data were produced by two persons with some systematic difference in their flipping. On the other hand, the bimodal shape of the rat example was not previously anticipated 14,26,27. The resulting two groups of rat tumor experiments are enumerated in Table 2.
Although we do not have the necessary biomedical background to scientifically justify this new discovery, we are aware that numerous factors (e.g., experimental design, underlying conditions, selection of specific groups of female rats) may potentially contribute to this systemic variation. 2. From single mean to multiple modes. An attempt to combine the two subpopulations using a single prior mean (as carried out for the terbinafine example) would result in overestimating one group and underestimating the other. We prefer the modes of the estimated prior π, along with their SEs, as a good representative summary, which can easily be computed by the nonparametric smooth bootstrap via the DS(G, m) sampler.
Learning from big heterogeneous studies is one of the most important yet unsettled matters of modern macroinference 18,28 . Our key insight is the realization that the 'science of combining' critically depends on the shape of the estimated prior. One interesting and commonly encountered case is multimodal structure of the learned prior. In such situations, instead of the prior-mean summary, we recommend group-specific modes. Our algorithm is also capable of finding data-driven clusters of the partially exchangeable studies in a fully automated manner.
Learning From Uncertain Data. An important problem of measurement science that routinely appears in metrology, chemistry, physics, biology, and engineering can be stated as follows: measurements are made by k different laboratories in the form of y 1 , …, y k along with their estimated standard errors s 1 , …, s k . Given this uncertain data, a fundamental problem of interest is inference concerning: (i) estimation of the consensus value of the measurand, and (ii) evaluation of the associated uncertainty. The data in Table 3 are an example of such an inter-laboratory study involving k = 28 measurements for the level of arsenic in oyster tissue. The study was part of the National Oceanic and Atmospheric Administration's National Status and Trends Program Ninth Round Intercomparison Exercise 29 .
Arsenic data analysis. We start with the DS-measurement model: Y_i | θ_i ~ N(θ_i, s_i²), θ_i ~ π ∈ DS(G, m). The shape of the estimated U-function in Fig. 4(a) indicates that the pre-selected prior N(μ = 13.22, τ² = 1.85²) is clearly unacceptable for the arsenic data, thereby disqualifying the classical Gaussian random-effects model 30. The DS-corrected estimate of π shows an interesting asymmetric pattern with two bumps. The left mode represents measurements from three laboratories that are unlike the majority. The result of our macro-inference is shown in Fig. 4(b), which delivers the consensus value 13.6 ± 0.24. This is clearly far more resistant to the fairly extreme low measurements and, surprisingly, also more accurate than the parametric EB estimate 13.22 ± 0.26. Most importantly, our scheme provides an automated solution to the fundamental problem of which (as well as how) measurements from the participating laboratories should be combined to form a best consensus value. Possolo 31 fits a Bayesian hierarchical model with a Student's t_ν prior, where the degrees of freedom ν was also treated as a random variable over the range {3, …, 118}. Although a heavy-tailed Student's t-distribution is a good choice to 'robustify' the analysis, it fails to capture the inherent asymmetry and the finer modal structure on the left. Distinguishing a long tail from bimodality is an important problem of applied statistics in its own right.
To summarize, there are several attractive features of our general approach: (i) it adapts to the structure of the data, yet (ii) allows the use of expert opinion to go from a knowledge-based prior to a statistical prior; (iii) if multiple expert opinions are available, one can also use the U-diagnostic for reconciliation (exploratory uncertainty assessment); (iv) it avoids the questionable exercise of detecting and discarding apparently unusual measurements 32; and finally (v) our theory remains applicable for a very small number of parallel cases (cf. Fig. 2(c)), a situation that is not uncommon in inter-laboratory studies.
MicroInference. The objective of microinference is to estimate a specific micro-level θ_i given y_i. Consider the rat tumor example where, along with the earlier k = 70 studies, we have additional current experimental data showing that y_71 = 4 out of n_71 = 14 rats developed tumors. How can we estimate the probability of a tumor for this new clinical study? There are at least three ways to answer this question: • Frequentist MLE, which uses only the current study. • Parametric empirical Bayes (Stein-type) estimate, which shrinks toward the overall prior mean; for smaller (n_i) studies, the shrinkage intensity is higher, which allows them to learn from other experiments. • Nonparametric Elastic-Bayes estimate: Is it a wise strategy to shrink all the estimates toward the grand mean 0.14? Interestingly, this shrinking point is near the valley between the twin peaks of the rat tumor prior density estimate (verify from Fig. 3(a)) and therefore may not represent a preferred location. Then, where to shrink? Ideally, we want to learn only from the relevant subset of the full dataset (selective shrinkage); e.g., for the rat data, it would be group 2 of Table 2. This brings us to the question: how do we rectify the parametric empirical Bayes estimate? The formula (2.3) supplies the required (nonlinear) adjusting factor, which reproduces the parametric estimate when d ≡ 1. Due to its flexibility and adaptability, we call this the Elastic-Bayes estimate. It can be viewed as a nonparametric class of shrinkage estimators that starts with the classical Stein formula and rectifies it by looking at the data.
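The Elastic-Bayes adjustment can be sketched as a ratio correction to the parametric posterior mean; the form below (beta-binomial model, made-up LP coefficients and hyper-parameters) is our reading of formula (2.3) with h(Θ) = Θ:

```python
import numpy as np
from scipy.stats import beta, binom
from scipy.special import eval_legendre

def leg(j, u):
    # Orthonormal Legendre polynomials on [0, 1]
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)

a, b, n, y = 1.4, 8.6, 14, 4               # made-up prior and current study
lp = {1: 0.2, 2: -0.1}                      # made-up DS coefficients

th = (np.arange(100_000) + 0.5) / 100_000  # theta-grid on (0, 1)
g = beta.pdf(th, a, b)
lik = binom.pmf(y, n, th)
post_G = lik * g / np.mean(lik * g)        # parametric posterior pi_G(th | y)
T = {j: leg(j, beta.cdf(th, a, b)) for j in lp}

peb = np.mean(th * post_G)                 # parametric EB posterior mean
num = peb + sum(c * np.mean(th * T[j] * post_G) for j, c in lp.items())
den = 1 + sum(c * np.mean(T[j] * post_G) for j, c in lp.items())
elastic = num / den                        # Elastic-Bayes posterior mean
# With all LP coefficients set to 0, elastic collapses to peb, the
# classical Stein-type answer; nonzero coefficients bend the shrinkage
# target toward the data-driven shape of the prior.
```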
Rat tumor example. Figure 5 compares Stein's empirical Bayes estimate with our Elastic-Bayes estimate for all k = 70 tumor rates. The posterior mean, median, and mode of the θ_j's are shown side by side in three plots. The departure from the 45° reference line is a consequence of "adaptive shrinkage." Elastic-Bayes automatically shrinks each empirical estimate towards one of the representative modes (0.034 and 0.156), whereas Stein's PEB estimate uses the grand mean (≈0.14) as the shrinking target for all the tumor rates. This is particularly prominent in Fig. 5(c) for the maximum a posteriori (MAP) estimates. As before, for a heterogeneous population, we prescribe the posterior mode as the final prediction.
The Pharma example. Our DS Elastic-Bayes estimate is especially powerful in the presence of prior-data conflict. To illustrate this point, we report a small simulation study. The goal is to compare the MSE of the frequentist MLE, the parametric empirical Bayes, and the nonparametric Elastic-Bayes estimates for a new study y_new under various levels of prior-data conflict. To capture the prior-data conflict, we consider a contamination model for π(θ) and y_new. The parameter η varies from 0 to 0.50 in increments of 0.05; as η increases, we introduce more heterogeneity into the true prior distribution and exacerbate the prior-data conflict between π(θ) and y_new; see Fig. 6(a). We simulated k = 100 values θ_i from π(θ), with which we generated y_i | θ_i ~ Bin(60, θ_i). Using the Type-II MOM algorithm on the simulated data set, we estimated π. After generating y_new, we then determined the frequentist MLE, the parametric EB (PEB), and the nonparametric Elastic-Bayes estimates of the mode. For each value of η, we repeated this process 250 times and computed the mean squared error (MSE) of each estimate. To better illustrate the impact of prior-data conflict, we used the ratios of PEB MSE to frequentist MSE and of PEB MSE to DS MSE. The results are shown in Fig. 6(b).
The Elastic-Bayes estimate outperforms Stein's estimate for all η. More importantly, the efficiency of our estimate continues to increase with the heterogeneity. This happens because Elastic-Bayes performs selective shrinkage of the sample proportion towards the appropriate mode (near 0.3), and thus gains "strength" by combining information from 'similar' studies even as the contamination in the study population increases. An interesting observation is the performance of the frequentist MLE: as the data become more heterogeneous, the frequentist MLE improves relative to Stein's PEB estimate. Our simulation depicts a scenario that is very common in historically controlled clinical trials, where heterogeneity arises due to changing conditions. Additional comparisons with other empirical Bayes procedures can be found in Supplementary Appendix G. Figure 7 shows the posterior plots for specific studies in four of our data sets: surgical node, rat tumor, Navy shipyard, and rolling tacks. In studies like the surgical node data, personalized predictions are typically valuable. Figure 7(a) shows the posterior distributions for three selected patients, which are indistinguishable from Efron's deconvolution answer 35 [Fig. 4]; for the patient with n_i = 32 and y_i = 7, almost certainly θ_i > 0.5, i.e., he or she is highly prone to positive lymph nodes and thus should be referred for follow-up therapy. With regard to the rat tumor data, Fig. 7(b) depicts the DS-posterior distribution of θ_71 along with its parametric counterpart π_G(θ_71 | y_71, n_71). Interestingly, the DS nonparametric posterior shows less variability; this possibly has to do with the selective learning ability of our method, which learns from similar studies (e.g., group 2) rather than the whole heterogeneous mix of studies. We see a similar phenomenon in the rolling tacks data, where panel (d), y_i = 3, is more reflective of the first mode, and panel (f), y_i = 8, of the second.
Panel (e) shows the bimodal posterior for the y_i = 6 case. Finally, the Navy shipyard data (Fig. 7(c)) exhibit another advantage of DS priors: they work equally well for small k. The DS-posterior mean estimate for y_6 = 0 is 0.0471, which is consistent with the findings of Sivaganesan and Berger 36. (Figure 7: the first row shows posterior distributions for the surgical node and rat tumor data; panel (c) displays the posterior of θ_6 given y_6 = 0 for the Navy shipyard data. The second row shows the posterior distributions of (d) y_i = 3, (e) y_i = 6, and (f) y_i = 8 from the rolling tacks data.)
The Third Culture. Each EB modeling culture has its own strengths and shortcomings. For example, PEB methods are extremely efficient when the true prior is a Gamma. On the other hand, NEB methods possess extraordinary robustness in the face of a misspecified prior, yet they are inefficient when in fact π ≡ Gamma(α, β). Noticing this trade-off, Robbins raised the following intriguing question 10: how can this efficiency-robustness dilemma be resolved in a logical manner? To address this issue, we must design a data analysis protocol that offers a mechanism to answer the following intermediate modeling questions (before jumping to estimate π): Can we assess whether or not a Gamma prior is adequate in light of the sample information? In the event of a prior-data conflict, how can we estimate the 'missing shape' in a completely data-driven manner? All of these questions are at the heart of our 'Bayes via goodness-of-fit' formulation, whose goal is to develop a third culture of generalized empirical Bayes (gEB) modeling by uniting the parametric and nonparametric philosophies. We compute the DS Elastic-Bayes estimate via (2.3), which reduces to the PEB answer when d(u; G, Π) ≡ 1 (i.e., the true prior is a Gamma) and modifies it nonparametrically only when needed, thereby turning Robbins' vision into action (see Supplementary Appendices A and G for more discussion on this point).

Three additional real examples.
The insurance data. Table 4 reports the Bayes estimates E(θ | Y = y) for the insurance data. We compare five methods: parametric Gamma, classical Robbins' EB, Efron's Deconvolve, Koenker's NPMLE, and our procedure. The raw nonparametric Robbins estimator is clearly erratic at the tail due to data sparsity. The PEB estimate overcomes this limitation and produces a stable estimate; but is it dependable? Should we stop here and report this as our final result? Our exploratory U-diagnostic tells us (consult Sec. 2.3) that the PEB estimate needs a second-order correction to resolve the discrepancy between the Gamma prior and the data. The improved LP-nonparametric Stein estimates are shown in the last row of Table 4.
The butterfly data. The next example is Corbet's butterfly data 37, one of the earliest examples of empirical Bayes. Alexander Corbet, a British naturalist, spent two years in Malaysia trapping butterflies in the 1940s. The data consist of the number of species trapped exactly y times during those two years, for y = 1, …, 24. Figure 8(b) plots the different Bayes estimates. The Robbins procedure suffers from similar 'jumpiness.' The blue dotted line represents the linear PEB estimate with α = 0.104 and β = 89.79 (same as Efron and Hastie 24, Eq. 6.24), estimated from the zero-truncated negative binomial marginals. Our DS-estimate is almost sandwiched between the PEB and Deconvolve answers. The NPMLE method (the orange curve) yields a strange-looking sinusoidal pattern, probably due to overfitting. In conclusion, the triumph of our procedure over the other Bayes estimators lies in the automatic adaptability that Robbins alluded to in his 1980 article 10.

Discussions
We laid out new mechanics of data modeling that effectively consolidates Bayes and frequentist, parametric and nonparametric, subjective and objective, and quantile and information-theoretic philosophies. At a practical level, however, the main attractions of our "Bayes via goodness-of-fit" framework lie in its (i) ability to quantify and protect against prior-data conflict using exploratory graphical diagnostics; and (ii) theoretical simplicity that lends itself to analytic closed-form solutions, avoiding computationally intensive techniques such as MCMC or variational methods.
We have developed the concepts and principles progressively through a range of examples, spanning application areas such as clinical trials, metrology, insurance, medicine, and ecology, highlighting the core of our approach that gracefully combines Bayesian way of thinking (parameter probability where prior knowledge can be encoded) with a frequentist way of computing via goodness-of-fit (evaluation and synthesis of the prior distribution). If our efforts can help to make Bayesian modeling more attractive and transparent for practicing statisticians (especially non-Bayesians) by even a tiny fraction, we will consider it a success.