Abstract
Monte Carlo is famous for accepting model extensions and model refinements up to infinite dimension. However, this powerful incremental design is based on a premise which has severely limited its application so far: a state variable can only be recursively defined as a function of underlying state variables if this function is linear. Here we show that this premise can be alleviated by projecting nonlinearities onto a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles, and concentrated solar power plant production, we prove the real-world usability of this advance in four test cases which were previously regarded as impracticable using Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to acute problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise on model refinement or system complexity, and convergence rates remain independent of dimension.
Introduction
The standard Monte Carlo (MC) method is a technique to predict a physical observable by numerically estimating a statistical expectation over a multidimensional configuration space^{1}. The reason why this method is so popular in all fields of scientific research is its intuitive nature. In general, simulation tools are designed in direct relation to the physical phenomena present in each discipline, and later refinements are gradual and straightforward. Model refinements merely extend sampling to other appropriate dimensions. The method is nonetheless mathematically rigorous: specialists specify observables that are implicitly translated into integral quantities which are estimated using random sampling in each direction of the configuration space. This statistical approach is highly powerful because the algorithm can be designed directly from the description of the system, whether it is deterministic or not, with no reworking or approximation.
Let us illustrate how MC is used in engineering with a typical example: the optimal design of a concentrated solar plant^{2} (see Fig. 1a). The power collected by the central receiver results from all the rays of sunlight that reach it after reflection by heliostats, so it depends on the complex geometry of the heliostats. Moreover, the heliostats change their orientation to follow the sun’s position, so they can mask one another at certain times of the day. To estimate by MC the received power at one moment of interest, i.e. for a given geometry of the heliostats: choose an optical path among those that link the sun to the central receiver via a heliostat; check whether this path is shadowed or blocked by another heliostat; and retain a Monte Carlo weight equal to 0 or 1 depending on transmission success. Let X be the random variable denoting transmission success. The collected fraction of the available sun power is then the expectation \({ {\mathcal E} }_{{\bf{X}}}({\bf{X}})\) of X, and can be evaluated with no bias as the average of such weights over a large number of sampled paths.
This approach robustly complies with expanded descriptions of the physical observable to be addressed. For instance, the fraction of the available sun power collected on average over the entire lifetime of the solar plant (typically 30 years) can be predicted as the expectation over time of \({\mathcal E}_{{\bf{X}}}({\bf{X}})\), which varies with time. Denoting \({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\) the collected fraction at random time Y within the 30 years, the time-averaged fraction is given by \({\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}}))={\mathcal E}_{{\bf{Y}},{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\). The basic algorithm above can then be encapsulated within time sampling: first choose a date for Y, then pick a path at that date for X|Y. Finally, estimate \({\mathcal E}_{{\bf{Y}},{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\) by computing the average transmission success over all combined pairs (date, path). Meanwhile, sun power fluctuations can be accounted for by estimating the atmospheric transmission at each chosen date. The choice of the statistical viewpoint thus enables us to incorporate into one single statistical question as many elements as necessary: the geometrical complexity of the heliostats^{3}, the daily course of the sun, and seasonal-scale as well as hourly-scale weather fluctuations^{4}. Remarkably, the latter question is nearly as simple to address as the estimation of the power collected at one single date: the algorithmic design can map the full conceptual description, yet computational costs are hardly affected. By contrast, deterministic approaches would translate into impractical computation times or require simplified and approximate descriptions, so MC has become the only practical solution in many engineering contexts of this type. Having become standard practice, MC has prompted numerous theoretical developments^{5,6,7,8}.
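The combined (date, path) sampling just described can be sketched in a few lines. As a minimal illustration (with hypothetical distributions of our own choosing standing in for the solar-plant physics: Y uniform on [0, 1] and X|Y uniform on [0, Y], so that the nested expectation is exactly 1/4):

```python
import random

def combined_estimate(m, rng):
    """Estimate the linear nested expectation E_Y[ E(X|Y) ] with a single
    sample of X per sampled Y: the two statistics are sampled together."""
    total = 0.0
    for _ in range(m):
        y = rng.random()        # sample a "date"  Y ~ Uniform(0, 1)
        x = rng.random() * y    # sample one "path" X | Y=y ~ Uniform(0, y)
        total += x              # Monte Carlo weight
    return total / m            # converges to E[Y] * 1/2 = 1/4

est = combined_estimate(200_000, random.Random(0))
# est is close to 0.25
```

The point of the sketch is that one inner sample per outer sample suffices: because the nesting is linear, the pair (date, path) is a single point of the combined configuration space.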
Nevertheless, MC has so far not been able to handle every question. In fact, it was identified early on that “the extension of Monte Carlo methods to nonlinear processes may be impossible”^{9} and it is a prevalent opinion nowadays that “Monte Carlo methods are not generally effective for nonlinear problems, mainly because expectations are linear in character”^{10}, so that “a nonlinear problem must usually be linearized in order to use the Monte Carlo technique”^{11}. We are aware of only one attempt so far to bypass this failing: the recent proposal by the applied mathematics community^{1,12,13,14} to use branching processes^{15} to solve Fredholm-type integral equations with polynomial nonlinearity.
Unfortunately, most real-world problems are nonlinear. Indeed, if the question were now to evaluate the final return on investment of the solar plant, namely how much electrical power it would deliver over its lifetime, standard MC would fail, because the instantaneous conversion efficiency from collected solar power to electrical power is not linear. Let us consider, as a toy example, a basic nonlinear case where the electrical power would be proportional to the square of the instantaneous collected solar power \({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\) at date Y. In Monte Carlo terms, the question would then be to estimate \({\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{2})\) over the plant’s lifetime. In this case, the optical and temporal expectations can no longer be combined, because it would be wrong to first estimate, as above, the total solar power collected over its lifetime, and then apply the conversion efficiency at the end (basically, \({\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{2})\ne {({\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})))}^{2}\), in the same way as a^{2} + b^{2} ≠ (a + b)^{2}). Instead, we would have to sample dates (say M dates, millions over 30 years), estimate the solar power collected at each date by averaging transmission successes over numerous optical paths (say N paths, millions for each date), apply a nonlinear conversion to the result at that date, and then average over all dates^{16}. Doing so, MC would now require M × N samples, and even worse, further levels of complexity (each adding a nonlinearity to the problem) would similarly multiply the computation time. Moreover, the result would be biased due to the finite sampling sizes of the innermost dimensions. In short, MC’s distinctive features are no longer available, and exact lifetime integration appears impossible.
Bearing in mind our earlier theoretical work on MC integral formulations^{2}, we have found a way to bypass this obstacle for a large class of nonlinear problems, based on the very statistical nature of MC. In the case of our toy example, we use the fact that:

\({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{2}={\mathcal E}_{{{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}}}({{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}})\)

where X_{1} and X_{2} are two independent variables, identically distributed as X (see Methods). Translated into a sampling algorithm, the solution is now to sample optical paths in pairs (X_{1}, X_{2})|Y (instead of millions) at each sampled date, and then to retain the product X_{1}X_{2}|Y of their transmission successes. The optical and temporal statistics can then actually be sampled together, and yield the unbiased result with no combinatorial explosion. This reformulation can be generalised to any nonlinearity of polynomial shape. First, monomials of any degree can indeed be estimated using the same sampling property as that used above for n = 2:

\({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{n}={\mathcal E}_{{{\bf{X}}}_{1}\ldots {{\bf{X}}}_{n}|{\bf{Y}}}({\prod }_{i=1}^{n}\,{{\bf{X}}}_{i}|{\bf{Y}})\)

where the X_{i} are n independent random variables, identically distributed as X. For any monomial of degree n, the expectation can then be computed by sampling series of n independent realisations of X|Y, and averaging the series products. The linear case, solved by standard MC, corresponds to n = 1. Secondly, since polynomials are simply linear combinations of monomials, the expectation of any polynomial function of \({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\) can be translated into a Monte Carlo algorithm, first sampling a degree in the polynomial, and then sampling as many independent realisations of X|Y as this random degree (see Methods). For a polynomial function of degree n, the corresponding Nonlinear Monte Carlo (NLMC) algorithm is then:
- pick a sample y of Y;
- choose a monomial degree value d ≤ n;
- draw d independent samples of X|Y = y and retain their product;
- repeat this sampling procedure and compute the estimate as the average of the retained products.
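The pair-sampling identity behind this algorithm can be checked numerically. As a minimal sketch (with illustrative distributions of our own choosing: Y uniform on [0, 1] and X|Y uniform on [0, Y], so that \({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})={\bf{Y}}/2\) and the target expectation \({\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{2})\) equals 1/12):

```python
import random

def pair_product_estimate(m, rng):
    """Estimate E_Y[ E(X|Y)^2 ] by sampling X in pairs:
    for each y, draw two independent x's and keep their product."""
    total = 0.0
    for _ in range(m):
        y = rng.random()         # Y ~ Uniform(0, 1)
        x1 = rng.random() * y    # X1 | Y=y ~ Uniform(0, y)
        x2 = rng.random() * y    # X2 | Y=y, independent of X1
        total += x1 * x2         # pair product; its mean is E_Y[(y/2)^2]
    return total / m

est = pair_product_estimate(200_000, random.Random(0))
# est is close to 1/12 ≈ 0.0833
```

Note that the cost per outer sample is two inner samples, not millions: the square of a conditional expectation is estimated without ever estimating the conditional expectation itself.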
Moreover, if polynomial forms of any dimension are now solvable with no approximation, so is the projection of any nonlinear function onto a polynomial basis of any dimension, even of infinite dimension if required (full details of using the Taylor expansion are given in Methods). As a result, any hierarchy of nested statistical processes that combine nonlinearly can now, in theory, be exactly addressed within the Monte Carlo framework. The deep rationale of the proposed algorithm is therefore to transform a nonlinear process into a formally equivalent linear infinite-dimension process, and then use the inherent capability of Monte Carlo to address expectations over domains of infinite dimension.
To the best of our knowledge, this analysis has never before been performed. However, it has major practical consequences for real-world problems, provided the polynomial sampling, which is the price to be paid for tackling nonlinearities exactly, remains tractable. For instance, let us go back to our solar power plant example, and now use the actual expression for the conversion rate and its Taylor expansion: for each date, once a sun position and climate conditions have been fixed, we would have to pick a random number of independent optical paths (instead of one optical path in the linear case), keep the product of their transmission successes, and finally calculate the average of many such products. Doing so, it becomes possible to integrate hourly solar input fluctuations over 30 years in the full geometry of a kilometre-wide heliostat field in order to optimise the nonlinear solar-to-electric conversion over the plant lifetime (Fig. 1a). The same line of thought can be used to predict wave scattering by a tiny complex-shaped scatterer^{17} such as a helicoidal cyanobacterium (Fig. 1b). The biomass production example (Fig. 1c), where incoming light favours the photosynthetic growth that in turn blocks the incoming light, illustrates how our method also handles nonlinear feedback^{18}. Finally, with the estimation of Earth’s radiative cooling (Fig. 1d), we reproduce quite classic results, yet with a purely statistical approach: by sampling directly the state transitions of greenhouse gases, we avoid costly deterministic computations that the standard linear Monte Carlo approach requires in order to bypass the nonlinearity of the Beer extinction law^{19}.
In each of the four cases, it appears that the additional computations are affordable using only ordinary computing power (the complete physical descriptions of the four problems, the nonlinearities involved and their translation into NLMC can be found in their respective Extended Data Figures and Supplemental Information, Solar Plant: SI1; Complex-shaped Scatterer: SI2; Biomass production: SI3; Earth radiative cooling: SI4).
For these four real-world simulation examples, we can therefore retain that the variance of the proposed statistical estimate was entirely satisfactory. Is that a general feature? Can we feel confident when applying this simulation strategy to any new nonlinear problem? More generally speaking, what do we claim about the status of the present research? Essentially, we only argue that the general proposition of the present paper is immediately available for an ensemble of practical applications. Indeed, these four simulation examples are representative of a quite wide ensemble of physics and engineering practices, and the corresponding implementations are now used in practice by the corresponding research communities^{17,19,20,21}. Moreover, implementation only required up-to-date knowledge of Monte Carlo techniques: the probability sets were selected using nothing more than very standard importance-sampling reasoning (see Methods, SI1 and SI3). Outside these experiments, we did not explore in any systematic manner the statistical convergence difficulties that could be specifically associated with the proposition. Although we have not yet encountered such difficulties, we can already point out a potential source of variance related to the choice of the fixed point x_{0} around which the nonlinear function is Taylor expanded (see Methods).
From a theoretical point of view, in the four cases exposed above, the model is directly enunciated in statistical terms, defining two random variables X and Y from the start. More broadly, standard MC practice can also start from a deterministic description (see Methods), most commonly from a linear partial differential equation (PDE). The formal equivalence between the solution of a linear PDE and the expectation of a random variable has long been established^{22}. Indeed, PDE-to-MC translations are essential to nanoscale mechanics (Quantum Monte Carlo^{23}) or nuclear sciences. NLMC allows such translations for nonlinear PDEs.
As an illustration of the groundbreaking nature of our study, we address a prominent example of a nonlinear PDE in statistical physics, the Boltzmann equation, which governs the spatiotemporal density of interacting particles in a dilute gas (full details in SI5). The corresponding physics is easy to visualise: a particle simply follows its ballistic flight until it collides with another particle. The collisions are considered as instantaneous and only modify the two particle velocities. The equation for the variation in particle density in phase space (position, velocity) is nonlinear because the collision rate depends on the density itself. In order to project this nonlinearity onto the proper polynomial basis of infinite dimension, this PDE is first translated into its Fredholm integral counterpart (a step reminiscent of the aforementioned Dimov proposition^{1}). This Fredholm integral expresses the density in phase space at some location for some velocity at some time, as if putting a probe into spacetime. It is estimated by Monte Carlo, tracking the dynamics backwards in time up to the initial condition (or boundary conditions). Importantly, such a probe estimation does not require the exhaustive resolution of the whole field at previous times: as in standard backwards MC algorithms for solving linear transport (e.g. simulating an image by tracking photon paths backward, from receiver to source^{24,25,26}), the information about previous states of the field is reconstructed along each path only where and when it is required^{27}. Here, the contrast with linear MC is that nonlinearity due to collisions translates into branching paths.
This extension deals very efficiently with extremely rare events because it preserves an essential feature of MC: by avoiding time/space/velocity discretisation^{28,29,30}, very low densities can be estimated with no bias, and the only source of uncertainty is the finite number of sampled events (i.e. the confidence interval around the estimated density). As a test, we consider a case for which analytical solutions have been published: Krook’s early analysis of the distribution of speeds in extremely out-of-equilibrium conditions^{31,32}. Krook’s analysis was outstanding because it provided an analytical solution to a problem which looked impossible to solve numerically: events with the greatest consequences, namely the particles with the highest energies (i.e. high-speed particles, of tremendous importance in nuclear chemistry), lie far out in the tail of the speed distribution and have a very low probability of occurrence (rare events). Using our NLMC design, the fractions of particles which have a kinetic energy higher than 10^{6} times the average value, and which correspond to a fraction as low as 10^{−11} of the total, can now be quantified as accurately as desired, and perfectly fit the analytical solution (Fig. 2a).
Having been validated in Krook’s case, this extension opens the way to solving systems for which no analytical solutions are available. As an example, we now consider a fully spatialised system in which the particles are confined by an outside harmonic potential, leading to a so-called breathing mode of the gas density. Such a solution to the Boltzmann equation was identified early on by Boltzmann himself^{33}, but has recently been revisited and generalised in the context of shortcut-to-adiabaticity techniques for classical gases^{34}. Exact solutions are available only under the constraint that the gas is at local equilibrium, in which case the density displays a permanent oscillation. Here again, these analytical solutions are exactly recovered. Moreover, NLMC enables us to go beyond this constraint and to explore the gas behaviour when the local equilibrium constraint is alleviated: starting from a state far from local equilibrium, it is now possible to estimate how fast the velocity redistribution induced by collisions actually dampens the oscillation (Fig. 2b).
Conclusions
From now on, the Monte Carlo method is no longer restricted to linear problems. The five examples exposed above were worked out by teams comprising specialists in the Monte Carlo method and specialists in the physical problem under consideration. Through their complete description, we offer readers all the details to implement their own applications. As a guideline, the first step is to formulate the physical observable under its expectation form, including the nonlinearities and integrating all levels of complexity. The second step is to reformulate this expectation into a formulation compliant with the standard Monte Carlo method, according to the type of nonlinearity. For polynomial nonlinearities, use i.i.d. series products. For other differentiable forms, use a Taylor expansion around an upper bound of the innermost random variable in order to regain a polynomial form. Using this MC-compliant formulation, every advanced MC technique can then be applied: parallel implementation, complex geometry, null collisions, zero variance, control variates, importance sampling, sensitivity analysis, and so on. As illustrated by the variety of our seminal examples, this guideline covers a large set of nonlinear academic and real-world problems.
Methods
Basics of Monte Carlo Methods
Let us estimate E = 1 + 4 by repeatedly tossing a (fair) coin. The tossing process is described by a random variable R ∈ {0, 1} which takes the value 0 for heads (probability \({P}_{R}(0)=\tfrac{1}{2}\)) and 1 for tails (probability \({P}_{R}(1)=\tfrac{1}{2}\)).
Now, to estimate any process output (e.g. E = 1 + 4), we can assign arbitrary weights w(R) to the values {0, 1} in order to write E as an expectation of the weighted process, following:

\(E={\mathcal E}_{R}(w(R))={P}_{R}(0)\,w(0)+{P}_{R}(1)\,w(1)\)   (3)

with \(w(0)=\tfrac{1}{{P}_{R}(0)}=2\) and \(w(1)=\tfrac{4}{{P}_{R}(1)}=8\), and where \({\mathcal E}_{R}\) denotes the expectation with respect to R. Using the results r_{1} … r_{N} of N successive tosses (independent realisations of R), we can then estimate \(E={\mathcal E}_{R}(w(R))\) from the weighted average of the toss results, \(\frac{1}{N}\,{\sum }_{i=1}^{N}\,w({r}_{i})\): E = 5 is indeed the average of Monte Carlo weights that take the values 2 and 8 with equal probabilities.
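The coin-toss estimator is two lines of code; a minimal sketch:

```python
import random

def coin_estimate(m, rng):
    """Estimate E = 1 + 4 = 5 as the expectation of weighted coin tosses:
    heads (r=0) carries weight w(0)=2, tails (r=1) carries weight w(1)=8."""
    w = {0: 2.0, 1: 8.0}  # w(r) = term_r / P_R(r), with P_R = 1/2 each
    return sum(w[rng.randint(0, 1)] for _ in range(m)) / m

est = coin_estimate(100_000, random.Random(0))
# est is close to 5
```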
Such an approach is at the base of Monte Carlo techniques: define the weights according to the problem to be solved, sample the process repeatedly, and take the average. Depending on the physical description of the value to be estimated, this approach still holds for an infinite number of terms and can also be extended to an integral formulation using continuous random variables:

\(E={\mathcal E}_{{\bf{Y}}}(w({\bf{Y}}))={\int }_{{{\mathscr{D}}}_{{\bf{Y}}}}\,{p}_{{\bf{Y}}}({\bf{y}})\,w({\bf{y}})\,d{\bf{y}}\)

which can be estimated by \(\frac{1}{N}\,{\sum }_{i=1}^{N}\,w({{\bf{y}}}_{i})\), where the y_{i} are N realisations of the random variable Y with probability density function p_{Y} and domain of definition \({{\mathscr{D}}}_{{\bf{Y}}}\).
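As a sketch of this continuous case (with an illustrative choice of our own: w(y) = y² and Y uniform on [0, 1] with p_Y = 1, so that E = ∫₀¹ y² dy = 1/3):

```python
import random

def integral_estimate(m, rng):
    """Estimate E = ∫_0^1 y^2 dy = 1/3 as the average of w(y) = y^2
    over realisations of Y ~ Uniform(0, 1)."""
    return sum(rng.random() ** 2 for _ in range(m)) / m

est = integral_estimate(100_000, random.Random(0))
# est is close to 1/3
```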
Basics of Nonlinear Monte Carlo Methods
In order to estimate

\(E={\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{2})\)

we introduce two independent variables X_{1} and X_{2}, identically distributed as X (still conditioned by the same Y):

\({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{2}={\mathcal E}_{{{\bf{X}}}_{1}|{\bf{Y}}}({{\bf{X}}}_{1}|{\bf{Y}})\,{\mathcal E}_{{{\bf{X}}}_{2}|{\bf{Y}}}({{\bf{X}}}_{2}|{\bf{Y}})\)

Since X_{1} and X_{2} are independent, and conditionally independent given Y:

\({\mathcal E}_{{{\bf{X}}}_{1}|{\bf{Y}}}({{\bf{X}}}_{1}|{\bf{Y}})\,{\mathcal E}_{{{\bf{X}}}_{2}|{\bf{Y}}}({{\bf{X}}}_{2}|{\bf{Y}})={\mathcal E}_{{{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}}}({{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}})\)

Hence

\(E={\mathcal E}_{{\bf{Y}}}({\mathcal E}_{{{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}}}({{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}}))={\mathcal E}_{{\bf{Y}},{{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}}}({{\bf{X}}}_{1}{{\bf{X}}}_{2}|{\bf{Y}})\)

The same demonstration can be made to establish that:

\({\mathcal E}_{{\bf{X}}|{\bf{Y}}}{({\bf{X}}|{\bf{Y}})}^{n}={\mathcal E}_{{{\bf{X}}}_{1}\ldots {{\bf{X}}}_{n}|{\bf{Y}}}({\prod }_{i=1}^{n}\,{{\bf{X}}}_{i}|{\bf{Y}})\)
Let us now assume that the weights associated with the random variable Y are described by a nonlinear function f(Z_{Y}) of the conditional expectation \({{\bf{Z}}}_{{\bf{Y}}}={\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\). The problem then becomes to compute:

\(E={\mathcal E}_{{\bf{Y}}}(f({{\bf{Z}}}_{{\bf{Y}}}))\)   (10)

Such a nonlinearity can be treated with no approximation using a projection on an infinite basis. In all the examples presented in this article, we have used a Taylor polynomial basis, which means that f(x) is expanded around x_{0}:

\(f(x)={\sum }_{n=0}^{+\infty }\,\frac{{\partial }^{n}f({x}_{0})}{n!}\,{(x-{x}_{0})}^{n}\)

We note that both x_{0} and f can be conditioned by Y. Now, following the same line as explained above in Basics of Monte Carlo Methods, we regard the sum in the expansion of f as an expectation, writing:

\(f(x)={\mathcal E}_{N}(\frac{{\partial }^{N}f({x}_{0})}{N!\,{P}_{N}(N)}\,{(x-{x}_{0})}^{N})\)

where the random variable N (of probability law P_{N}) is the degree of one monomial in the Taylor polynomial. This step only requires us to define one infinite set of probabilities (instead of two in Eq. 3), with \({\sum }_{n=0}^{+\infty }\,{P}_{N}(n)=1\).
Equation 10 can then be rewritten as:

\(E={\mathcal E}_{{\bf{Y}}}({\mathcal E}_{N}(\frac{{\partial }^{N}f({x}_{0})}{N!\,{P}_{N}(N)}\,{({{\bf{Z}}}_{{\bf{Y}}}-{x}_{0})}^{N}))\)

Defining independent and identically distributed random variables X_{q}, with the same distribution as X, the innermost term rewrites

\({({{\bf{Z}}}_{{\bf{Y}}}-{x}_{0})}^{n}={({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})-{x}_{0})}^{n}\)

or, equivalently:

\({({{\bf{Z}}}_{{\bf{Y}}}-{x}_{0})}^{n}={\prod }_{q=1}^{n}\,{\mathcal E}_{{{\bf{X}}}_{q}|{\bf{Y}}}({{\bf{X}}}_{q}|{\bf{Y}}-{x}_{0})\)

Since the variables X_{q}|Y are independent in the innermost term, we have:

\({({{\bf{Z}}}_{{\bf{Y}}}-{x}_{0})}^{n}={\mathcal E}({\prod }_{q=1}^{n}\,({{\bf{X}}}_{q}|{\bf{Y}}-{x}_{0}))\)

so that:

\(E={\mathcal E}_{{\bf{Y}}}({\mathcal E}_{N}(\frac{{\partial }^{N}f({x}_{0})}{N!\,{P}_{N}(N)}\,{\mathcal E}({\prod }_{q=1}^{N}\,({{\bf{X}}}_{q}|{\bf{Y}}-{x}_{0}))))\)

and we finally have:

\(E={\mathcal E}_{{\bf{Y}},N,{{\bf{X}}}_{1}|{\bf{Y}},\ldots ,{{\bf{X}}}_{N}|{\bf{Y}}}(\frac{{\partial }^{N}f({x}_{0})}{N!\,{P}_{N}(N)}\,{\prod }_{q=1}^{N}\,({{\bf{X}}}_{q}|{\bf{Y}}-{x}_{0}))\)

which can be read as:

\(E={\mathcal E}(\hat{w})\)

with

\(\hat{w}=\frac{{\partial }^{n}f({x}_{0})}{n!\,{P}_{N}(n)}\,{\prod }_{q=1}^{n}\,({x}_{q}-{x}_{0})\)   (20)

With the notation above, \({\prod }_{q=1}^{0}\,({{\bf{X}}}_{q}|{\bf{Y}}-{x}_{0})=1\).
The translation into a Monte Carlo algorithm then follows:
- sample a realisation y of Y (and set x_{0} and f accordingly if they depend on y);
- sample a realisation n of N;
- sample n independent realisations x_{q=1,…,n} of X conditioned by y;
- keep \(\hat{w}=\frac{{\partial }^{n}f({x}_{0})}{n!\,{P}_{N}(n)}\,{\prod }_{q=1}^{n}\,({x}_{q}-{x}_{0})\);
and estimate E as the average of many realisations \(\hat{w}\).
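The sum-as-expectation step can be checked on a plain function before any nested expectation is involved. As a minimal sketch (assuming, for illustration only, f(x) = eˣ expanded around x₀ = 0 and the hypothetical choice P_N(n) = (1/2)^{n+1}):

```python
import math
import random

def randomized_taylor(x, m, rng):
    """Estimate f(x) = exp(x) by sampling a Taylor degree N with
    P_N(n) = (1/2)^(n+1) and weighting the monomial x^N/N! by 1/P_N(N)."""
    total = 0.0
    for _ in range(m):
        n = 0                      # sample N: geometric with p = 1/2
        while rng.random() > 0.5:
            n += 1
        # w = (d^n f(0)/n!) * x^n / P_N(n), with d^n exp(0) = 1
        total += (x ** n / math.factorial(n)) * 2.0 ** (n + 1)
    return total / m

est = randomized_taylor(1.0, 200_000, random.Random(0))
# est is close to e ≈ 2.718
```

The geometric law here is only one admissible choice of P_N; the paper's importance-sampling discussion below shows how this choice drives both variance and cost.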
Implementation example
Let us illustrate the choice of the discrete distribution P on N with an implementation example. We take Y uniformly distributed over [0, 1], X|Y uniformly distributed over [0, Y] and f(x) = 1/(1 + x) (f corresponds to the photobioreactor real-world example in Fig. 1c, with C = 1, α = 0, β = −1, K_{r} = 1). Equation 10 becomes

\(E={\mathcal E}_{{\bf{Y}}}(f({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})))={\int }_{0}^{1}\,\frac{dy}{1+y/2}\)   (21)

Its analytical solution is E = 2 ln(3/2). Injecting the nth derivative \({\partial }^{n}f({x}_{0})=\frac{n!\,{(-1)}^{n}}{{(1+{x}_{0})}^{n+1}}\) into Equation 20 leads to

\(E={\mathcal E}_{{\bf{Y}}}({\mathcal E}_{N}(\frac{{(-1)}^{N}}{{P}_{N}(N)\,{(1+{x}_{0})}^{N+1}}\,{\mathcal E}({\prod }_{q=1}^{N}\,({{\bf{X}}}_{q}|{\bf{Y}}-{x}_{0}))))\)   (22)

that can be reformulated as

\(E={\mathcal E}_{{\bf{Y}}}({\mathcal E}_{N}(\frac{{x}_{0}^{N}}{{P}_{N}(N)\,{(1+{x}_{0})}^{N+1}}\,{\mathcal E}({\prod }_{q=1}^{N}\,(1-\frac{{{\bf{X}}}_{q}|{\bf{Y}}}{{x}_{0}}))))\)   (23)

Using standard importance-sampling reasoning, we choose the set of probabilities that cancels the term \(\frac{{x}_{0}^{N}}{{(1+{x}_{0})}^{N+1}}\) in the estimator:

\({P}_{N}(n)=\frac{{x}_{0}^{n}}{{(1+{x}_{0})}^{n+1}}\)   (24)

with

\({\sum }_{n=0}^{+\infty }\,{P}_{N}(n)=1\)
The NLMC algorithm is:
- sample a realisation y of Y;
- sample a realisation n of N according to the discrete distribution in Equation 24;
- sample n independent realisations x_{q=1,…,n} of X uniformly distributed over [0, y];
- keep \(\hat{w}={\prod }_{q=1}^{n}\,(1-\frac{{x}_{q}}{{x}_{0}})\);
and estimate E as the average of M realisations \(\hat{w}\).
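This algorithm is short enough to be sketched in full. The following is a minimal illustrative implementation (not the authors' production code) of the example above; the exact value is 2 ln(3/2) ≈ 0.811:

```python
import math
import random

def nlmc_estimate(m, x0, rng):
    """NLMC estimate of E = E_Y[ 1/(1 + E(X|Y)) ] with Y ~ Uniform(0,1)
    and X|Y ~ Uniform(0,Y); exact value 2*ln(3/2). The Taylor degree N
    follows P_N(n) = x0^n/(1+x0)^(n+1), a geometric law with mean x0."""
    p_stop = 1.0 / (1.0 + x0)       # geometric "stop" probability
    total = 0.0
    for _ in range(m):
        y = rng.random()            # realisation of Y
        n = 0                       # sample N from P_N
        while rng.random() > p_stop:
            n += 1
        w = 1.0
        for _ in range(n):          # n realisations of X | Y = y
            x = rng.random() * y
            w *= 1.0 - x / x0       # factor (1 - x_q/x0)
        total += w
    return total / m

est = nlmc_estimate(200_000, 0.5, random.Random(0))
# est is close to 2*ln(3/2) ≈ 0.811
```

A quick consistency check: averaging the geometric series over n for a fixed y gives back 1/(1 + y/2), so the estimator is unbiased whatever x₀ > 0, while x₀ only tunes variance and cost.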
We define the computational cost C of this algorithm as the total number of random generations required to achieve a 1% standard deviation on the estimation. Each realisation of the algorithm includes 1 random generation y of Y, 1 random generation n of N, and n random generations of X, and it takes M_{1%} realisations of the algorithm to achieve a standard deviation of 1%. Overall,

\(C={M}_{1 \% }\,(2+{\mathcal E}(N))={M}_{1 \% }\,(2+{x}_{0})\)

since \({\mathcal E}(N)={x}_{0}\) with the discrete distribution in Equation 24. Figure 3 shows the values of M_{1%} and C recorded in simulations, as a function of x_{0}. The choice of x_{0} alone controls both the statistical convergence (i.e. M_{1%}) and the computational cost (through the discrete distribution P on N). We observe a trade-off between statistical convergence and computational cost. For low values of x_{0}, only a few realisations of X are needed, since the discrete distribution on N decreases rapidly with n, but a large number of realisations of the algorithm are required for the estimation (i.e. \({\mathcal E}(N)\) is small but M_{1%} is large). Conversely, for larger values of x_{0}, the estimator converges rapidly, but the average number of X random generations per Monte Carlo realisation is increased (i.e. M_{1%} is small but \({\mathcal E}(N)\) is large). In between, we observe an optimal choice of x_{0}.
Comparison with the naive plug-in estimator and convergence issues
In the previous implementation example solving Equation 21, a naive plug-in estimator could be constructed as^{16}:

\(E\simeq {\mathcal E}_{{\bf{Y}}}({\mathcal E}(f(\frac{1}{K}\,{\sum }_{q=1}^{K}\,{{\bf{X}}}_{q}|{\bf{Y}})))\)

leading to the following Monte Carlo algorithm:
- sample a realisation y of Y;
- sample K independent realisations x_{q=1,…,K} of X uniformly distributed over [0, y];
- keep \(\hat{w}=f(\frac{1}{K}\,{\sum }_{q=1}^{K}\,{x}_{q})\);
and estimate E as the average of M realisations \(\hat{w}\). With this algorithm, K must ensure that the bias of the estimator can be neglected. For that purpose, we choose the value K_{1%} that always gives an estimation of \({\mathcal E}_{{\bf{X}}|{\bf{Y}}}({\bf{X}}|{\bf{Y}})\) with a 1% standard deviation. Each realisation of the algorithm therefore includes 1 random generation for Y and K_{1%} generations for X, and it takes M_{1%} realisations of the algorithm to achieve a 1% standard deviation on the estimation of E. The computational cost of the naive algorithm is therefore M_{1%}(1 + K_{1%}). In the present example, K_{1%} = 3333, and we observed in numerical simulations that M_{1%} = 140: the cost is C = 466,760. Compared to the results in Fig. 3, even for this very simple example, where Y and X|Y have little variance, the computational cost of the naive plug-in algorithm is 100 times higher than that of the NLMC estimator. Nevertheless, this conclusion only stands for a reasonable choice of x_{0} (and therefore of P_{N}). Indeed, the computational cost of the NLMC estimator appears to rise towards infinity as x_{0} approaches 0 (see Equation 23 and Fig. 3): even the naive plug-in algorithm would then be a better choice. Although we have not analysed this observation further by theoretical means, we can at least retain that choosing x_{0} is likely to be an essential step of the present approach as far as computational costs are concerned.
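For comparison, the naive plug-in algorithm can be sketched with the same distributions; note that each realisation now costs 1 + K random generations:

```python
import random

def naive_plugin_estimate(m, k, rng):
    """Naive nested estimate of E = E_Y[ f(E(X|Y)) ] with f(x) = 1/(1+x):
    for each y, estimate the inner expectation with k samples of X|Y,
    then apply f; biased for finite k."""
    total = 0.0
    for _ in range(m):
        y = rng.random()                          # Y ~ Uniform(0, 1)
        inner = sum(rng.random() * y for _ in range(k)) / k
        total += 1.0 / (1.0 + inner)              # f applied to inner estimate
    return total / m

est = naive_plugin_estimate(2_000, 1_000, random.Random(0))
# est is close to 2*ln(3/2) ≈ 0.811, at the cost of m*(1+k) random draws
```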
References
 1.
Dimov, I. T. & McKee, S. Monte Carlo Methods for Applied Scientists (World Scientific Publishing, 2008).
 2.
Delatorre, J. et al. Monte Carlo advances and concentrated solar applications. Sol. Energy 103, 653–681 (2014).
 3.
Siala, F. M. F. & Elayeb, M. E. Mathematical formulation of a graphical method for a no-blocking heliostat field layout. Renew. Energy 23, 77–92 (2001).
 4.
Farges, O. et al. Lifetime integration using Monte Carlo Methods when optimizing the design of concentrated solar power plants. Sol. Energy 113, 57–62 (2015).
 5.
Assaraf, R. & Caffarel, M. Zerovariance principle for Monte Carlo algorithms. Phys. Rev. Lett. 83, 4682 (1999).
 6.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953).
 7.
Hammersley, J. M. & Handscomb, D. C. Monte Carlo Methods. (Springer, Netherlands, 1964).
 8.
Roger, M., Blanco, S., El Hafi, M. & Fournier, R. Monte Carlo Estimates of DomainDeformation Sensitivities. Phys. Rev. Lett. 95, 180601 (2005).
 9.
Curtiss, J. H. ‘Monte Carlo’ Methods for the Iteration of Linear Operators. J. Math. Phys. 32, 209–232 (1953).
 10.
Kalos, M. H. & Whitlock, P. A. Monte Carlo Methods. second ed., (Wiley–VCH, Weinheim, 2008).
 11.
Chatterjee, K., Roadcap, J. R. & Singh, S. A new Green’s function Monte Carlo algorithm for the solution of the twodimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during reentry. J. Comput. Phys. 276, 479–485 (2014).
 12.
Vajargah, B. F. & Moradi, M. Monte Carlo algorithms for solving Fredholm integral equations and Fredholm differential integral equations. Appl. Math. Sci. 1, 463–470 (2007).
 13.
Rasulov, A., Raimova, G. & Mascagni, M. Monte Carlo solution of Cauchy problem for a nonlinear parabolic equation. Math. Comput. Simulation 80, 1118–1123 (2008).
 14.
Gobet, E. MonteCarlo Methods and Stochastic Processes: From Linear to Nonlinear (CRC Press, 2016).
 15.
Skorokhod, A. V. Branching diffusion processes. Theory Probab. Appl. 9, 445–449 (1964).
 16.
Hong, L. J. & Juneja, S. Estimating the Mean of a Nonlinear Function of Conditional Expectation. Proceedings of the 2009 Winter Simulation Conference, Austin, Texas, 1223–1236 (2009).
 17.
Charon, J. et al. Monte Carlo implementation of Schiff’s approximation for estimating radiative properties of homogeneous, simpleshaped and optically soft particles: Application to photosynthetic microorganisms. J. Quant. Spectrosc. Radiat. Transf. 172, 3–23 (2016).
 18.
Cornet, J. F. Calculation of optimal design and ideal productivities of volumetrically lightened photobioreactors using the constructal approach. Chem. Eng. Sci. 65, 985–998 (2010).
 19.
Galtier, M. et al. Radiative transfer and spectroscopic databases: A linesampling Monte Carlo approach. J. Quant. Spectrosc. Radiat. Transf. 172, 83–97 (2016).
 20.
Dauchet, J. et al. Calculation of the radiative properties of photosynthetic microorganisms. J. Quant. Spectrosc. Radiat. Transfer. (2015).
 21.
Dauchet, J., Cornet, J.F., Gros, F., Roudet, M. & Dussap, C.G. Chapter One – Photobioreactor Modeling and Radiative Transfer Analysis for Engineering Purposes. Adv. Chem. Eng. 48, 1–106 (2016).
 22.
Kac, M. On some connections between probability theory and differential and integral equations. Proc. Second Berkeley Symp. Math. Statistics Probab. 189 (1951).
 23.
Corney, J. F. & Drummond, P. D. Gaussian quantum Monte Carlo methods for fermions and bosons. Phys. Rev. Lett. 93, 2–5 (2004).
 24.
Pharr, M. & Humphreys, G. Physically Based Rendering: from theory to implementation (Elsevier, 2010).
 25.
Case, K. M. Transfer problems and the reciprocity principle. Rev. Mod. Phys. 29, 651 (1957).
 26.
Collins, D. G., Blättner, W. G., Wells, M. B. & Horak, H. G. Backward Monte Carlo calculations of the polarization characteristics of the radiation emerging from sphericalshell atmospheres. Appl. Opt. 11, 2684–2696 (1972).
 27.
Galtier, M. et al. Integral formulation of nullcollision Monte Carlo algorithms. J. Quant. Spectrosc. Radiat. Transf. 125, 57–68 (2013).
 28.
Wagner, W. Stochastic particle methods and approximation of the Boltzmann equation. Math. Comput. Simul. 38, 211–216 (1995).
 29.
Rjasanow, S. A Stochastic Weighted Particle Method for the Boltzmann Equation. J. Comput. Phys. 124, 243–253 (1996).
 30.
Rjasanow, S. & Wagner, W. Simulation of rare events by the stochastic weighted particle method for the Boltzmann equation. Math. Comput. Model. 33, 907–926 (2001).
 31.
Krook, M. & Wu, T. T. Formation of Maxwellian Tails. Phys. Rev. Lett. 36, 1107–1109 (1976).
 32.
Krook, M. & Wu, T. T. Exact solutions of the Boltzmann equation. Phys. Fluids 20, 1589–1595 (1977).
 33.
Boltzmann, L. In Wissenschaftliche Abhandlungen, edited by Hasenorl, F. Vol. II, p. 83 (J.A. Barth, Leipzig, 1909).
 34.
GuéryOdelin, D., Muga, J. G., RuizMontero, M. J. & Trizac, E. Nonequilibrium Solutions of the Boltzmann Equation under the Action of an External Force. Phys. Rev. Lett. 112, 180602 (2014).
Acknowledgements
The authors express their deep gratitude to Igor Roffiac for fruitful discussions on the Monte Carlo method. This work was sponsored by the French National Centre for Scientific Research (CNRS) through the PEPSJCJC OPTISOL_Mu program, by the French Agence Nationale de la Recherche (ANR) under grant ANR16CE010010 (project HighTune), by the Region Occitanie under grant CLE2016EDStar and by the French government researchprogram “Investissements d’avenir” through the LABEXs ANR10LABX1601 IMobS3 and ANR10LABX2201 SOLSTICE and the ATS program ALGUE of IDEX ANR11IDEX02 UNITI.
Author information
Contributions
All authors contributed extensively to the theoretical developments presented in this paper. Each author contributed to the practical applications according to his or her scientific expertise: J.D., M.E.H., V.E., R.F. and M.G. in Atmospheric sciences, J.D., S.B., C.Ca., M.E.H., V.E., O.F., R.F., M.G. and M.R. in Radiative Transfer, J.D., S.B., R.F., J.G., A.K. and S.W. in Complex Systems in Biology, J.D., J.J.B., S.B., C.Ca., M.E.H., V.E., O.F. and R.F. in Solar Energy, J.D., S.B., J.C., M.E.H. and R.F. in Electromagnetism and quantum mechanics, J.D., S.B., M.E.H., R.F., J.G., B.P. and G.T. in Statistical Physics. C.Co., V.E., V.F. and B.P. (www.mesostar.com) performed the numerical implementations. J.D., S.B., R.F. and J.G. wrote the paper.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Dauchet, J., Bezian, J., Blanco, S. et al. Addressing nonlinearities in Monte Carlo. Sci Rep 8, 13302 (2018). https://doi.org/10.1038/s41598018315744