Taming hyperparameter tuning in continuous normalizing flows using the JKO scheme

A normalizing flow (NF) is a mapping that transforms a chosen probability distribution to a normal distribution. Such flows are a common technique used for data generation and density estimation in machine learning and data science. The density estimate obtained with a NF requires a change of variables formula that involves the computation of the Jacobian determinant of the NF transformation. In order to tractably compute this determinant, continuous normalizing flows (CNF) estimate the mapping and its Jacobian determinant using a neural ODE. Optimal transport (OT) theory has been successfully used to assist in finding CNFs by formulating them as OT problems with a soft penalty for enforcing the standard normal distribution as a target measure. A drawback of OT-based CNFs is the addition of a hyperparameter, α, that controls the strength of the soft penalty and requires significant tuning. We present JKO-Flow, an algorithm to solve OT-based CNF without the need to tune α. This is achieved by integrating the OT CNF framework into a Wasserstein gradient flow framework, also known as the JKO scheme.
Instead of tuning α, we repeatedly solve the optimization problem for a fixed α, effectively performing a JKO update with a time-step α. Hence we obtain a "divide and conquer" algorithm by repeatedly solving simpler problems instead of solving a potentially harder problem with large α.


Introduction
A normalizing flow (NF) is a type of generative modeling technique that has shown great promise in applications arising in physics [40,7,10] as a general framework to construct probability densities for continuous random variables in high-dimensional spaces [47,33,52]. An NF provides a C^1-diffeomorphism f (i.e., a normalizing transformation) that transforms the density ρ0 of an initial distribution P0 to the density ρ1 of the standard multivariate normal distribution P1, hence the term "normalizing." Given such a mapping f, the density ρ0 can be recovered from the Gaussian density via the change of variables formula ρ0(x) = ρ1(f(x)) |det J_f(x)|, where J_f ∈ R^{d×d} is the Jacobian of f. Moreover, one can obtain samples with density ρ0 by pushing forward Gaussian samples via f^{-1}.

Remark 1. Throughout the paper we slightly abuse notation, using the same symbol for a probability distribution and its density function. Additionally, given a probability distribution P0 on R^d and a measurable mapping f : R^d → R^d, we define the pushforward distribution of P0 through f as (f♯P0)(B) = P0(f^{-1}(B)) for all Borel measurable B ⊆ R^d [67].
There are two classes of normalizing flows: finite and continuous. A finite flow is defined as a composition of a finite number of C^1-diffeomorphisms, f = f_N ∘ ⋯ ∘ f_1. To make finite flows computationally tractable, each f_i is chosen to have regularity properties such as a Jacobian with a tractable determinant; for example, J_{f_i} may have a triangular structure [48,5,73].
On the other hand, continuous normalizing flows (CNFs) estimate f using a neural ODE of the form [11] ∂_t z(x, t) = v_θ(z(x, t), t), z(x, 0) = x, where θ are the parameters of the neural ODE. In this case, f is defined as f(x) = z(x, T) (for simplicity, we suppress the dependence of z on θ). One of the main advantages of CNFs is that we can tractably estimate the log-determinant of the Jacobian using Jacobi's identity, which is commonly used in fluid mechanics (see, e.g., [67, p. 114]): ∂_t log |det ∇_x z(x, t)| = tr(∇_z v_θ(z(x, t), t)). This is computationally appealing, as one can replace the expensive determinant calculation by a more tractable trace computation of ∇_z v_θ(z(x, t), t). Importantly, no restrictions on ∇_z v_θ(z(x, t), t) (e.g., diagonal or triangular structure) are needed; thus, these Jacobians are also referred to as "free-form Jacobians" [23].
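To make Jacobi's identity concrete, here is a minimal sketch (not from the paper; the linear field A, the forward-Euler integrator, and all names are illustrative stand-ins for the neural network v_θ) showing that accumulating the trace of the velocity Jacobian along the trajectory recovers the log-determinant of the flow map without ever forming a determinant:

```python
import numpy as np

# Toy check of Jacobi's identity for a *linear* stand-in velocity field
# v(z, t) = A z. For this field the flow map is z(x, t) = expm(tA) x, so
# log|det grad_x z(x, T)| = T * tr(A), which the trace accumulation recovers.
A = np.array([[0.3, -0.2], [0.1, 0.4]])

def v(z, t):
    return A @ z  # stand-in for the neural network v_theta(z, t)

def flow_with_logdet(x, T=1.0, n_steps=1000):
    """Forward-Euler integration of the ODE and of d/dt log-det = tr(grad_z v)."""
    dt = T / n_steps
    z, logdet = x.astype(float), 0.0
    for k in range(n_steps):
        z = z + dt * v(z, k * dt)
        logdet += dt * np.trace(A)  # grad_z v = A for a linear field
    return z, logdet

z_T, logdet = flow_with_logdet(np.array([1.0, 2.0]))
print(logdet)  # approximately 0.7 = T * tr(A)
```

For a neural vector field the Jacobian trace varies along the trajectory, so the `np.trace(A)` line would instead evaluate tr(∇_z v_θ) at the current state, but the accumulation structure is identical.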
The goal in training a CNF is to find parameters θ such that f = z(·, T) leads to a good approximation of ρ1 or, assuming f is invertible, such that the pushforward of ρ1 through f^{-1} is a good approximation of ρ0 [52,48,47,23]. Indeed, let ρ̂0 be this pushforward density obtained with a CNF f; that is, ρ̂0 = f^{-1}♯ρ1. We then minimize the Kullback-Leibler (KL) divergence from ρ̂0 to ρ0, in which the log-determinant term ℓ(x, T) = log |det ∇z(x, T)| appears via the change of variables formula. Dropping the θ-independent term log ρ0 and using (2) and (3), this optimization problem reduces to the minimization problem (4) subject to the ODE constraints (5). The ODE (5) might be stiff for certain values of θ, leading to extremely long computation times. Indeed, the dependence of v on θ is highly nonlinear and might generate vector fields that lead to highly oscillatory trajectories with complex geometry. Some recent work leverages optimal transport theory to find the CNF [19,45]. In particular, a kinetic energy regularization term (among others) is added to the loss to "encourage straight trajectories" z(x, t). That is, the flow is trained by minimizing (6) instead of (4), subject to (5). The key insight in [19,45] is that (4) is an example of a degenerate OT problem with a soft terminal penalty and without a transportation cost. The first objective term in (6), given by the time integral, is the transportation cost, whereas α is a hyperparameter that balances the soft penalty against the transportation cost. Including this cost makes the problem well-posed by forcing the solution to be unique [68]. Additionally, it enforces straight trajectories so that (5) is not stiff. Indeed, [19,45] empirically demonstrate that including optimal transport theory leads to faster and more stable training of CNFs. Intuitively, we minimize both the KL divergence and the arclength of the trajectories.
Although incorporating optimal transport theory into CNFs has been very successful [45,19,71,74], two key challenges render them difficult to train. First, estimating the log-determinant in (4) via the trace in (5) is still computationally taxing, and commonly used methods rely on stochastic approximations [23,19], which add extra error. Second, including the kinetic energy regularization requires tuning the hyperparameter α. Indeed, if α is chosen too small in (6), the kinetic regularization term dominates the training process, and the optimal solution consists of not moving, i.e., f(x) = x. On the other hand, if α is chosen too large, we return to the original setting where the problem is ill-posed, i.e., there are infinitely many solutions. Finally, finding an "optimal" α is problem dependent and requires tuning on a case-by-case basis.

Our Contribution
We present JKO-Flow, an optimal transport-based algorithm for training CNFs without the need to tune the hyperparameter α in (6). Our approach also leverages fast numerical methods for exact trace estimation from the recently developed optimal transport flow (OT-Flow) [45,55]. The key idea is to integrate the OT-Flow approach into a Wasserstein gradient flow framework, also known as the Jordan-Kinderlehrer-Otto (JKO) scheme [30]. Rather than tuning the hyperparameter α (commonly done using a grid search), the idea is to simply pick any α and solve a sequence of "easier" OT problems that gradually approach the target distribution. Each solve is precisely a gradient descent in the space of distributions, a Wasserstein gradient descent, and the scheme provably converges to the desired distribution for all α > 0 [56]. Our experiments show that the proposed approach is effective in generating higher quality samples (and density estimates) and also allows us to reduce the number of parameters required to estimate the desired flow.
Our strategy is reminiscent of debiasing techniques commonly used in inverse problems. Indeed, the transportation cost that serves as a regularizer in (6) introduces a bias: the smaller α, the more bias is introduced (see, e.g., [66]), so good choices of α tend to be larger. One way of removing the bias, and the necessity of tuning the regularization strength, is to perform a sequence of Bregman iterations [46,9], also known as nonlinear proximal steps. Hence our approach reduces to debiasing via Bregman or proximal steps in the Wasserstein space. In the context of CNF training, Bregman iterations are advantageous due to the flexibility in the choice of α. Indeed, the resulting loss function is non-convex, and its optimization tends to get harder for large α. Thus, instead of solving one harder problem, we solve several "easier" problems.

Optimal Transport Background and Connections to CNFs
Denote by P2(R^d) the space of Borel probability measures on R^d with finite second-order moments, and let ρ0, ρ1 ∈ P2(R^d). The quadratic optimal transportation (OT) problem (which also defines the Wasserstein metric W2) is then formulated as W2²(ρ0, ρ1) = inf_{π ∈ Γ(ρ0, ρ1)} ∫ ‖x − y‖² dπ(x, y), (7) where Γ(ρ0, ρ1) is the set of probability measures π ∈ P(R^{2d}) with x- and y-marginals ρ0 and ρ1, respectively. Hence the cost of transporting a unit mass from x to y is ‖x − y‖², and one attempts to transport ρ0 to ρ1 as cheaply as possible. In (7), π represents a transportation plan, and π(x, y) is the mass being transported from x to y. One can prove that (P2(R^d), W2) is a complete separable metric space [67]. OT has recently become a very active research area in PDE, geometry, functional inequalities, economics, data science, and elsewhere, partly because it equips the space of probability measures with a (Riemannian) metric [67,68,50,60].
As observed in prior works, there are many similarities between OT and NFs [45,19,72,74]. This connection becomes more transparent when considering the dynamic formulation of (7). More precisely, the Benamou-Brenier formulation of the OT problem is given by [6] W2²(ρ0, ρ1) = inf_{(ρ,v)} ∫_0^1 ∫_{R^d} ‖v(x, t)‖² ρ(x, t) dx dt, subject to ∂_t ρ + ∇·(ρ v) = 0, ρ(·, 0) = ρ0, ρ(·, 1) = ρ1. (8) Hence, the OT problem can be formulated as the problem of flowing ρ0 to ρ1 with a velocity field v that achieves minimal kinetic energy. The optimal velocity field has several appealing properties. First, particles induced by the optimal flow travel in straight lines. Second, particles travel with constant speed. Moreover, under suitable conditions on ρ0 and ρ1, the optimal velocity field is unique [67].
Given a velocity field v, denote by z(x, t) the solution of the ODE ∂_t z(x, t) = v(z(x, t), t), z(x, 0) = x. Then, under suitable regularity conditions, the solution of the continuity equation ∂_t ρ + ∇·(ρ v) = 0 is given by ρ(·, t) = z(·, t)♯ρ0. Thus the optimization problem in (8) can be written over trajectories as (9). This problem is very similar to (4), with the following differences:
• the objective function in (4) does not have the kinetic energy of trajectories,
• the terminal constraint is imposed as a soft constraint in (4) and as a hard constraint in (9), and
• v in (4) is θ-dependent, whereas the formulation in (9) is in the non-parametric regime.
So the NF problem (4) can be thought of as an approximation to a degenerate transportation problem that lacks a transportation cost. Based on this insight, one can regularize (4) by adding the transportation cost and arrive at (6), or some closely related version of it [45,19,72,74]. It has been observed that the transportation cost (kinetic energy) regularization significantly improves the training of NFs.

JKO-Flow: Wasserstein Gradient Flows for CNFs
While the OT-based formulation of CNFs in (6) has been successful in some applications [45,19,72,74], a key difficulty arises in choosing how to balance the kinetic energy term and the KL divergence, i.e., in selecting α. This difficulty is typical of problems where constraints are imposed in a soft fashion. Standard training of CNFs typically involves tuning for a "large but hopefully stable enough" step size α so that the KL divergence term is sufficiently small after training.
To this end, we propose an approach that avoids the need to tune α by using the fact that the solution to (6) is an approximation to a backward Euler (or proximal point) algorithm when discretizing the Wasserstein gradient flow using the Jordan-Kinderlehrer-Otto (JKO) scheme [30]. The seminal work in [30] provides a gradient flow structure of the Fokker-Planck equation using an implicit time discretization. That is, given α > 0, the density at the k-th iteration, ρ^(k), and the terminal density ρ1, one finds

ρ^(k+1) = argmin_{ρ ∈ P2(R^d)} KL(ρ ‖ ρ1) + (1/(2α)) W2²(ρ, ρ^(k)) (10)

for k = 0, 1, ..., with ρ^(0) = ρ0. Here, α takes the role of a step size when applying a proximal point method to the KL divergence using the Wasserstein-2 metric, and {ρ^(k)} provably converges to ρ1 [30,56]. Hence, repeatedly solving (9) with the KL penalty acting as a soft constraint yields an arbitrarily accurate approximation of ρ1. In the parametric regime, each iteration takes the form (11). Thus we solve a sequence of problems (6), where the initial density of the current subproblem is given by the pushforward of the density generated in the previous subproblem. Importantly, our proposed approach does not require tuning α. Instead, we solve a sequence of subproblems that is guaranteed to converge to ρ1 [30] prior to the neural network parameterization; see Alg. 1.

Algorithm 1 Proposed Algorithm
  for k = 1, ..., K do
    Solve for θ_k using samples by solving (11)
    Update the distribution of the samples using v_{θ_k}
  end for
  Output: saved weights θ_1, ..., θ_K

While our proposed methodology can be used in tandem with any algorithm that solves (11), an important numerical aspect of our approach is to leverage fast computational methods that use exact trace estimation in (5); this approach is called OT-Flow [45]. Consequently, we avoid stochastic approximation methods for the trace, e.g., Hutchinson's estimator [4,29,66], as is typically done in CNF methods [23,19]. A surprising result is that our proposed method empirically shows improved performance even with fewer parameters (see Fig. 3).
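As a sanity check on the claim that the JKO iterates converge for any step size, the proximal update can be computed in closed form in a toy one-dimensional Gaussian setting. This is an illustration only, not the paper's Algorithm 1; the restriction to Gaussians, the starting point (m, s) = (3, 0.2), and the iteration counts are arbitrary choices, while the W2 and KL formulas for Gaussians are standard facts:

```python
import math

# Toy 1-D Gaussian illustration: JKO iterates for KL(. || N(0,1)) converge to
# N(0,1) for *any* step size alpha > 0. For rho = N(m, s^2),
#   W2^2(N(m1,s1^2), N(m2,s2^2)) = (m1 - m2)^2 + (s1 - s2)^2,
#   KL(N(m,s^2) || N(0,1))       = (m^2 + s^2 - 1)/2 - log s,
# so each step  argmin_rho KL(rho || N(0,1)) + W2^2(rho, rho_k)/(2 alpha)
# reduces to two scalar minimizations with closed-form solutions.
def jko_step(m, s, alpha):
    m_new = m / (1.0 + alpha)
    # stationarity in s: (1 + 1/alpha) s - 1/s = s_k / alpha  (quadratic in s)
    a, b = 1.0 + 1.0 / alpha, s / alpha
    s_new = (b + math.sqrt(b * b + 4.0 * a)) / (2.0 * a)
    return m_new, s_new

for alpha in (0.5, 1.0, 10.0):
    m, s = 3.0, 0.2                  # start far from the standard normal
    for _ in range(100):
        m, s = jko_step(m, s, alpha)
    print(alpha, round(m, 6), round(s, 6))  # m -> 0 and s -> 1 for every alpha
```

Small α takes smaller steps and large α takes larger ones, but all choices reach the same target, which is exactly the consistency that JKO-Flow exploits.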

Related Works
Density Estimation. One of the main advantages of NFs over other generative models is that they provide density estimates of probability distributions via (1). That is, we do not need to apply a separate density estimation technique after generating samples from a distribution, e.g., as in GANs [22]. Multivariate density estimation is a fundamental problem in statistics [62,61], in High Energy Physics (HEP) [14], and in other fields of science dealing with multivariate data. For instance, particle physicists in HEP study possible distributions of a set of high-energy data. Other applications of density estimation include confidence level calculations for particles in Higgs searches at the Large Electron-Positron Collider (LEP) [13] and discriminant methods used in the search for new particles [14].
Finite Flows. Finite normalizing flows [64,52,47,33] use a composition of discrete transformations, where specific architectures are chosen to allow for efficient inverse and Jacobian determinant computations. NICE [15], RealNVP [16], IAF [32], and MAF [48] use either autoregressive or coupling flows, where the Jacobian is triangular and its determinant can therefore be tractably computed. GLOW [31] expands upon RealNVP by introducing an additional invertible convolution step. These flows are based on either coupling layers or autoregressive transformations, whose tractable invertibility allows for density evaluation and generative sampling. Neural Spline Flows [17] use splines instead of the coupling layers used in GLOW and RealNVP. Among flows based on monotonic neural networks, NAF [28] requires positivity of the weights; UMNN [69] circumvents this requirement by parameterizing the Jacobian and then integrating numerically.
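A minimal sketch of the triangular-Jacobian idea behind coupling flows (illustrative only; the "scale" and "shift" maps below are fixed random affine maps standing in for trained networks, and the 4-d input split is arbitrary):

```python
import numpy as np

# RealNVP-style affine coupling layer: the first half of the input passes
# through unchanged and conditions an affine map of the second half. The
# Jacobian is block-triangular, so log|det J| is just the sum of the
# log-scales, and the inverse needs no linear solve.
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))  # stand-in "networks"

def coupling(x):
    x1, x2 = x[:2], x[2:]
    s, t = np.tanh(W_s @ x1), W_t @ x1          # scale/shift depend on x1 only
    return np.concatenate([x1, x2 * np.exp(s) + t]), s.sum()  # value, log|det J|

def coupling_inverse(y):
    y1, y2 = y[:2], y[2:]
    s, t = np.tanh(W_s @ y1), W_t @ y1          # recompute from the unchanged half
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=4)
y, logdet = coupling(x)
print(np.allclose(coupling_inverse(y), x))  # exact inverse by construction
```

In practice several such layers are stacked with the roles of the two halves alternating, so that every coordinate is eventually transformed.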
Continuous and Optimal Transport-based Flows. Modeling flows with differential equations is a natural and commonly used approach [63,70,39,57,54,27]. In particular, CNFs model the flow via a neural ordinary differential equation [11,12,23]. Among the best-known CNFs is FFJORD [23], which estimates the log-determinant of the Jacobian by accumulating its trace along the trajectories, with the trace estimated using Hutchinson's estimator [4,29,66]. To promote straight trajectories, RNODE [19] regularizes FFJORD with a transport cost L(x, T). RNODE also includes the Frobenius norm of the Jacobian, ‖∇v‖²_F, to stabilize training. The trace and the Frobenius norm are estimated with stochastic estimators, and the authors report a speedup by a factor of 2.8.
Monge-Ampère Flows [74] and Potential Flow Generators [71] similarly draw from OT theory but parameterize a potential function instead of the dynamics directly. OT is also used in other generative models.

Figure 1: Checkerboard dataset. Generated samples of ρ0 using the standard one-shot approach (top row) and using the proposed JKO-Flow with five iterations (bottom row), for α = 1, 5, 10, 50. JKO-Flow returns consistent results regardless of the value of α.
Wasserstein Gradient Flows. Our proposed method is most closely related to [18], which also employs a JKO-based scheme to perform generative modeling. A key difference is that [18] reformulates the KL divergence as an optimization over a difference of expectations (see [18, Prop. 3.1]); this makes their approach akin to GANs, where the density cannot be obtained without a separate density estimation technique. Our proposed method is also closely related to methods that use input-convex neural networks [37,8,2]. [37] focuses on the special case with the KL divergence as the objective function. [2] solves a sequence of subproblems different from the fluid flow formulation presented in (11); they also require an end-to-end training scheme that backpropagates to the initial distribution, which can become a computational burden when the number of time discretizations is large. [8] utilizes a JKO-based scheme to approximate population dynamics given an observed trajectory, focusing on applications in computational biology. Other related works include natural gradient methods [41] and implicit schemes based on the Wasserstein-1 distance [26].

Numerical Experiments
We demonstrate the effectiveness of the proposed JKO-Flow on a series of synthetic and real-world datasets. As previously mentioned, we compute each update in (10) by solving (6) with the OT-Flow solver [45], which leverages fast and exact trace computations. We also use the same architecture as in [45]. Henceforth, we shall also call the traditional CNF approach the "single-shot" approach.
Maximum Mean Discrepancy Metric (MMD). Our density estimation problem requires approximating a density ρ0 by finding a transformation f such that f^{-1}♯ρ1 has density ρ̂0 close to ρ0, where ρ1 is the standard Gaussian. However, ρ0 is not known in real-world density estimation scenarios, such as in physics applications; all we have are samples X = {x_i}_{i=1}^n from the unknown distribution. Consequently, we use the observed samples X together with samples X̃ = {x̃_j}_{j=1}^m, x̃_j = f^{-1}(q_j), generated by the CNF from samples Q = {q_j}_{j=1}^m of ρ1, to determine whether the corresponding distributions are close in some sense. To measure the discrepancy we use a particular integral probability metric [75,51,38] known as the maximum mean discrepancy (MMD), defined as follows [24]. Let x and y be random vectors in R^d with distributions µx and µy, respectively, and let H be a reproducing kernel Hilbert space of functions on R^d with Gaussian kernel k(x, y) = exp(−‖x − y‖²/(2σ²)) (see [49] for an introduction). Then the MMD of µx and µy is given by MMD_H(µx, µy) = sup_{‖h‖_H ≤ 1} |E h(x) − E h(y)|. It can be shown that MMD_H defines a metric on the class of probability measures on R^d [24,21]. The squared MMD can be written in terms of the kernel as MMD²_H(µx, µy) = E[k(x, x′)] − 2 E[k(x, y)] + E[k(y, y′)], where x, x′ are i.i.d. with law µx, independent of y, y′, which are i.i.d. with law µy. An unbiased estimate of the squared MMD based on the samples X and X̃ defined above is given by [24]

MMD²_u(X, X̃) = 1/(n(n−1)) Σ_{i≠j} k(x_i, x_j) + 1/(m(m−1)) Σ_{i≠j} k(x̃_i, x̃_j) − 2/(nm) Σ_{i,j} k(x_i, x̃_j).

Note that the MMD is not used for training the CNF; it is only used to compare the densities ρ0 and ρ̂0 based on the samples X and X̃.
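The unbiased estimator above can be sketched in a few lines (illustrative code, not the paper's implementation; the sample sizes, the bandwidth σ = 1, and the mean shift of 3 in the check are arbitrary choices):

```python
import numpy as np

# Unbiased squared-MMD estimate with the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)). The within-sample Gram averages
# exclude the diagonal, which is what makes the estimate unbiased (and allows
# it to be slightly negative for matching distributions).
def mmd2_unbiased(X, Y, sigma=1.0):
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
diff = mmd2_unbiased(rng.normal(size=(500, 2)), rng.normal(3.0, 1.0, (500, 2)))
print(abs(same) < 0.01 < diff)  # near zero only when the distributions match
```

The pairwise-distance tensor is O(nm) in memory, which is fine at these sample sizes; for large n one would compute the Gram blocks in chunks.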
Synthetic 2D Data Sets. We begin by testing our method on seven two-dimensional (2D) benchmark datasets for density estimation commonly used in machine learning [23,69]; see Fig. 2. We generate results with JKO-Flow for different values of α and different numbers of iterations.
We use α = 1, 5, 10, and 50, and for each α we use the single-shot approach (k = 1) and JKO-Flow with k = 5 iterations of (10). Note that in CNFs we are interested in estimating the density of (and generating samples from) ρ0; consequently, once we have the optimal weights θ^(1), θ^(2), ..., θ^(5), we must "flow backwards" starting with samples from the normal distribution ρ1. Fig. 1 shows that JKO-Flow outperforms the single-shot approach for different values of α. In particular, the performance of the single-shot approach varies drastically with α, with α = 1 yielding an MMD an order of magnitude higher than α = 5. On the other hand, JKO-Flow is consistent regardless of the value of α. As previously mentioned, this is expected, as JKO-Flow is a proximal point algorithm that converges regardless of the step size α. In this case, five JKO-Flow iterations are enough to obtain this consistency. Additional plots and hyperparameter setups for different benchmark datasets, with similar performance results, are shown in the Appendix. Tab. 1 summarizes the comparison between the single-shot approach and JKO-Flow on all synthetic 2D datasets for different values of α. We also show an illustration of all the datasets, estimated densities, and generated samples with JKO-Flow in Fig. 2.

Figure 3: Checkerboard dataset. Generated samples of ρ0 using the standard single-shot approach (top row) and using the proposed JKO-Flow with five iterations (bottom row). Here, we fix α = 5 and vary the network width m = 3, 4, 5, 8, and 16. JKO-Flow performs competitively even with fewer parameters.
Varying Network Size. In addition to obtaining consistent results for different values of α, we also empirically observe that JKO-Flow outperforms the single-shot approach across different numbers of network parameters, i.e., network sizes. We illustrate this in Fig. 3. This is also intuitive, as we reformulate a single "difficult" optimal transport problem as a sequence of "smaller and easier" OT problems. In this setup, we vary the width of a two-layer ResNet [25]. In particular, we choose the widths m = 3, 4, 5, 8, and 16, corresponding to 40, 53, 68, 125, and 365 parameters. For each dataset, α is fixed to its best-performing value: all datasets vary m with α = 5, except the 2 Spirals dataset, which uses α = 50; we chose these values of α because they performed best in the fixed-m experiments. Similar results for the remaining synthetic datasets are shown in the Appendix. Tab. 2 summarizes the comparison between the single-shot approach and JKO-Flow on all synthetic 2D datasets.

Density Estimation on a Physics Dataset
We train JKO-Flow on the 43-dimensional MINIBOONE dataset, a high-dimensional, real-world physics dataset used as a benchmark for high-dimensional density estimation algorithms in physics [53]. For this problem, our method is trained with α = 0.5, 1, 5, 10, 50 and 10 JKO-Flow iterations. Fig. 4 shows samples generated with JKO-Flow and with the standard single-shot approach for α = 5. Since MINIBOONE is a high-dimensional dataset, we follow [45] and plot two-dimensional slices. JKO-Flow generates better quality samples. Similar experiments for α = 1, 10, and 50 are shown in Figs. 17, 18, and 19 in the Appendix. Tab. 3 summarizes the results for all values of α. Note that we compute MMD values over all the dimensions as well as over 2D slices; this is because we only have limited data (3000 testing samples), and the 2D-slice MMD gives a better indication of the improvement in the generated samples. Results show that the MMD is consistent across all α values for JKO-Flow. We also show the convergence (in MMD²) for the MINIBOONE dataset across each 2D slice in Fig. 5. As expected, JKO-Flow reaches the same accuracy regardless of the value of α.

Figure 4: Generated samples for the 43-dimensional MINIBOONE dataset using the single-shot approach and JKO-Flow with 10 iterations. To visualize the dataset, we show 2-dimensional slices. We show the forward flow f(x), where x ∼ ρ0, and the generated samples f^{-1}(y), where y ∼ ρ1.

Conclusion
We propose a new approach, JKO-Flow, to train OT-regularized CNFs without having to tune the regularization parameter α. The key idea is to embed an underlying OT-based CNF solver into a Wasserstein gradient flow framework, also known as the JKO scheme; this makes the regularization parameter act as a "time" variable. Thus, instead of tuning α, we repeatedly solve proximal updates with a fixed (time-step) α. In our setting, we choose OT-Flow [45], which leverages exact trace estimation for fast CNF training. Our numerical experiments show that JKO-Flow leads to improved performance over the traditional approach. Moreover, JKO-Flow achieves similar results regardless of the choice of α. We also empirically observe improved performance when varying the size of the neural network. Future work will investigate JKO-Flow on similar problems, such as deep learning-based methods for optimal control [20,43,42] and mean field games [35,55,1].

Figure 16: Moons dataset. Generated samples of ρ0 using the standard single-shot approach (top row) and using the proposed JKO-Flow with five iterations (bottom row). Here, we fix α = 5 and vary the network width m = 3, 4, 5, 8, and 16. JKO-Flow performs competitively even with fewer parameters.

Figure 2: Density estimation on 2D toy problems using five JKO-Flow iterations. Top: samples from the unknown distribution ρ0. Middle: density estimate for ρ0 computed by inverting the flow through the five iterations of JKO-Flow from ρ1 via (2). Bottom: samples generated by the inverse JKO-Flow through five iterations, where y has density ρ1.

Figure 5: MMD² per iteration when using JKO-Flow after training. MMD² values vs. JKO-Flow iteration for 2-dimensional slices (dimensions 16-17 on the left and dimensions 28-29 on the right) of the MINIBOONE dataset. JKO-Flow achieves the same accuracy regardless of the value of α.

Table 2: Synthetic 2D Data: Network width comparison for 1 and 5 iterations given a fixed, best-performing α. JKO-Flow performs better than the single-shot approach across network sizes.

Table 3: MINIBOONE: Comparison of the single-shot approach and JKO-Flow for different values of α.