Machine-guided path sampling to discover mechanisms of molecular self-organization

Molecular self-organization driven by concerted many-body interactions produces the ordered structures that define both inanimate and living matter. Here we present an autonomous path sampling algorithm that integrates deep learning and transition path theory to discover the mechanism of molecular self-organization phenomena. The algorithm uses the outcome of newly initiated trajectories to construct, validate and—if needed—update quantitative mechanistic models. Closing the learning cycle, the models guide the sampling to enhance the generation of rare assembly events. Symbolic regression condenses the learned mechanism into a human-interpretable form in terms of relevant physical observables. Applied to ion association in solution, gas-hydrate crystal formation, polymer folding and membrane-protein assembly, we capture the many-body solvent motions governing the assembly process, identify the variables of classical nucleation theory, uncover the folding mechanism at different levels of resolution and reveal competing assembly pathways. The mechanistic descriptions are transferable across thermodynamic states and chemical space.


INTRODUCTION
Understanding how generic yet subtly orchestrated interactions cooperate in the formation of complex structures is the key to steering molecular self-assembly 1,2. Molecular dynamics (MD) simulations promise us a detailed and unbiased view of self-organization processes. Being based on accurate physical models, MD can reveal complex molecular reorganizations in a computer experiment with atomic resolution 3. However, the high computational cost of MD simulations can be prohibitive. Most collective self-organization processes are rare events that occur on time scales many orders of magnitude longer than the fast molecular motions limiting the MD integration step. The system spends most of the time in metastable states and reorganizes during infrequent and rapid stochastic transitions between states. The transition paths (TPs) are the very special "reactive" trajectory segments that capture the reorganization process. Learning molecular mechanisms from simulations requires computational power to be focused on sampling TPs 4 and distilling quantitative models out of them 5. Due to the high dimensionality of configuration space, both sampling and information extraction are exceedingly challenging in practice. Here we address both challenges at once with an artificial intelligence (AI) agent that simultaneously builds quantitative mechanistic models of complex molecular events, validates the models on the fly, and uses them to accelerate the sampling by orders of magnitude compared to regular MD.

Reinforcement learning of molecular mechanisms
Statistical mechanics provides a general framework to obtain low-dimensional mechanistic models of self-organization events. Here we focus on systems that reorganize between two states A (assembled) and B (disassembled), but generalizing to an arbitrary number of states is straightforward. Each TP connecting the two states contains a sequence of snapshots capturing the system during its reorganization. Consequently, the transition path ensemble (TPE) is the mechanism at the highest resolution. Since the transition is effectively stochastic, quantifying its mechanism requires a probabilistic framework. We define the committor p_S(x) as the probability that a trajectory starting from a point x in configuration space enters state S first, with S = A or B, respectively, and p_A(x) + p_B(x) = 1 for ergodic dynamics. As ideal reaction coordinates 6,7, the committors p_A and p_B = 1 − p_A report on the progress of the reaction A ⇌ B and predict the trajectory fate in a Markovian way 8,9. The TPE and the committor jointly quantify the mechanism.
Sampling the TPE and learning the committor function p_B(x) are two outstanding and intrinsically connected challenges. Given that TPs are exceedingly rare in a very high-dimensional space, an uninformed search is futile. However, TPs converge near transition states 9, where the trajectory's evolution is yet uncommitted and p_A(x) = p_B(x) = 1/2. For Markovian dynamics, the probability for a trajectory passing through x to be a TP satisfies P(TP|x) = 2 p_B(x)[1 − p_B(x)], that is, the committor determines the probability of sampling a TP 10. Therefore, learning the committor facilitates TP sampling and vice versa. The challenges of information extraction and sampling are thus deeply intertwined, and we tackle them simultaneously with the help of reinforcement learning 11,12.
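The relation P(TP|x) = 2 p_B(x)[1 − p_B(x)] is simple enough to evaluate directly. A minimal NumPy sketch (function names are ours, for illustration only):

```python
import numpy as np

def reactive_probability(p_B):
    """P(TP|x) = 2 p_B (1 - p_B): for Markovian dynamics, the probability
    that a trajectory passing through x is a transition path."""
    p_B = np.asarray(p_B, dtype=float)
    return 2.0 * p_B * (1.0 - p_B)

# The probability of sampling a TP is maximal at transition states, p_B = 1/2
probs = reactive_probability([0.0, 0.25, 0.5, 0.75, 1.0])
```

The maximum value of 1/2 at p_B = 1/2 is why shooting from configurations near the transition state ensemble is the most productive strategy.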
We designed an AI agent that learns how to sample the TPE of rare events in complex many-body systems and at the same time learns their committor function by repeatedly running virtual experiments (Fig. 1a). In each experiment, the AI agent selects a point x from which to shoot trajectories, propagated according to the dynamics of the physical model, to generate TPs. After repeated shots from different points x, the agent compares the number of generated TPs with the expected number based on its knowledge of the committor. Only if the prediction is poor does the AI agent retrain the model on the outcome of all virtual experiments, which prevents over-fitting. As the agent becomes better at predicting the outcome of the virtual experiments, it becomes more efficient at sampling TPs by choosing initial points near transition states, i.e., according to P(TP|x).
The AI agent learns from its repeated attempts by using deep learning in a self-consistent way. Here, we model the committor as p_B(x) = 1/(1 + e^{−q(x|w)}) with a neural network 13 q(x|w) of weights w. Note that x interchangeably denotes a vector of features and the configuration represented by this vector. In each attempt to generate a TP, the agent propagates two trajectories, one forward and one backward in time, by running MD simulations 4. In case of success, one trajectory first enters state A and the other B, forming a new TP. However, the agent learns from both successes and failures of this Bernoulli coin-toss process. The negative log-likelihood 5 of k attempts leads to the loss function l(w|θ) = Σ_{i=1}^{k} log(1 + e^{s_i q(x_i|w)}), where s_i = 1 if trajectory i initiated from x_i enters A first, and s_i = −1 if it enters B first. The training set θ contains shooting points x_i and outcomes s_i. By training the network q(x|w) to minimize the loss l(w|θ), the agent obtains a maximum likelihood estimate of the committor 5.
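The loss above can be written down in a few lines. The following sketch (our naming, not the authors' code) evaluates l(w|θ) for a batch of shooting outcomes, using a numerically stable form of log(1 + e^{s q}):

```python
import numpy as np

def committor_loss(q_values, outcomes):
    """Negative log-likelihood l(w|theta) = sum_i log(1 + exp(s_i * q_i)).
    q_values: network outputs q(x_i|w) at the shooting points.
    outcomes: s_i = +1 if trajectory i entered A first, -1 if it entered B first."""
    q = np.asarray(q_values, dtype=float)
    s = np.asarray(outcomes, dtype=float)
    # logaddexp(0, x) = log(1 + e^x) without overflow for large |x|
    return float(np.sum(np.logaddexp(0.0, s * q)))

# q = 0 (p_B = 1/2) contributes log(2) per shot, the maximal per-shot uncertainty
loss = committor_loss([0.0, 0.0], [1, -1])
```

Minimizing this loss over the network weights yields the maximum likelihood committor estimate described in the text.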
We use machine learning also to condense the learned molecular mechanism into a human-interpretable form (Fig. 1a). The trained network is efficient to evaluate, differentiable, and enables sophisticated analysis methods 14. To provide physical insight, symbolic regression 15 generates simple models that quantitatively reproduce the committor encoded in the network. First, a sensitivity analysis of the trained network identifies a small subset z ⊂ x of all input coordinates x that controls the quality of the prediction by the network. Then, symbolic regression distills mathematical expressions q_sr(z) ≈ q(x|w) by using a genetic algorithm that searches both functional and parameter spaces with loss l(w|θ) and training set θ.

AI discovers many-body solvent coordinate in ion assembly
The formation of ion pairs in water is a paradigmatic assembly process controlled by many-body interactions in the molecular environment (the solvent, in this case) and a model of chemical reactions in the condensed phase. Even though MD can efficiently simulate the process, the collective reorganization of water molecules challenges the formulation of quantitative mechanistic models to this day 16 (Fig. 1b).
Our AI agent quickly learned how to sample the formation of lithium (Li+) and chloride (Cl−) ion pairs in water (Fig. 1b, c). As input, the network uses the interionic distance r_LiCl and 220 molecular features x_1, ..., x_220 that describe the angular arrangement of water oxygen and hydrogen atoms at a specific distance from each ion 17 (Fig. 1e, f). These coordinates provide a general representation of the system that is invariant with respect to physical symmetries and exchange of atom labels. After the first 500 iterations, the predicted and observed numbers of TPs agree (Fig. 1c). We further validated the learned committor function by checking its predictions against independent simulations. From 763 configurations not used in AI training, we initiated 500 independent simulations each and estimated the sampled committor p_B as the fraction of trajectories first entering the unbound state. Predicted and sampled committors are in quantitative agreement (Fig. 1d).
The input importance analysis of the trained network reveals the critical role played by solvent rearrangement. As the most important of the 220 molecular features used to describe the solvent, x_12 quantifies oxygen anisotropy at a radial distance of 0.25 nm from Li+ (Fig. 1f). For successful ion-pair assembly, these inner-shell water molecules need to open up space for the incoming Cl−. The importance of inner-shell water rearrangement is consistent with a visual analysis that highlights atoms in a TP according to their contribution to the committor gradient (Fig. 1b).
[Figure 1 caption (recovered fragment, panels b–h): Water is shown as sticks, Li+ as a small sphere and Cl− as a large sphere. Atoms are colored according to their contribution to the reaction progress from low (blue) to high (red), as quantified by their contribution to the gradient of the reaction coordinate q(x|w). c, Self-consistency. Cumulative counts of the generated (blue line) and expected (orange dashed line) number of transition events. The green line shows the cumulative difference between the observed and expected counts. The inset shows a zoom-in on the first 1000 iterations. d, Validation of the learned committor. Cross-correlation between the committor predicted by the trained network and the committor obtained by repeated sampling from molecular configurations on which the AI did not train. The average of the sampled committors (blue line) and their standard deviation (orange shaded) are calculated by binning along the predicted committor. e, Input relevance for all 221 input coordinates used for deep learning. f, Schematic depiction of the solvent reorientation around the Li+ ion as reported by the most relevant symmetry function x_12. g, Simplified assembly models q_0 and q_1 obtained by symbolic regression at strict and gentle regularization, respectively. Scatter plots show independent validations of their accuracy. h, Transferability of the learned committor. Cross-correlation between sampled committors for 1M LiCl solution and predictions of the committor trained on data for a single LiCl ion pair.]

Symbolic regression provides quantitative and interpretable models of the ion-pair assembly mechanism.
A large complexity penalty produces a simple model q_0(r_LiCl) as a function of the inter-ionic distance only, the standard choice for studying this process (Fig. 1g). In a validation test, we found this model to be predictive only for large inter-ionic distances. Close to the bound state, where the detailed geometry of the water solvent controls the process, the distance-only model fails. A more complex model, q_1(r_LiCl, x_8, x_9, x_12), that integrates the most critical solvent coordinates is accurate for all distances (Fig. 1g).
Counter to a common concern for AI models, the learned committor is transferable to a similar but not identical system. We validated the learned committor for 724 configurations drawn from an additional simulation of a concentrated, 1 molar (1M) LiCl solution. Even though the AI trained on a system containing a single ion pair, it correctly predicted committors for a system on which it never trained (Fig. 1h).

AI discovers variables of classical nucleation theory for gas-hydrate crystal formation
At low temperature and high pressure, a liquid mixture of water and methane organizes into a gas hydrate, an ice-like solid 18. In this phase transition, water molecules assemble into an intricate crystal lattice with regularly spaced cages filled by methane (Fig. 2a). Despite commercial relevance in natural gas processing, the mechanism of gas-hydrate formation remains poorly understood, complicated by the many-body character of the nucleation process and the competition between different crystal forms 18. Studying the nucleation mechanism is challenging for experiments and, due to the exceeding rarity of the events, impossible in equilibrium MD.
Within hours of computing time, the AI extracted the nucleation mechanism from 2225 TPs showing the formation of methane clathrates, corresponding to a total of 445.3 µs of simulated dynamics. The trajectories were recently produced by extensive transition path sampling (TPS) simulations at four different temperatures, and provided a pre-existing training set for our AI 19. We described molecular configurations by using 22 features commonly used to describe nucleation processes (SI Table S3). We considered the temperature T at which a TP was generated as an additional feature, and trained the AI on the cumulative trajectories. We showed that the learned committor as a function of temperature is accurate by validating its predictions for 160 independent configurations (Fig. 2b). By leaving out data at T = 280 K or 285 K in the training, we show that the learned committor satisfactorily interpolates and extrapolates to thermodynamic states not sampled (SI Fig. S2).
Temperature T is the most critical factor for the outcome of a simulation trajectory, followed by the number n_w of surface water molecules and the number n_c of 5¹²6² cages, i.e., water cages with 12 pentagonal and two hexagonal faces (Fig. 2c). All three play an essential role in the classical theory of homogeneous nucleation 19. The activation free energy ∆G for nucleation is determined by the size of the growing nucleus, parametrized by the amount of surface water and, in the case of a crystalline structure, the number of 5¹²6² cages. Temperature determines, through the degree of supersaturation, the size of the critical nucleus, the height of the nucleation free energy barrier, and the rate.
Symbolic regression distilled a mathematical expression revealing a temperature-dependent switch in the nucleation mechanism. The mechanism is quantified by q(n_w, n_c, T) (Fig. 2d; SI Table S3). At low temperatures, the size of the nucleus alone decides on growth. At higher temperatures, the number of 5¹²6² water cages gains in importance, as indicated by curved iso-committor surfaces (Fig. 2d). This mechanistic model, generated in an autonomous and completely data-driven way, reveals the switch from amorphous growth at low temperatures to crystalline growth at higher temperatures 19,20.

AI reveals competing pathways for membrane-protein complex assembly
Membrane protein complexes play a fundamental role in the organization of living cells. Here we investigate the assembly of the transmembrane homodimer of the saturation sensor Mga2 in a lipid bilayer in the quasi-atomistic Martini representation (Fig. 3a) 21. In extensive equilibrium MD simulations, the spontaneous association of two Mga2 transmembrane helices has been observed, yet no dissociation occurred in approximately 3.6 milliseconds (equivalent to more than six months of calculations) 21.
Our reinforcement learning AI approach is naturally parallelizable, which enabled us to sample nearly 4,000 dissociation events in 20 days on a parallel supercomputer (Fig. 3b). MD time integration incurs the highest computational cost. However, a single AI agent can simultaneously perform virtual experiments on an arbitrary number of copies of the physical model (by guiding parallel Markov chain Monte Carlo sampling processes), and learn from all of them by training on the cumulative outcomes.
We featurized molecular configurations using contacts between corresponding residues along the two helices and included, for reference, a number of hand-tailored features describing the organization of lipids around the proteins 22 (SI Table S5). We validated the model against committor data for 548 molecular configurations not used in training, and found the predictions to be accurate across the entire transition region between bound and unbound states (Fig. 3c).
In a remarkable reduction of dimensionality, symbolic regression achieved an accurate representation of the learned committor as a simple function of just two amino-acid contacts (Fig. 3d; SI Table S6). We projected all sampled TPs on the plane defined by these two contacts, calculated the distances between them, and performed a hierarchical trajectory clustering (Fig. 3e). TPs organize in two main clusters that differ in the order of events during the assembly process, starting from the top (Fig. 3f) or the bottom (Fig. 3g), revealing two competing assembly pathways. Unexpectedly 22, helix-dimer geometry alone predicts assembly progress, which implies that the role of the lipid "solvent" is implicitly encoded, unlike that of the water solvent in ion-pair formation 16.

Maximum likelihood estimation of the committor function
The committor p_B(x) is the probability that a trajectory initiated at configuration x with Maxwell-Boltzmann velocities reaches the (meta)stable state B before reaching A.
Trajectory shooting thus constitutes a Bernoulli process. We expect to observe n_A and n_B trajectories ending in A and B, respectively, with binomial probability p(n_A, n_B|x) = [(n_A + n_B)!/(n_A! n_B!)] [1 − p_B(x)]^{n_A} [p_B(x)]^{n_B}. Here we ignore the correlations that arise in fast inertia-dominated transitions for trajectories shot off with opposite initial velocities 10,16. We model the unknown committor with a parametric function and estimate its parameters w by maximizing the likelihood L 5,13. We ensure that 0 ≤ p_B(x) ≤ 1 by writing the committor in terms of a sigmoidal activation function, p_B[q(x|w)] = 1/(1 + exp[−q(x|w)]). Here we model the log-probability q(x|w) using a neural network 13 and represent the configuration with a vector x of features. For N > 2 states, the multinomial distribution provides a model for p(n_1, n_2, ..., n_N|x), and writing the committors to states S in terms of the softmax activation function, p_S = exp(q_S)/Σ_{S'} exp(q_{S'}), ensures normalization.

Training points from transition path sampling

TPS 4,23 is a powerful Markov chain Monte Carlo method in (transition) path space to sample the TPE. The two-way shooting move is an efficient proposal move in TPS 4. It consists of randomly selecting a shooting point x_SP on the current TP χ according to the probability p_sel(x_SP|χ), drawing random Maxwell-Boltzmann velocities, and propagating two trial trajectories from x_SP until they reach either one of the states. Because one of the trial trajectories is propagated after first inverting all momenta at the starting point, i.e., it is propagated backwards in time, a continuous TP can be constructed if both trials end in different states. Given a TP χ, a new TP χ' generated by two-way shooting is accepted into the Markov chain with probability 24 p_acc(χ'|χ) = min[1, p_sel(x_SP|χ')/p_sel(x_SP|χ)]. Knowing the committor, it is possible to increase the rate at which TPs are generated by biasing the selection of shooting points towards the transition state ensemble 24, i.e., regions with high reactive probability P(TP|x). For the two-state case, this is equivalent to biasing towards the p_B(x) = 1/2 isosurface defining the transition states with q(x) = 0. To construct an algorithm which selects new shooting points biased towards the current best guess for the transition state ensemble, and which iteratively learns to improve its guess based on every newly observed shooting outcome, we need to balance exploration with exploitation. To this end, we select the new shooting point x from the current TP χ using a Lorentzian distribution centered around the transition state ensemble, p_sel(x|χ) = [q(x)² + γ²]⁻¹ / Σ_{x'∈χ} [q(x')² + γ²]⁻¹, where larger values of γ lead to an increase of exploration. The Lorentzian distribution provides a trade-off between production efficiency and the occasional exploration away from the transition state, which is necessary to sample alternative reaction channels.
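The Lorentzian selection rule and the matching Metropolis acceptance criterion can be sketched in a few lines of NumPy. This is an illustrative re-implementation (function and variable names are ours), with q supplied as precomputed network outputs along each path:

```python
import numpy as np

rng = np.random.default_rng(42)

def p_sel(q_path, gamma=1.0):
    """Lorentzian selection probabilities along a TP: p_sel ~ 1/(q^2 + gamma^2),
    peaked at the learned transition-state isosurface q = 0."""
    w = 1.0 / (np.asarray(q_path, dtype=float) ** 2 + gamma**2)
    return w / w.sum()

def select_shooting_index(q_path, gamma=1.0):
    """Draw the index of the next shooting point along the current TP."""
    p = p_sel(q_path, gamma)
    return int(rng.choice(len(p), p=p))

def acceptance(i_sp_new, q_new_path, i_sp_old, q_old_path, gamma=1.0):
    """p_acc(chi'|chi) = min(1, p_sel(x_SP|chi') / p_sel(x_SP|chi)):
    ratio of selection probabilities of the shared shooting point
    under the new and old paths."""
    return min(1.0, p_sel(q_new_path, gamma)[i_sp_new]
               / p_sel(q_old_path, gamma)[i_sp_old])
```

Larger gamma flattens the selection distribution and so increases exploration away from the transition state, as described in the text.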

Real-time validation of committor model prediction
The relation between the committor and the transition probability 10 enables us to calculate the expected number of TPs generated by shooting from a configuration x. We validate the learned committor on the fly by estimating the expected number of transitions before shooting from a configuration and comparing it with the observed shooting result. The expected number of transitions calculated over a window containing the k most recent two-way shooting 4 attempts is n_TP^exp = Σ_{i=1}^{k} 2 p_B(x_i, i)[1 − p_B(x_i, i)], where p_B(x_i, i) is the committor estimate for trial shooting point x_i at step i before observing the shooting result. We initiate learning when the predicted (n_TP^exp) and actually generated number of TPs (n_TP^gen) differ. We define an efficiency factor, α_eff = min[1, (1 − n_TP^gen/n_TP^exp)²], where a value of zero indicates perfect prediction. By training only when necessary we avoid overfitting. Here we use α_eff to scale the learning rate in the gradient descent algorithm. Additionally, no training takes place if α_eff is below a certain threshold (specified further below for each system).
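The bookkeeping behind this on-the-fly validation is compact. A minimal sketch (names are ours; the committor estimates would come from the trained network):

```python
import numpy as np

def expected_tp_count(p_B_window):
    """n_TP^exp over the k most recent shots: a two-way shot from x_i yields
    a TP with probability 2 p_B(x_i)(1 - p_B(x_i))."""
    p = np.asarray(p_B_window, dtype=float)
    return float(np.sum(2.0 * p * (1.0 - p)))

def efficiency_factor(n_gen, n_exp):
    """alpha_eff = min(1, (1 - n_gen/n_exp)^2); zero means the committor model
    predicted the generated number of TPs exactly, so no retraining is needed."""
    return min(1.0, (1.0 - n_gen / n_exp) ** 2)

# Perfect prediction: 100 shots from p_B = 1/2 should give ~50 TPs
n_exp = expected_tp_count([0.5] * 100)
alpha = efficiency_factor(50, n_exp)
```

In the algorithm, alpha would then scale the learning rate, with training skipped entirely below a system-specific threshold.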

Distilling explicit mathematical expressions for the committor
In any specific molecular event, we expect that only a few of the many degrees of freedom actually control the transition. We identify the inputs to the committor model that have the largest role in determining its output after training. To this end, we first calculate a reference loss, l_ref = l(w, θ), over the unperturbed training set to compare to the values obtained by perturbing every input one by one 25. We then average the loss l(w, θ_i) over ≥ 100 perturbed training sets θ_i with randomly permuted values of the input coordinate i in the batch dimension. The average loss difference ∆l_i = ⟨l(w, θ_i)⟩ − l_ref is large if the ith input strongly influences the output of the trained model, i.e., it is relevant for predicting the committor.
In the low-dimensional subspace consisting of only the most relevant inputs z, symbolic regression generates compact mathematical expressions that approximate the full committor. Our implementation of symbolic regression is based on the python package dcgpy 26 and uses a genetic algorithm with a (N + 1) evolution strategy. In every generation, N new expressions are generated through random changes to the mathematical structure of the fittest expression of the parent generation. A gradient-based optimization is subsequently used to find the best parameters for every expression. The fittest expression is then chosen as parent for the next generation. The fitness of each trial expression p_B(z) is measured by l_sr(p_B|θ) ≡ −log L[p_B(z_SP)] + λC, where we added the regularization term λC to the log-likelihood in order to keep expressions simple and avoid over-fitting. Here λ > 0 and C is a measure of the complexity of the trial expression, estimated in our case by the number of mathematical operations.
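The (N + 1) evolution strategy can be illustrated on a deliberately simplified stand-in for the dcgpy machinery: here the expression structure is fixed (so the complexity C is constant) and only its parameters evolve, with a mean-squared error replacing the committor log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(7)

def evolve_one_plus_n(fitness, parent, n_offspring=4, sigma=0.1, generations=300):
    """(N + 1) evolution strategy: each generation, N mutated offspring compete
    with the parent; the fittest individual becomes the next parent (elitist,
    so the best fitness never worsens)."""
    best = np.asarray(parent, dtype=float)
    best_f = fitness(best)
    for _ in range(generations):
        pool = [best] + [best + sigma * rng.standard_normal(best.shape)
                         for _ in range(n_offspring)]
        fs = [fitness(p) for p in pool]
        k = int(np.argmin(fs))
        best, best_f = pool[k], fs[k]
    return best, best_f

# Toy regularized fitness l_sr = loss + lambda * C for a fixed trial
# expression q_sr(z) = a*z + b, targets generated from a = 2, b = -0.5
z = np.linspace(-1.0, 1.0, 50)
target = 2.0 * z - 0.5
lam, C = 1e-6, 3
fitness = lambda p: float(np.mean((p[0] * z + p[1] - target) ** 2)) + lam * C
p_best, f_best = evolve_one_plus_n(fitness, np.zeros(2))
```

In the real algorithm, the offspring mutate the mathematical structure of the expression, and a gradient-based optimizer refines the parameters of each candidate before its fitness is evaluated.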

Assembly of LiCl in water
We investigated the formation of lithium chloride ion pairs in water to assess the ability of our AI agent to accurately learn the committor for transitions that are strongly influenced by solvent degrees of freedom. We used two different system setups, one consisting of only one ion pair in water and one with a number of ions corresponding to a 1 molar (1M) concentration.
All MD simulations were carried out in cubic simulation boxes using the Joung and Cheatham force field 27 together with TIP3P 28 water. The 1M simulation box contained 37 lithium and 37 chloride ions, solvated with 2104 TIP3P water molecules, while the other box contained the single ion pair solvated with 370 TIP3P water molecules. We used the openMM MD engine 29 to propagate the equations of motion in time steps of ∆t = 2 fs with a velocity Verlet integrator with velocity randomization 30 from the python package openmmtools. After an initial NPT equilibration at P = 1 bar and T = 300 K, all production simulations were performed in the NVT ensemble at a temperature of T = 300 K. The friction was set to 1/ps. Non-bonded interactions were calculated using a particle mesh Ewald scheme 31 with a cutoff of 1 nm and an error tolerance of 0.0005. In TPS, the fully assembled and disassembled states were defined according to interionic distances r_LiCl ≤ 0.23 nm and r_LiCl ≥ 0.48 nm, respectively.
The committor of a configuration is invariant under global translations and rotations in the absence of external fields, and it is additionally invariant with respect to permutations of identical particles. We therefore chose to transform the system's coordinates from Cartesian space to a representation that incorporates the physical symmetries of the committor. To achieve an almost lossless transformation, we use the interionic distance to describe the solute and we adapted symmetry functions to describe the solvent configuration 32. Symmetry functions were developed originally to describe molecular structures in neural network potentials 17,33, but have also been successfully used to detect and describe different phases of ice in atomistic simulations 34. These functions describe the environment surrounding a central atom by summing over all identical particles at a given radial distance. The G2_i type of symmetry function quantifies the density of solvent molecules around a solute atom i in a shell centered at r_s,

G2_i = Σ_j exp[−η(r_ij − r_s)²] f_c(r_ij),

where the sum runs over all solvent atoms j of a specific atom type, r_ij is the distance between the central atom i and atom j, and η controls the width of the shell. The function f_c(r) is a Fermi cutoff defined as

f_c(r) = 1/(1 + exp[α_c(r − r_c)]),

with cutoff radius r_c, which ensures that the contribution of distant solvent atoms vanishes. The scalar parameter α_c controls the steepness of the cutoff. The G5_i type of symmetry function additionally probes the angular distribution of the solvent around the central atom i,

G5_i = 2^{1−ζ} Σ_{j<k} (1 + λ cos ϑ_ijk)^ζ exp[−η(r_ij² + r_ik²)] f_c(r_ij) f_c(r_ik),

where the sum runs over all distinct solvent atom pairs, ϑ_ijk is the angle spanned between the two solvent atoms and the central solute atom, the parameter ζ is an even number that controls the sharpness of the angular distribution, and λ = ±1 sets the location of the minimum with respect to ϑ_ijk at π and 0, respectively. See SI Table S1 for the parameter combinations used. We scaled all inputs to lie approximately in the range [0, 1] to increase the numerical stability of the training. In particular, we normalized the symmetry functions by dividing them by the expected average number of atoms (or atom pairs) for an isotropic distribution in the probing volume (see SI for mathematical expressions of the normalization constants as a function of the parameters).
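A minimal NumPy sketch of the radial symmetry function and Fermi cutoff described above (function names and example parameter values are ours; the published parameter sets are in SI Table S1):

```python
import numpy as np

def fermi_cutoff(r, r_c, alpha_c):
    """Fermi cutoff f_c(r) = 1/(1 + exp(alpha_c (r - r_c))): close to 1 inside
    the cutoff radius r_c and decaying smoothly to 0 beyond it; alpha_c sets
    the steepness of the decay."""
    return 1.0 / (1.0 + np.exp(alpha_c * (np.asarray(r, dtype=float) - r_c)))

def g2(r_ij, eta, r_s, r_c, alpha_c):
    """Radial symmetry function G2_i: counts solvent atoms in a Gaussian shell
    of width ~1/sqrt(eta) centered at distance r_s from the central atom i.
    r_ij: distances from the central atom to all solvent atoms of one type."""
    r = np.asarray(r_ij, dtype=float)
    return float(np.sum(np.exp(-eta * (r - r_s) ** 2) * fermi_cutoff(r, r_c, alpha_c)))

# One solvent atom sitting exactly at the shell center, well inside the cutoff,
# contributes ~1 to G2; a distant atom contributes essentially nothing
val_near = g2([0.25], eta=200.0, r_s=0.25, r_c=1.0, alpha_c=30.0)
val_far = g2([5.0], eta=200.0, r_s=0.25, r_c=1.0, alpha_c=30.0)
```

Dividing such sums by the expected count for an isotropic distribution in the probing volume yields the normalized inputs used for training.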
Due to the expectation that most degrees of freedom of the system do not control the transition, we designed neural networks that progressively filter out irrelevant inputs and build a highly non-linear function of the remaining ones. We therefore used a pyramidal stack of five residual units 35,36, each with four hidden layers. The number of hidden units per layer is reduced by a constant factor f = (10/221)^{1/4} after every residual unit block and decreases from 221 in the first unit to 10 in the last. Additionally, a dropout of 0.1 f^i, where i is the residual unit index ranging from 0 to 4, is applied after every residual block. Optimization of the network weights is performed using the Adam gradient descent 37. Training was performed after every third TPS Monte Carlo step for one epoch with a learning rate of lr = α_eff 10⁻³, if lr ≥ 10⁻⁴. The efficiency factor α_eff was calculated over a window of k = 100 TPS steps. We performed all deep learning with custom written code based on keras 38. The TPS simulations were carried out using a customized version of openpathsampling 39,40 together with our own python module. We selected the five most relevant coordinates for symbolic regression runs. We regularized the produced expressions by penalizing the total number of elementary mathematical operations with λ = 10⁻⁶ and λ = 10⁻⁷. The contribution of each atom in the system to the committor (Fig. 1b) was calculated as the magnitude of the gradient of the reaction coordinate q(x) with respect to its Cartesian coordinates. All gradient magnitudes were scaled with the inverse atom mass.

Nucleation of methane clathrates
We modelled water with the TIP4P/Ice model 41 and methane with united-atom Lennard-Jones interactions (ε = 1.22927 kJ/mol and σ = 3.700 Å), which reproduce experimental measurements well 42. MD simulations were performed using OpenMM 7.1.1 29, integrating the equations of motion with the velocity Verlet with velocity randomisation (VVVR) integrator from openmmtools 30. The integration time step was set to 2 fs. Hydrogen bond lengths were constrained 43. The van der Waals cutoff distance was 1 nm. Long-range interactions were handled by the particle mesh Ewald technique. The MD simulations were performed in the NPT ensemble using the VVVR thermostat (frequency of 1 ps) and a Monte Carlo barostat (frequency of 4 ps). TPS simulations were performed with the OpenPathSampling package 39,40 using the CUDA platform of OpenMM on NVIDIA GeForce GTX TITAN 1080Ti GPUs. Frames were saved every 100 ps. TPS and committor simulations were carried out at four different temperatures, T = 270 K, 275 K, 280 K and 285 K (see SI Table S4 for details). The committor values, which were used only for validation, were obtained by shooting between 6 and 18 trajectories per configuration. The disassembled (liquid) and assembled (solid) states were defined in terms of the mutually coordinated guest (MCG) numbers as in Ref. 19.
We used 24 different features to describe the size, crystallinity, structure, and composition of the growing methane-hydrate crystal nucleus (SI Table S3). In addition to the features describing molecular configurations, we used temperature as an input to the neural networks and the symbolic regression. In a pyramidal feed-forward network with 9 layers, we reduced the number of units per layer from 25 at the input to two in the last hidden layer. The network was trained on the existing TPS data for all temperatures, leaving out 10% of the shooting points as test data. We stopped the training after the loss on the test set did not decrease for 10000 epochs and used the model with the best test loss. We used the three most relevant coordinates as inputs for symbolic regression runs with a penalty on the total number of elementary mathematical operations using λ = 10⁻⁵.

Mga2 transmembrane dimer assembly in lipid membrane
We used the coarse-grained Martini force field (v2.2) 44–47 to describe the assembly of the alpha-helical transmembrane homodimer Mga2. All MD simulations were carried out with gromacs v4.6.7 48–51 with an integration timestep of ∆t = 0.02 ps, using a cubic simulation box containing the two identical 30 amino acid long alpha helices in a lipid membrane made of 313 POPC molecules. The membrane spans the box in the xy plane and was solvated with water (5348 water beads) and NaCl ions corresponding to a concentration of 150 mM (58 Na+, 60 Cl−). A reference temperature of T = 300 K was enforced using the v-rescale thermostat 52 with a coupling constant of 1 ps separately on the protein, the membrane, and the solvent. A pressure of 1 bar was enforced separately in the xy plane and in z using a semi-isotropic Parrinello-Rahman barostat 53 with a time constant of 12 ps and a compressibility of 3 × 10⁻⁴ bar⁻¹.
To describe the assembly of the Mga2 homodimer, we used 28 interhelical pairwise distances between the backbone beads of the two helices together with the total number of interhelical contacts, the distance between the helix centers of mass, and a number of hand-tailored features describing the organization of lipids around the two helices (SI Table S5). To ensure that all network inputs lie approximately in [0, 1], we used the sigmoidal function f(r) = [1 − (r/R_0)⁶]/[1 − (r/R_0)¹²] with R_0 = 2 nm for all pairwise distances, while we scaled all lipid features using the minimal and maximal values taken along the transition. The assembled and disassembled states are defined as configurations with ≥ 130 interhelical contacts and with helix-helix center-of-mass distances d_CoM ≥ 3 nm, respectively.
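The contact featurization can be sketched directly; note that the ratio [1 − u]/[1 − u²] with u = (r/R_0)⁶ simplifies algebraically to 1/(1 + u), which removes the removable singularity at r = R_0 (function names are ours):

```python
import numpy as np

def contact_feature(r, R0=2.0):
    """Sigmoidal distance feature (1 - (r/R0)^6) / (1 - (r/R0)^12).
    Algebraically equal to 1/(1 + (r/R0)^6), avoiding the 0/0 at r = R0."""
    u = (np.asarray(r, dtype=float) / R0) ** 6
    return 1.0 / (1.0 + u)

# Maps distances into (0, 1]: ~1 at contact, exactly 1/2 at r = R0,
# and decaying toward 0 for r >> R0
vals = contact_feature(np.array([0.0, 2.0, 6.0]))
```

This keeps all pairwise-distance inputs in the same [0, 1] range as the min-max scaled lipid features.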
The neural network used to fit the committor is implemented in Keras 38 and consists of an initial 3-layer pyramidal part, in which the number of units per layer decreases from the 36 inputs to 6 in the last layer by a constant factor of (6/36)^(1/2), followed by 6 residual units 35,36, each with 4 layers and 6 neurons per layer. A dropout of 0.01 is applied to the inputs, and the network is trained using the Adam gradient-descent protocol with a learning rate of lr = 0.0001 37.
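A residual unit adds a learned correction to an identity mapping, y = x + F(x). The numpy forward pass below is a schematic stand-in for the Keras implementation; the tanh activation and the exact weight shapes are assumptions for illustration.

```python
import numpy as np

def residual_unit(x, weights, biases):
    # One residual unit: y = x + F(x), where F is a small stack of dense
    # layers (here tanh-activated, 6 neurons wide). The skip connection
    # keeps gradients well-behaved when many units are stacked.
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)
    return x + h
```

With all-zero weights the correction F(x) vanishes and the unit reduces to the identity, which is the usual sanity check for a residual block.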
To investigate the assembly mechanism of Mga2, we distributed our reinforcement-learning AI on multiple nodes of a high-performance computing cluster. A single AI guided 500 independent TPS chains, each of which ran on a single computing node. The 500 TPS simulations were initialized with random initial TPs. The neural network used to select the initial shooting points was trained on preliminary shooting attempts (8044 independent shots from 1160 different points). After two rounds (two steps in each of the 500 independent TPS chains), we updated the committor model by training on all new data. We retrained again after the sixth round. No further training was required, as indicated by consistent numbers of expected and observed counts of TPs. We performed another 14 rounds for all 500 TPS chains to harvest TPs. Shooting-point selection, TPS setup and neural-network training were fully automated in Python code using MDAnalysis 54,55, numpy 56 and our custom Python package.
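The consistency of expected and observed TP counts can be checked directly from the learned committor: in two-way shooting from a point with committor p, the two trajectory halves end in different states, and hence form a TP, with probability 2p(1 − p). A minimal sketch (the function name is ours):

```python
def expected_tp_count(committors):
    # Two-way shooting from x generates a transition path when the two
    # trajectory halves reach different metastable states, which for a
    # committor value p happens with probability 2 * p * (1 - p).
    return sum(2.0 * p * (1.0 - p) for p in committors)
```

Points near the transition state (p ≈ 1/2) contribute the most; points committed to either state (p ≈ 0 or 1) contribute essentially nothing, which is why committor-guided shooting-point selection accelerates TP harvesting.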
The input importance analysis revealed the total number of contacts n_contacts as the single most important input (SI Fig. S3). However, no expression generated by symbolic regression as a function of n_contacts alone reproduced the committor accurately. It is likely that the trained network uses n_contacts only as a binary switch to distinguish two different regimes, close to the bound state or close to the unbound state. We therefore restricted the input importance analysis to training points close to the unbound state. The results reveal that the network uses various interhelical contacts that approximately retrace a helical pattern (SI Fig. S3). We performed symbolic regression on all combinations of one, two, or three of the seven most important input coordinates (SI Table S6). The best expressions in terms of the loss were selected using validation committor data that had not been used during the optimization. This validation set consists of committor data for 516 configurations with 30 trial shots each and 32 configurations with 10 trial shots.
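Enumerating all one-, two-, and three-member subsets of the seven most important coordinates yields 7 + 21 + 35 = 63 candidate input sets for the symbolic-regression runs. A sketch with placeholder coordinate names:

```python
from itertools import combinations

coords = [f"x{i}" for i in range(1, 8)]  # placeholders for the top-7 inputs
candidate_sets = [c for k in (1, 2, 3) for c in combinations(coords, k)]
# 7 singles + 21 pairs + 35 triples = 63 symbolic-regression input sets
```

Each candidate set then defines one independent symbolic-regression run, ranked afterwards on the held-out validation committor data.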
To assess the variability in the observed reaction mechanisms, we performed a hierarchical clustering of all TPs projected into the plane defined by contacts 9 and 22, which enter the most accurate parametrization generated by symbolic regression. We used dynamic time warping 57 to calculate the pairwise similarity between all TPs for the clustering, which we performed using the scipy clustering module 58,59. To reflect the reactive flux 7, the path-density plots (Fig. 3f, g) are histogrammed according to the number of paths, not the number of configurations. If a cell is visited multiple times by a path, that path still contributes only once to the histogram in this cell.
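Counting each TP at most once per histogram cell can be implemented with a per-path boolean mask. The sketch below is illustrative; the bin count and extent are assumptions, not the values used for Fig. 3f, g.

```python
import numpy as np

def path_density(paths, bins=50, extent=(0.0, 1.0, 0.0, 1.0)):
    # Path-density histogram: each path contributes at most 1 to a cell,
    # no matter how many of its frames fall into that cell.
    x0, x1, y0, y1 = extent
    density = np.zeros((bins, bins))
    for path in paths:  # path: array of shape (n_frames, 2)
        ix = np.clip(((path[:, 0] - x0) / (x1 - x0) * bins).astype(int), 0, bins - 1)
        iy = np.clip(((path[:, 1] - y0) / (y1 - y0) * bins).astype(int), 0, bins - 1)
        visited = np.zeros((bins, bins), dtype=bool)
        visited[ix, iy] = True  # revisits overwrite, so each cell counts once
        density += visited
    return density
```

Summing boolean visitation masks rather than raw frame counts is what makes the histogram proportional to the number of reactive paths through each cell.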
CONCLUDING REMARKS: BEYOND MOLECULAR SELF-ORGANIZATION
AI-driven trajectory sampling is general and can immediately be adapted to sample many-body dynamics with a notion of "likely fate" similar to the committor. This fundamental concept of statistical mechanics extends from the game of chess 60 through protein folding 3,9 to climate modelling 61. The simulation engine (molecular dynamics in our case) is treated as a black box and can be replaced by other dynamic processes, reversible or not. Both the statistical model defining the loss function and the machine-learning technology can be tailored to specific problems. More sophisticated models will be able to learn more from less data or incorporate experimental constraints. Simpler regression schemes 5 can replace neural networks 13 when the cost of sampling trajectories severely limits the volume of training data.
AI-driven mechanism discovery readily integrates advances in machine learning applied to force fields 17,62, sampling [63][64][65], and molecular representation 17,62,66. Increasing computational power and advances in symbolic AI will enable algorithms to distill ever more accurate mathematical descriptions of the complex processes hidden in high-dimensional data 67. As illustrated here, autonomous AI-driven sampling and model validation combined with symbolic AI can support the scientific discovery process.

Table S1. Symmetry functions used to describe solvent configurations in the formation of lithium-chloride ion pairs. For symmetry functions of type G5 we used a total of 10 different parameter combinations for each value of r_s. A cutoff of r_cut = 1 nm is used in the Fermi cutoff function. x_i(r_i) = (1 − (r_i/(2 nm))⁶)/(1 − (r_i/(2 nm))¹²), where r_i is the distance between the i-th residue on each helix; index 0 corresponds to ASN1034, index 28

Fig. 1. Reinforcement learning of the ion-assembly mechanism. a, Schematic of AI-driven reinforcement learning by path sampling. b, Snapshots along a TP showing the formation of a LiCl ion pair (right to left) in an atomistic MD simulation. Water is shown as sticks, Li+ as a small sphere and Cl− as a large sphere. Atoms are colored according to their contribution to the reaction progress from low (blue) to high (red), as quantified by their contribution to the gradient of the reaction coordinate q(x|w). c, Self-consistency. Cumulative counts of the generated (blue line) and expected (orange dashed line) number of transition events. The green line shows the cumulative difference between the observed and expected counts. The inset shows a zoom-in on the first 1000 iterations. d, Validation of the learned committor. Cross-correlation between the committor predicted by the trained network and the committor obtained by repeated sampling from molecular configurations on which the AI did not train. The average of the sampled committors (blue line) and their standard deviation (orange shading) are calculated by binning along the predicted committor. e, Input relevance for all 221 input coordinates used for deep learning. f, Schematic depiction of the solvent reorientation around the Li+ ion as reported by the most relevant symmetry function x12. g, Simplified assembly models q0 and q1 obtained by symbolic regression at strict and gentle regularization, respectively. Scatter plots show independent validations of their accuracy. h, Transferability of the learned committor. Cross-correlation between sampled committors for a 1 M LiCl solution and the predictions of a committor trained on data for a single LiCl ion pair.

Fig. 2. Data-driven discovery of the methane-clathrate nucleation mechanism. a, Molecular configurations illustrating the nucleation process, extracted from an atomistic MD simulation in explicit solvent. The nucleus forms in water, grows, and leads to the clathrate crystal. 5¹²6² (blue) and 5¹² (red) water cages (lines) contain correspondingly colored methane molecules (spheres). Methane molecules near the growing solid nucleus are shown as green spheres, and water as gray sticks. Bulk water is not shown for clarity. b, Validation of the learned committor. Cross-correlation between the committor predicted by the trained network and the committor obtained by repeated sampling from molecular configurations on which the AI agent did not train. c, Input importance analysis. The three most important input coordinates are annotated. d, Data-driven quantitative mechanistic model distilled by symbolic regression reveals a switch in nucleation mechanism. Analytical iso-committor surfaces for n_w,0 = 2, T0 = 270 K, α = 0.0502, β = 3.17, γ = 0.109 K⁻¹, δ = 0.0149 (left to right: pB = 1/(1 + e⁻⁴), 1/2, 1/(1 + e⁴)). The structural insets illustrate the two competing mechanisms at low and high temperature.

Fig. 3. Competing pathways of transmembrane dimer assembly in a lipid membrane. a, Snapshots during a Mga2 dimerization event (right to left). The transmembrane helices are shown as orange surfaces, the lipid molecules in grey, and water in cyan. b, Self-consistency. Cumulative counts of the generated (blue line) and expected (orange dashed line) number of transitions. The green curve shows the cumulative difference between the observed and expected counts. c, Validation of the learned committor. Cross-correlation between the committor predicted by the trained network and the committor obtained by repeated sampling from molecular configurations on which the AI did not train. The average of the sampled committors (blue line) and their standard deviation (orange shading) are calculated by binning along the predicted committor. d, Schematic representation of the two most relevant coordinates, the interhelical contacts at positions 9 and 22. e, Hierarchical clustering of all TPs. Dendrogram as a function of TP similarity (dynamic time warping, see Methods) calculated in the plane defined by contacts 9 and 22. f, g, Path density (gray shading) for the two main clusters (f, green; g, orange in panel e) calculated in the plane defined by contacts 9 and 22. For each cluster, one representative TP is shown from unbound (yellow) to bound (red). The isolines of the committor, as predicted by the symbolic regression qB(x9, x22), are shown as labelled solid lines.

∑_{S=1}^{N} p_S = 1. The loss function l(w|θ) used in the training is the negative logarithm of the likelihood L.
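For the common two-state case with p_B(x|w) = σ(q(x|w)), this negative log-likelihood takes a cross-entropy form. The expression below is our reconstruction from the surrounding text, not a verbatim equation from the paper:

```latex
l(w|\theta) = -\ln L
            = -\sum_{i} \Big[ n_A^{(i)} \ln\big(1 - p_B(x_i|w)\big)
                            + n_B^{(i)} \ln p_B(x_i|w) \Big]
```

where n_A^{(i)} and n_B^{(i)} count the trial trajectories shot from configuration x_i that relax into states A and B, respectively.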
Interpolation and extrapolation in temperature of methane-clathrate nucleation models. Cross-correlation between the learned committor and the committor obtained by repeated sampling, for two models trained on only three of the four temperatures in the training set. (Left) Committor model trained only on T = 270 K, 275 K, and 285 K, i.e., leaving out T = 280 K, to assess the model's ability to interpolate. (Right) Model trained on T = 270 K, 275 K, and 280 K to assess the model's ability to extrapolate.

Figure S3. Input importance analyses for Mga2 transmembrane assembly, using all training points (top panel) and a subset with n_contacts < 0.01 (bottom panel), corresponding to training points close to the unbound state. The height of each bar is the average over 50 independent analyses; error bars indicate one standard deviation. All values are normalized to the largest importance in each set.

Table S3. Overview of the features used to describe methane-clathrate nucleation. Features are grouped by category; the indices refer to the inputs of the neural networks and the symbolic regression.