The optimization of reactions used to synthesize target compounds is pivotal to chemical research and discovery, whether in developing a route for manufacturing a life-saving medicine[1] or unlocking the potential of a new material[2]. But reaction optimization requires iterative experiments to balance the often conflicting effects of numerous coupled variables, and frequently involves finding the sweet spot among thousands of possible sets of experimental conditions. Expert synthetic chemists currently navigate this expansive experimental void using simplified model reactions, heuristic approaches and intuition derived from observation of experimental data[3]. Writing in Nature, Shields et al.[4] report machine-learning software that can optimize diverse classes of reaction with fewer iterations, on average, than are needed by humans.
Machine learning has emerged as a useful tool for various aspects of chemical synthesis, because it is ideally suited to extrapolating predictive models that are used to solve synthetic problems by recognizing patterns in multidimensional data sets[5]. However, chemists need to learn new skills to deploy machine learning correctly in their research, a requirement that has limited the widespread adoption of this approach. Shields et al. address this problem by reporting an open-source software toolkit that chemists can adopt easily.
A range of machine-learning methods is now available, and the first task when developing any new application is to choose the most appropriate method. The choice depends on the type of data (numbers, pictures and so on), the number of data points available to train the system, and the desired output[6]. Wrong choices can lead to false correlations during training, and to ineffective predictive models.
To train their model, Shields and colleagues selected a method that uses a machine-learning approach called Bayesian optimization. Bayesian-optimization algorithms have proved exceptionally effective in other applications, but the authors are among the first to develop a reaction-optimization toolkit that uses this approach. Their open-source software contains all the components necessary for researchers to carry out Bayesian reaction optimization for systems that have any number of experimental variables.
The toolkit first uses a simple workflow to carry out a quantum-mechanical calculation that encodes the reaction of interest in a machine-readable format involving what are known as chemical descriptors[7]. Reaction parameters that can be represented as a continuous series of numbers, such as temperature and concentration, are already in a form that can be interpreted by the algorithm. However, categorized reaction parameters, such as the identity of the solvent or catalyst, need to be provided by the chemist using one of several commonly applied molecular notations.
Each molecule in the reaction is then decomposed by the toolkit into a subset of numerical values that describe the molecule’s inherent chemical properties (molecular weight, charge density, bond strengths and so on), which can be interpreted by the algorithm[8]. Some of the biggest pitfalls in the application of machine-learning methods to chemical systems arise in the execution of this decomposition process. After multiple trials, Shields and co-workers arrived at a balanced approach that can be generalized for a variety of reactions involving many diverse chemicals.
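The encoding step can be pictured with a toy sketch. This is a generic illustration, not the authors' toolkit: real workflows derive descriptor values from quantum-mechanical calculations, whereas the small lookup table below simply lists a few well-known physical properties for three common solvents.

```python
# Illustrative descriptor table. In practice such values come from
# automated quantum-chemical calculations, not a hand-written lookup.
SOLVENT_DESCRIPTORS = {
    #        (molecular weight, dielectric constant, dipole moment / debye)
    "MeCN": (41.05, 37.5, 3.92),
    "THF":  (72.11, 7.6, 1.75),
    "DMSO": (78.13, 46.7, 3.96),
}

def encode_conditions(temperature_c, concentration_m, solvent):
    """Turn one set of reaction conditions into a flat numeric vector.

    Continuous variables (temperature, concentration) pass straight
    through; the categorical solvent choice is replaced by its chemical
    descriptors, so the algorithm sees only numbers, never names.
    """
    return [temperature_c, concentration_m, *SOLVENT_DESCRIPTORS[solvent]]

vec = encode_conditions(60.0, 0.1, "MeCN")
print(vec)  # [60.0, 0.1, 41.05, 37.5, 3.92]
```

The key design point is that every categorical choice ends up as the same fixed-length block of numbers, so conditions with different solvents remain directly comparable to the surrogate model.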
The second part of the workflow is the Bayesian-optimization step. As the authors’ work highlights, Bayesian algorithms are well suited for reaction optimization because they excel at handling relatively small data sets[9]. Starting from sparse data, the algorithm creates a surrogate model in an attempt to mathematically define how the input variables (reaction parameters) will affect the output target (the reaction yield or another measure of performance).
At first, the model provides a poor approximation of the reaction system, but the algorithm also evaluates what is learnt when new reaction data are acquired to test the effects of the variables. The algorithm therefore suggests a new experiment for chemists to run, providing specific values for the reaction variables. Once the data from that experiment are available, they are fed back into the algorithm, which updates the model. The cycle then continues until the reaction performance meets the specified target, or the reagents are exhausted.
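This propose–measure–update loop can be sketched in miniature. The code below is a generic illustration under simplifying assumptions, not the authors' software: it optimizes a single made-up variable (temperature) against a simulated yield, using a small Gaussian-process surrogate and an upper-confidence-bound acquisition rule, both implemented from scratch so the example has no dependencies.

```python
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

SIGNAL_VAR = 2500.0   # prior variance: yields span roughly 0-100%
NOISE_VAR = 1.0       # assumed experimental noise

def kernel(a, b, length=10.0):
    return SIGNAL_VAR * math.exp(-((a - b) ** 2) / (2 * length ** 2))

def gp_posterior(xs, ys, query):
    """Surrogate model: GP posterior (mean, variance) at each query point."""
    n = len(xs)
    K = [[kernel(xs[i], xs[j]) + (NOISE_VAR if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    out = []
    for q in query:
        ks = [kernel(q, x) for x in xs]
        mu = sum(k * a for k, a in zip(ks, alpha))
        var = kernel(q, q) - sum(k * w for k, w in zip(ks, solve(K, ks)))
        out.append((mu, max(var, 0.0)))
    return out

def simulated_yield(temp):
    """Hypothetical ground truth the algorithm cannot see: peak at 70 degC."""
    return 100.0 * math.exp(-((temp - 70.0) ** 2) / 200.0)

random.seed(0)
grid = [float(t) for t in range(0, 101, 5)]   # candidate temperatures
xs = random.sample(grid, 3)                   # initial random "experiments"
ys = [simulated_yield(x) for x in xs]

for _ in range(15):                           # propose-measure-update cycle
    candidates = [g for g in grid if g not in xs]
    post = gp_posterior(xs, ys, candidates)
    # Upper-confidence-bound acquisition: mean + 2 std balances exploiting
    # promising regions against exploring uncertain ones.
    nxt = max(zip(candidates, post),
              key=lambda t: t[1][0] + 2.0 * math.sqrt(t[1][1]))[0]
    xs.append(nxt)
    ys.append(simulated_yield(nxt))

print(f"best yield {max(ys):.1f} at {xs[ys.index(max(ys))]:.0f} degC")
```

Early iterations are dominated by the uncertainty term, so the loop probes unexplored temperatures; once a promising region is found, the mean term takes over and the proposals home in on the peak, mirroring the behaviour described above for real reactions.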
Shields et al. successfully applied this workflow to three reaction classes, in which the algorithm varied multiple reaction parameters, including the temperature, solvent and ligand (the molecule that binds to the metal centre of the catalyst). In each case, their algorithm successfully identified optimal reaction conditions using approximately 50 test experiments from a pool of up to 312,500 possible combinations of variables.
After tuning the algorithm using published data sets, the authors statistically evaluated its performance using an optimization game, in which the algorithm competed with expert chemists. The authors selected a reaction that would be optimized in the game, and then defined five reaction variables that could be altered, limiting the players to a fixed set of possibilities for each variable: 12 ligands, 4 bases, 4 solvents, 3 temperatures and 3 concentrations. The researchers then experimentally tested and measured the outcomes of all 1,728 possible combinations of variables.
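The size of such a search space is simply the product of the option counts, which a few lines verify (the option labels below are placeholders, not the actual reagents used in the game):

```python
from itertools import product

# Placeholder labels standing in for the game's real reagent choices.
ligands = [f"ligand_{i}" for i in range(1, 13)]   # 12 ligands
bases = [f"base_{i}" for i in range(1, 5)]        # 4 bases
solvents = [f"solvent_{i}" for i in range(1, 5)]  # 4 solvents
temperatures = ["low", "mid", "high"]             # 3 temperatures
concentrations = ["dilute", "standard", "conc"]   # 3 concentrations

# Every possible combination of the five variables.
space = list(product(ligands, bases, solvents, temperatures, concentrations))
print(len(space))  # 1728 = 12 * 4 * 4 * 3 * 3
```

Exhaustively measuring all 1,728 combinations, as the researchers did to build the game, is feasible only at this modest scale; it is precisely what the Bayesian approach avoids in larger spaces.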
Next, Shields and colleagues gave 50 expert chemists up to 100 attempts to carry out a virtual optimization: participants selected 5 combinations of variables, and were then shown the experimental outcomes, after which they could select a new batch of 5 combinations to try to achieve the maximum possible reaction yield. Likewise, the algorithm played the game 50 times, but each time starting with random experimental values. The experts made better initial choices, but the algorithm outperformed the players, on average, after the third batch of experiments (Fig. 1).
More notably, the algorithm consistently arrived at greater than 99% yields, which was possible only by using an unusual ligand that was not known to work well for the type of reaction targeted in the game. Overall, the game provided a crucial lesson in cognitive bias: most chemists ended the game early, having used only mainstream reagents, without realizing that they could have further improved the yields by making more-adventurous choices. Shields et al. have thus developed an accessible tool that could be used by non-experts to optimize a wide range of reactions. Importantly, the workflow carries researchers through the encoding and optimization processes for multiple chemistries without requiring changes to the code.
This empowering tool is poised to provide chemists with an alternative approach for reaction optimization, unlocking the many benefits of machine learning. Not only will it accelerate the pace of research in chemical synthesis, but it will also help to increase the range of reaction variables tested. Moreover, the tool eliminates the need for chemists to triage potential experimental variables to mitigate time and material costs. I expect this technology to be widely adopted, enabling rapid optimization of reaction conditions and discoveries off the beaten path.
Nature 590, 40-41 (2021)