Emerging opportunities of using large language models for translation between drug molecules and indications

A drug molecule is a substance that changes an organism’s mental or physical state. Every approved drug has an indication, which refers to the therapeutic use of that drug for treating a particular medical condition. While Large Language Models (LLMs), a generative Artificial Intelligence (AI) technique, have recently demonstrated effectiveness in translating between molecules and their textual descriptions, there remains a gap in research regarding their application to the translation between drug molecules and indications, in either direction. Addressing this challenge could greatly benefit the drug discovery process. The capability of generating a drug from a given indication would allow for the discovery of drugs targeting specific diseases or biological targets and ultimately provide patients with better treatments. In this paper, we first propose a new task, the translation between drug molecules and corresponding indications, and then test existing LLMs on this new task. Specifically, we consider nine variations of the T5 LLM and evaluate them on two public datasets obtained from ChEMBL and DrugBank. Our experiments show the early results of using LLMs for this task and provide a perspective on the state of the art. We also emphasize the current limitations and discuss future work that has the potential to improve performance on this task. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field in the era of generative AI.


Introduction
Drug discovery is a costly process 1 that identifies chemical entities with the potential to become therapeutic agents 2 . Due to its clear benefits and significance to health, drug discovery has become an active area of research, with researchers attempting to automate and streamline the process 3,4 . Approved drugs have indications, which refer to the use of a drug for treating a particular disease [5, Chapter 5]. The creation of molecules from indications, or vice versa, would allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field.
Large Language Models (LLMs) have become one of the major directions of generative Artificial Intelligence (AI) research, with highly performant models like GPT-3 6 , GPT-4 7 , LLaMA 8 , and Mixtral 9 developed in recent years and services like ChatGPT reaching over 100 million users 10,11 . LLMs utilize deep learning methods to perform various Natural Language Processing (NLP) tasks, such as text generation 12,13 and neural machine translation 14,15 . The capabilities of LLMs are due in part to their training on large-scale textual data, which makes the models familiar with a wide array of topics. LLMs have also demonstrated promising performance in a variety of tasks across different scientific fields 16-19 . Since LLMs work with textual data, the first step is usually finding a way to express a problem in terms of text or language.
An image or a diagram is a typical way to present a molecule, but methods for obtaining textual representations of molecules do exist. One such method is the Simplified Molecular-Input Line-Entry System (SMILES) 20 , which can be regarded as a language for describing molecules. Since SMILES strings represent drugs in textual form, we can assess the viability of LLMs in translating between drug molecules and their indications. In this paper, we consider two tasks: drug-to-indication and indication-to-drug, where we seek to generate indications from the SMILES strings of existing drugs, and SMILES strings from indications, respectively. Translation between drugs and the corresponding indications could help identify treatments for diseases that currently lack them and give clinicians more avenues for patient care. We also release the codebase for this study 1 .
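To make the textual nature of SMILES concrete, the snippet below parses and canonicalizes a SMILES string with the open-source RDKit toolkit; the molecule and settings are purely illustrative and are not part of our experimental pipeline.

```python
# Minimal illustration of SMILES as a textual molecular representation
# (assumes RDKit is installed, e.g. `pip install rdkit`).
from rdkit import Chem

smiles = "CC(=O)Oc1ccccc1C(=O)O"      # aspirin, written as a SMILES string
mol = Chem.MolFromSmiles(smiles)      # parse the text into a molecule object
print(Chem.MolToSmiles(mol))          # canonical SMILES form
print(mol.GetNumAtoms())              # 13 heavy atoms
```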
Research efforts have attempted de novo drug discovery through the use of AI and, more recently, forms of generative AI 21 . There are numerous existing efforts for molecular design and drug discovery using AI, such as GPT-based models that take scaffold SMILES strings accompanied by desired properties of the output molecule 22 . Others have used the T5 architecture for various tasks, such as reaction prediction 23 and converting between molecular captions and SMILES strings 24 . Additional work in the field is centered around the generation of new molecules from gene expression signatures using generative adversarial networks 25 , training recurrent neural networks on known compounds and their SMILES strings and then fine-tuning them for specific agonists of certain receptors 26 , and using graph neural networks to predict drugs and their corresponding indications from SMILES 27 . As such, there is an established promise in using AI for drug discovery and molecular design. Efforts to make data more amenable to AI generation of drugs include the development of Self-Referencing Embedded Strings (SELFIES) 28 , which can represent every valid molecule. The reasoning is that such a format allows generative AI to construct valid molecules while maintaining crucial structural information in the string. Together, these efforts set the stage for our attempt at generating drug indications from molecules.
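For readers unfamiliar with SELFIES, the sketch below shows a round trip between SMILES and SELFIES using the open-source `selfies` package; the example molecule is arbitrary and independent of our datasets.

```python
# Round trip between SMILES and SELFIES (assumes `pip install selfies`).
import selfies as sf

smiles = "CC(C)Cc1ccc(C(C)C(=O)O)cc1"   # ibuprofen
selfies_str = sf.encoder(smiles)         # SMILES -> SELFIES
recovered = sf.decoder(selfies_str)      # SELFIES -> SMILES; decoding always yields a valid molecule
print(selfies_str)
print(recovered)
```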
With advancements in medicinal chemistry leading to an increasing number of drugs designed for complex processes, it becomes crucial to comprehend the distinctive characteristics and subtle nuances of each drug. This has led to the development of molecular fingerprints, such as the Morgan fingerprint 29 and the MAP4 fingerprint 30 , which use dedicated algorithms to vectorize the characteristics of a molecule. Fingerprint representations are rapid to compute and retain many of the features of a molecule 31 . Molecular fingerprinting methods commonly receive input in the form of SMILES strings, which serve as a linear notation for representing molecules in their structural forms, taking into account the different atoms present, the bonds between atoms, and other key characteristics such as branches, cyclic structures, and aromaticity 20 . Since SMILES is a universal method of communicating the structure of different molecules, it is appropriate to use SMILES strings for generating fingerprints. Mol2vec 32 converts molecules into sentence-like textual representations derived from Morgan fingerprints and feeds them to the Word2vec 33 algorithm. BERT 34 -based models have also been used for obtaining molecular representations, including MolBERT 35 and ChemBERTa 36 , which are pretrained BERT instances that take SMILES strings as input and perform downstream tasks on molecular representation and molecular property prediction, respectively. Other efforts in using AI for molecular representations include generating novel molecular graphs through the use of reinforcement learning, decomposition, and reassembly 37 , and predicting 3D representations of small molecules from their 2D graphical counterparts 38 .
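As an illustration of how a SMILES string maps to a fingerprint, the snippet below computes a Morgan fingerprint with RDKit; the radius and bit size are common defaults rather than settings from any specific study.

```python
# Morgan fingerprint from a SMILES string (assumes RDKit is installed).
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(C)Cc1ccc(C(C)C(=O)O)cc1")        # ibuprofen
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048) # radius 2, 2048-bit vector
print(fp.GetNumOnBits())                                       # number of substructure bits set
```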
In this paper, we evaluate the capabilities of MolT5, a T5-based model, in translating between drugs and their indications through the two tasks, drug-to-indication and indication-to-drug, using drug data from DrugBank and ChEMBL. The drug-to-indication task takes the SMILES strings of existing drugs as input, with the matching indications of the drug as the target output. The indication-to-drug task takes the set of indications for a drug as input and seeks to generate the corresponding SMILES string for a drug that treats the listed conditions.
We employ all available MolT5 model sizes for our experiments and evaluate them separately across the two datasets. Additionally, we perform the experiments under three different configurations:
1. Evaluation of the baseline models on the entire available dataset.
2. Evaluation of the baseline models on a 20% subset of the dataset.
3. Fine-tuning the models on the remaining 80% of the dataset, followed by evaluation on the 20% subset.
Larger MolT5 models outperformed the smaller ones across all configurations and tasks. It should also be noted that fine-tuning the MolT5 models had a negative impact on performance.
Following these preliminary experiments, we train the smallest available MolT5 model from scratch using a custom tokenizer. This custom model performed better on DrugBank data than on ChEMBL data on the drug-to-indication task, perhaps due to a stronger signal between drug indications and SMILES strings in DrugBank, owing to the level of detail in its indication descriptions. Fine-tuning the custom model on 80% of either dataset did not degrade performance for either task, and some metrics saw improvement due to fine-tuning. However, fine-tuning for the indication-to-drug task did not consistently improve performance on either the ChEMBL or the DrugBank dataset.
While the performance of the custom tokenizer approach is still poor, there is promise in using a larger model and having access to more data.If we have a wealth of high-quality data to train models on translation between drugs and their indications, it may be possible to improve performance and facilitate novel drug discovery with LLMs.

Evaluation of MolT5 Models
We performed initial experiments using MolT5 models from HuggingFace 2,3,4 . MolT5 offers three model sizes as well as fine-tuned models of each size, which support the two tasks in our experiments. For experiments generating drug indications from SMILES strings (drug-to-indication), we used the fine-tuned MolT5-smiles-to-caption models, and for generating SMILES strings from drug indications (indication-to-drug), we used the MolT5-caption-to-smiles models. In each of our Tables, we use the following flags: FT (denotes experiments where we fine-tuned the models on 80% of the dataset and evaluated on the remaining 20% test subset), SUB (denotes experiments where the models are evaluated solely on the 20% test subset), and FULL (denotes experiments evaluating the models on the entirety of each dataset).
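For reference, a model of this family can be loaded and queried in a few lines with HuggingFace Transformers. The checkpoint name below is one of the publicly released MolT5 checkpoints (under the laituan245 namespace); the generation settings are illustrative and not necessarily those used in our experiments.

```python
# Sketch: generating a caption/indication-style text from a SMILES string
# with a released MolT5 checkpoint (assumes `pip install transformers`).
from transformers import T5Tokenizer, T5ForConditionalGeneration

name = "laituan245/molt5-small-smiles2caption"   # publicly released MolT5 checkpoint
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

smiles = "CC(=O)Oc1ccccc1C(=O)O"                 # aspirin
inputs = tokenizer(smiles, return_tensors="pt")
ids = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```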
For evaluating drug-to-indication, we employ the natural language generation metrics BLEU 39 , ROUGE 40 , and METEOR 41 , as well as the Text2Mol 42 metric, which computes similarity scores for SMILES-indication pairs. For evaluating indication-to-drug, we measure exact SMILES string matches, Levenshtein distance 43 , SMILES BLEU scores, the Text2Mol similarity metric, three molecular fingerprint metrics (MACCS, RDK, and Morgan FTS, where FTS stands for fingerprint Tanimoto similarity 44 ), and the proportion of returned SMILES strings that are valid molecules. The final metric for evaluating SMILES generation is the Fréchet ChemNet Distance (FCD), which measures the distance between two distributions of molecules based on their SMILES strings 45 . Tables 1 and 2 show the results of MolT5 experiments on DrugBank and ChEMBL data for drug-to-indication, respectively. Larger models tended to perform better across all metrics for each experiment. Across almost all metrics for the drug-to-indication task, on both the DrugBank and ChEMBL datasets, the models performed best on the 20% subset, while both the subset and full-dataset evaluations yielded better results than the fine-tuning experiments. As MolT5 models are trained on molecular captions, fine-tuning on indications could introduce noise and weaken the signal between input and target text. The models performed better on DrugBank data than on ChEMBL data, which may be due to the level of detail provided by DrugBank for its drug indications. Tables 3 and 4 show the results of MolT5 experiments on DrugBank and ChEMBL data for indication-to-drug, respectively. The tables indicate that fine-tuning the models on the new data worsens performance, reflected in FT experiments yielding worse results than SUB or FULL experiments. Also, larger models tend to perform better across all metrics for each experiment.
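The snippet below sketches how several of the indication-to-drug metrics can be computed for a single reference/prediction pair using RDKit and the python-Levenshtein package; it approximates the MolT5 evaluation protocol rather than reproducing it exactly, and the example strings are arbitrary.

```python
# Per-pair SMILES generation metrics: exact match, Levenshtein distance,
# validity, and fingerprint Tanimoto similarities (MACCS, RDK, Morgan).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys
import Levenshtein

def smiles_metrics(reference: str, prediction: str) -> dict:
    metrics = {
        "exact": reference == prediction,
        "levenshtein": Levenshtein.distance(reference, prediction),
    }
    ref, pred = Chem.MolFromSmiles(reference), Chem.MolFromSmiles(prediction)
    metrics["valid"] = pred is not None
    if ref is not None and pred is not None:
        metrics["maccs_fts"] = DataStructs.TanimotoSimilarity(
            MACCSkeys.GenMACCSKeys(ref), MACCSkeys.GenMACCSKeys(pred))
        metrics["rdk_fts"] = DataStructs.TanimotoSimilarity(
            Chem.RDKFingerprint(ref), Chem.RDKFingerprint(pred))
        metrics["morgan_fts"] = DataStructs.TanimotoSimilarity(
            AllChem.GetMorganFingerprintAsBitVect(ref, 2),
            AllChem.GetMorganFingerprintAsBitVect(pred, 2))
    return metrics

print(smiles_metrics("CC(=O)Oc1ccccc1C(=O)O", "CC(=O)Oc1ccccc1C(=O)OC"))
```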

Evaluation of MolT5 with Custom Tokenizer
In our drug-to-indication and indication-to-drug experiments, we see that fine-tuning causes the models to perform worse across all metrics, while larger models perform better on our tasks. In our custom tokenizer experiments, however, we pretrain MolT5-Small without the additional Smiles-to-Caption or Caption-to-Smiles fine-tuning. By fine-tuning this custom pretrained model on our data for drug-to-indication and indication-to-drug, we aim to see improved results. Tables 5 and 6 show the evaluation of MolT5 pretrained with the custom tokenizer on the drug-to-indication and indication-to-drug tasks, respectively. For drug-to-indication, the model performed better on the DrugBank dataset, reflected across all metrics. This performance difference may be due to a stronger signal between drug indications and SMILES strings in DrugBank, as its drug indication text goes into great detail. Fine-tuning the model on 80% of either dataset did not worsen drug-to-indication performance as it did in the baseline results, and some metrics showed improved results. The results for indication-to-drug are more mixed: the model does not consistently perform better on either dataset, and fine-tuning affects the evaluation metrics inconsistently.

Discussion
In this paper, we proposed a novel task of translating between drugs and indications, considering both drug-to-indication and indication-to-drug subtasks. We focus on generating indications from the SMILES strings of existing drugs and generating SMILES strings from sets of indications. Our experiments are the first attempt at tackling this problem. After conducting experiments with various model configurations and two datasets, we identified potential issues that need further work. We believe that properly addressing these issues could significantly improve performance on the proposed tasks.
The signal between SMILES strings and indications is poor. In the original MolT5 task (translation between molecules and their textual descriptions), "similar" SMILES strings often had similar textual descriptions. In the drug-to-indication and indication-to-drug tasks, similar SMILES strings might have completely different textual descriptions, because they are different drugs whose indications also differ. A similar observation holds in the other direction: drugs with very different SMILES strings may have similar indications. The lack of a direct relationship between drugs and indications makes it hard to achieve high performance on the proposed tasks. We hypothesize that introducing an intermediate representation that drugs (or indications) map to may improve performance. For example, mapping a SMILES string to its caption (the original MolT5 task) and then mapping the caption to an indication may be a potential future direction of research.
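A hypothetical two-stage pipeline for this idea is sketched below: a released MolT5 checkpoint maps a SMILES string to a caption, and a second, caption-to-indication model (which does not exist yet and would have to be trained; its name here is a placeholder) maps the caption to an indication.

```python
# Hypothetical SMILES -> caption -> indication pipeline; the second checkpoint
# name is a placeholder for a model that would need to be trained.
from transformers import T5Tokenizer, T5ForConditionalGeneration

def translate(checkpoint: str, text: str) -> str:
    tok = T5Tokenizer.from_pretrained(checkpoint)
    model = T5ForConditionalGeneration.from_pretrained(checkpoint)
    ids = model.generate(**tok(text, return_tensors="pt"), num_beams=5, max_length=256)
    return tok.decode(ids[0], skip_special_tokens=True)

caption = translate("laituan245/molt5-small-smiles2caption", "CC(=O)Oc1ccccc1C(=O)O")
indication = translate("my-org/caption-to-indication", caption)  # hypothetical second stage
print(indication)
```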
The signal between drugs and indications is not the only issue: the data is also scarce. Since we do not consider arbitrary molecules and their textual descriptions but drugs and their indications, the available data is limited by the number of drugs. For both the ChEMBL and DrugBank datasets, the number of drug-indication pairs was under 10,000, with the combined size also being under 10,000. Finding ways to enrich the data may help establish a signal between SMILES strings and indications and is a potential avenue for future exploration.
Overall, the takeaway from our experiments is that the custom tokenizer approach has promise and benefits from fine-tuning, but its performance has yet to meet our expectations. We also see in our baseline experiments that larger models tend to perform better. By using a larger model and having more data (or data that has a stronger signal between drug indications and SMILES strings), we may be able to successfully translate between drug indications and molecules (i.e., SMILES strings) and ultimately facilitate novel drug discovery.

Methods
This section describes the datasets, analysis methods, ML models, and feature extraction techniques used in this study. Figure 1 shows a flowchart of the process. We adjust the workflow of existing models for generating molecular captions to instead generate indications for drugs. By training LLMs on the connection between SMILES strings and drug indications, we aim to one day be able to create novel drugs that treat specific medical conditions. Our data comes from two databases, DrugBank 46 and ChEMBL 47 , which we selected due to the different ways they represent drug indications. DrugBank gives in-depth descriptions of how each drug treats patients, while ChEMBL provides a list of medical conditions each drug treats. Table 7 outlines the size of each dataset, as well as the length of the SMILES and indication data. In the case of DrugBank, we had to request access to use the drug indication and SMILES data. The ChEMBL data was available without request but required setting up a local database to query and parse the drug indications and SMILES strings into a workable format. Finally, we prepared a pickle file for both databases to allow for metric calculation following the steps presented in MolT5 24 .
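The sketch below illustrates this preparation step: pairing SMILES strings with indications and serializing them to a pickle file. The file names and column names are hypothetical placeholders, not the actual DrugBank or ChEMBL schemas.

```python
# Illustrative preparation of drug-indication pairs for metric calculation;
# paths and column names are placeholders.
import pandas as pd

df = pd.read_csv("chembl_drug_indications.csv")            # hypothetical local export
df = df.dropna(subset=["canonical_smiles", "indication"])  # keep complete pairs only
pairs = df[["canonical_smiles", "indication"]].drop_duplicates()
pairs.to_pickle("chembl_pairs.pkl")                        # consumed by the evaluation scripts
print(len(pairs), "drug-indication pairs")
```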

Models
We conducted initial experiments using the MolT5 model, based on the T5 architecture 24 . The T5 basis of the model gives it textual modality from pretraining on the natural language text dataset Colossal Clean Crawled Corpus (C4) 48 , and the pretraining on 100 million SMILES strings from the ZINC-15 dataset 49 gives the model molecular modality.
In our experiments, we utilized fine-tuned versions of the available MolT5 models: Smiles-to-Caption, fine-tuned for generating molecular captions from SMILES strings, and Caption-to-Smiles, fine-tuned for generating SMILES strings from molecular captions. However, we seek to evaluate the model's capacity to translate between drug indications and SMILES strings. Thus, we use drug indications in the place of molecular captions, yielding our two tasks: drug-to-indication and indication-to-drug.
Our experiments begin by evaluating the baseline MolT5 model for each task on the entirety of the available data (3004 pairs for DrugBank, 6127 pairs for ChEMBL) and on a 20% subset of the data (601 pairs for DrugBank, 1225 pairs for ChEMBL). We then fine-tune the model on the remaining 80% of the data (2403 pairs for DrugBank, 4902 pairs for ChEMBL) and evaluate on that same 20% subset.
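A compact sketch of the split and fine-tuning setup is shown below using HuggingFace Transformers and Datasets; the file name, column names, and hyperparameters are illustrative assumptions rather than the exact values used in our experiments.

```python
# Sketch: 80/20 split and drug-to-indication fine-tuning of a MolT5 checkpoint.
import pandas as pd
from sklearn.model_selection import train_test_split
from datasets import Dataset
from transformers import (T5Tokenizer, T5ForConditionalGeneration,
                          DataCollatorForSeq2Seq, Seq2SeqTrainingArguments,
                          Seq2SeqTrainer)

pairs = pd.read_pickle("drugbank_pairs.pkl")               # hypothetical pairs file
train_df, test_df = train_test_split(pairs, test_size=0.2, random_state=42)

name = "laituan245/molt5-small-smiles2caption"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

def tokenize(batch):
    # drug-to-indication: SMILES string in, indication text out
    enc = tokenizer(batch["canonical_smiles"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(batch["indication"], truncation=True, max_length=512)["input_ids"]
    return enc

train_ds = Dataset.from_pandas(train_df).map(tokenize, batched=True)
args = Seq2SeqTrainingArguments(output_dir="molt5-ft", num_train_epochs=3,
                                per_device_train_batch_size=8, learning_rate=1e-4)
Seq2SeqTrainer(model=model, args=args, train_dataset=train_ds,
               data_collator=DataCollatorForSeq2Seq(tokenizer, model=model)).train()
```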
After compiling the results of the preliminary experiments, we decided to use a custom tokenizer with the MolT5 model architecture. While the default tokenizer leverages the T5 pretraining on English text, our reasoning is that treating SMILES as a language in its own right and tokenizing strings into their grammatical components could improve the model's understanding of SMILES strings and thus improve performance.

MolT5 with Custom Tokenizer
The tokenizer we selected for custom pretraining of MolT5 came from previous work on adapting transformers for SMILES strings 50 . This tokenizer separates SMILES strings into their individual components, such as atoms and bonds. Figure 2 illustrates the behavior of both the MolT5 and custom tokenizers. Due to computational limits, we only performed custom pretraining of the smallest available MolT5 model, with 77 million parameters. Our pretraining approach utilized the model configuration of MolT5 and JAX/Flax to execute the span-masked language model objective on the ZINC dataset 49 . Following pretraining, we assessed model performance on both datasets. The experiments comprised the same three conditions as before: evaluation on the full dataset, evaluation on the 20% test subset, and fine-tuning on 80% of the data (2403 pairs for DrugBank, 4902 pairs for ChEMBL) followed by evaluation on the 20% subset.

Figure 1. Overview of the methodology of the experiments: drug data is compiled from ChEMBL and DrugBank and utilized as input for MolT5. Our experiments involved two tasks: drug-to-indication and indication-to-drug. For drug-to-indication, SMILES strings of existing drugs were used as input, producing drug indications as output. Conversely, for indication-to-drug, drug indications of the same set of drugs were the input, resulting in SMILES strings as output. Additionally, we augmented MolT5 with a custom tokenizer in pretraining and evaluated the resulting model on the same tasks.

Figure 2. MolT5 and custom tokenizers: the MolT5 tokenizer uses the default English-language tokenization and splits the input text into subwords. The intuition is that SMILES strings are composed of characters typically found in English text, and pretraining on large-scale English corpora may be helpful. The custom tokenizer, on the other hand, utilizes the grammar of SMILES and decomposes the input into grammatically valid components.
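As an illustration of grammar-aware tokenization, the sketch below splits a SMILES string into atom, bond, ring-closure, and branch tokens with a small regular expression; the actual tokenizer used in this work (from prior work on transformers for SMILES) has its own, more complete vocabulary.

```python
# Minimal regex-based SMILES tokenizer in the spirit of the custom tokenizer;
# the pattern is illustrative and less complete than the one used in the study.
import re

SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|@|%\d{2}|[BCNOFPSIbcnops]|[()=#\-\+\\/:~\*\$\.]|\d)"
)

def tokenize_smiles(smiles: str) -> list:
    tokens = SMILES_TOKEN.findall(smiles)
    assert "".join(tokens) == smiles, "pattern failed to cover the input"
    return tokens

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
# ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1',
#  'C', '(', '=', 'O', ')', 'O']
```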

Table 5. Results for MolT5 augmented with custom tokenizer, drug-to-indication.

Table 6. Results for MolT5 augmented with custom tokenizer, indication-to-drug.