Learning grammars of molecules to build them in the lab

Researchers generate molecular structures using machine learning algorithms, trained on smaller datasets

We think of molecules as occurring in nature. Large macromolecules lead us to the basis of life. The twentieth century gave us new materials synthesised in the lab. We can now have designer molecules, where we formulate a wish list of properties for a material (say, a desired tensile strength as well as flexibility) and seek not merely to discover, but also to construct, molecules that exhibit such properties. Generating molecules computationally involves the use of Artificial Intelligence (AI) and machine learning algorithms that require large datasets to train on. Moreover, the molecules thus designed may be hard to synthesise. So, the challenge is to circumvent these shortfalls.

Now, researchers from the Massachusetts Institute of Technology (MIT) and International Business Machines (IBM) have together devised a method to generate molecules computationally that combines the power of machine learning with what are called graph grammars. This approach requires much smaller datasets (about 100 samples in place of 81,000, as the researchers mention) and builds up molecules bottom-up. The group has demonstrated the method on the naphthalene diisocyanate molecule in a paper that has been reviewed and accepted for presentation at the International Conference on Learning Representations (ICLR 2022).

Artificial intelligence (AI) techniques, especially the use of machine learning algorithms, are in vogue today to find new molecular structures. These methods require tens of thousands of samples to train the neural networks. Also, the designed molecules may not be physically synthesisable. Ensuring synthesisability in these methods may need the incorporation of chemical knowledge, and extracting such knowledge from datasets is a significant challenge.

Chemical datasets with the required properties may be very small. For instance, some researchers reported in 2019 that datasets on polyurethane property prediction have as few as 20 samples.

If we surmount all these challenges, there is a further problem with typical machine learning algorithms, which is that we cannot explain their results. That is, after discovering a molecule, we cannot figure out how we came up with it. The implication is that if we slightly change the desired properties, we may need to search all over again. Explainable AI is considered one of the grand challenges of contemporary AI research.

One alternative to such deep learning methods is the use of formal grammars. A grammar, in the context of languages, provides rules for how sentences can be constructed from words. We can design chemical grammars that specify rules for constructing molecules from atoms. In the last few years, several research teams have built such grammars. While this approach is promising, it calls for extensive expertise in chemistry, and after the grammar is built, incorporating properties from datasets, or optimisation, is hard.
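To make the idea concrete, here is a purely illustrative sketch, not any of the published chemical grammars: a handful of hypothetical production rules that rewrite abstract symbols into SMILES-like molecular fragments, together with a function that expands a starting symbol into a "molecule" string.

```python
import random

# Toy, purely illustrative grammar (hypothetical rules and fragment names):
# each rule rewrites an abstract symbol into molecular fragments written as
# SMILES-like strings.
toy_rules = {
    "MOL": ["RING CHAIN", "CHAIN"],   # a molecule is a ring plus a chain, or just a chain
    "RING": ["c1ccccc1"],             # an aromatic ring fragment
    "CHAIN": ["CC", "CCO"],           # short carbon chains
}

def expand(symbol, rng=random):
    """Recursively expand a grammar symbol into a concrete fragment string."""
    if symbol not in toy_rules:                  # terminal: already a fragment
        return symbol
    production = rng.choice(toy_rules[symbol])   # pick one rule for this symbol
    return "".join(expand(part, rng) for part in production.split())

print(expand("MOL"))  # e.g. 'c1ccccc1CC' or 'CCO'
```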

The researchers instead use mathematical objects called graph grammars for this purpose.

What mathematicians call graphs are networks or webs with nodes and edges between them. In this approach, a molecule is represented as a graph whose nodes are strings of atoms and whose edges are chemical bonds. A grammar for such structures tells us how to replace a string in a node with a whole molecular structure. Parsing a structure thus means contracting some substructure, and we keep doing this repeatedly until we are left with a single node.
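A minimal sketch of this representation, assuming the networkx library and using hypothetical fragment labels, shows a toy three-node "molecule" and one parsing step in which a substructure is contracted into a single node.

```python
import networkx as nx

# Toy "molecule": three labelled nodes joined by edges that stand for bonds.
# The fragment labels (CH3, CH2, OH) are hypothetical, chosen only to illustrate.
mol = nx.Graph()
mol.add_node("A", label="CH3")
mol.add_node("B", label="CH2")
mol.add_node("C", label="OH")
mol.add_edge("A", "B")   # bond between A and B
mol.add_edge("B", "C")   # bond between B and C

# One parsing step: contract nodes A and B into a single node (kept under the name A).
parsed = nx.contracted_nodes(mol, "A", "B", self_loops=False)
print(list(parsed.nodes()))  # two nodes remain: the contracted node and C
print(list(parsed.edges()))  # a single edge remains between them
# Repeating such contractions until one node is left is what "parsing" means here.
```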

The model uses machine learning techniques to learn graph grammars from datasets. The algorithm takes as input a set of molecular structures and a set of evaluation metrics (for example, synthesisability).

The grammar is constructed bottom-up, creating rules by contractions; the choice of which structures to contract is made by the learning component, a neural network that draws on the chemical information. The algorithm performs multiple randomised searches simultaneously to obtain several candidate grammars, which are then evaluated using the input metrics.
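The overall loop might look something like the following self-contained sketch. It is illustrative only and not the authors' algorithm: toy "molecules" are sequences of fragment strings, each contraction is recorded as a rule, and a simple rule-count score stands in for both the neural guidance and the evaluation metrics (such as synthesisability) described above.

```python
import random

def contract_once(fragments, rng):
    """Merge one randomly chosen adjacent pair of fragments; return the new list and the rule used."""
    i = rng.randrange(len(fragments) - 1)
    merged = fragments[i] + fragments[i + 1]
    rule = (merged, (fragments[i], fragments[i + 1]))   # merged node -> the two parts it came from
    return fragments[:i] + [merged] + fragments[i + 2:], rule

def learn_candidate_grammars(molecules, n_searches=5, seed=0):
    """Run several randomised bottom-up searches and keep the most compact grammar found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_searches):
        rules = set()
        for mol in molecules:
            frags = list(mol)
            while len(frags) > 1:                # parse down to a single node
                frags, rule = contract_once(frags, rng)
                rules.add(rule)
        # Toy evaluation metric: prefer grammars with fewer distinct rules (more reuse).
        if best is None or len(rules) < len(best):
            best = rules
    return best

# Two toy "molecules" given as sequences of fragments.
print(learn_candidate_grammars([["C", "C", "O"], ["C", "O", "O"]]))
```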

While the method has been demonstrated for building molecules, its applications could be far-reaching, beyond chemistry.

(The writer is a computer scientist, formerly with The Institute of Mathematical Sciences, Chennai, and currently visiting professor at Azim Premji University, Bengaluru.)
