Recipe extraction by artificial intelligence aids materials fabrication
In recent years, research efforts such as the Materials Genome Initiative and the Materials Project have produced a wealth of computational tools for designing new materials useful for a range of applications, from energy and electronics to aeronautics and civil engineering.
But developing processes for producing those materials has continued to depend on a combination of experience, intuition, and manual literature reviews.
A team of researchers at MIT, the University of Massachusetts at Amherst, and the University of California at Berkeley hopes to close that materials-science automation gap with a new artificial-intelligence system that pores through research papers to deduce “recipes” for producing particular materials.
“Computational materials scientists have made a lot of progress in the ‘what’ to make — what material to design based on desired properties,” says Elsa Olivetti, the Atlantic Richfield Assistant Professor of Energy Studies in MIT’s Department of Materials Science and Engineering (DMSE). “But because of that success, the bottleneck has shifted to, ‘Okay, now how do I make it?’”
The researchers envision a database that contains materials recipes extracted from millions of papers. Scientists and engineers could enter the name of a target material and any other criteria — precursor materials, reaction conditions, fabrication processes — and pull up suggested recipes.
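If such a database existed, a query against it might look something like the sketch below. The schema, field names, and the single sample entry are purely hypothetical, invented here only to illustrate the kind of lookup the researchers describe.

```python
# Purely hypothetical sketch of querying a materials-recipe database.
# The schema, field names, and the one example entry are invented for
# illustration; they do not come from the researchers' actual system.
recipes = [
    {
        "target": "BaTiO3",
        "precursors": ["BaCO3", "TiO2"],
        "operations": [("ball-mill", "2 h"), ("calcine", "900 °C, 4 h")],
        "source": "placeholder reference",  # not a real citation
    },
]

def find_recipes(target, required_precursor=None):
    """Return entries for a target material, optionally requiring a precursor."""
    hits = [r for r in recipes if r["target"] == target]
    if required_precursor is not None:
        hits = [r for r in hits if required_precursor in r["precursors"]]
    return hits

print(find_recipes("BaTiO3", required_precursor="TiO2"))
```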
As a step toward realizing that vision, Olivetti and her colleagues have developed a machine-learning system that can analyze a research paper, deduce which of its paragraphs contain materials recipes, and classify the words in those paragraphs according to their roles within the recipes: names of target materials, numeric quantities, names of pieces of equipment, operating conditions, descriptive adjectives, and the like.
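As a rough illustration of that word-level labeling, the output for a single synthesis sentence might look something like the sketch below. The tag names here are hypothetical, not the label set used in the published work.

```python
# Hypothetical sketch of word-level role labeling for one synthesis sentence.
# The tag names (TARGET, OPERATION, QUANTITY, UNIT, EQUIPMENT, DESCRIPTOR)
# are illustrative only.
labeled_tokens = [
    ("The", "OTHER"),
    ("BaTiO3", "TARGET"),      # name of the target material
    ("powder", "DESCRIPTOR"),  # descriptive word
    ("was", "OTHER"),
    ("calcined", "OPERATION"), # synthesis operation
    ("in", "OTHER"),
    ("a", "OTHER"),
    ("tube", "EQUIPMENT"),
    ("furnace", "EQUIPMENT"),  # piece of equipment
    ("at", "OTHER"),
    ("900", "QUANTITY"),       # numeric quantity
    ("°C", "UNIT"),
    ("for", "OTHER"),
    ("4", "QUANTITY"),
    ("h", "UNIT"),
]

for token, role in labeled_tokens:
    print(f"{token:10s} -> {role}")
```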
In a paper appearing in the latest issue of the journal Chemistry of Materials, they also demonstrate that a machine-learning system can analyze the extracted data to infer general characteristics of classes of materials, such as the temperature ranges their synthesis requires, as well as characteristics of individual materials, such as the different physical forms they take when their fabrication conditions vary.
Olivetti is the senior author on the paper, and she’s joined by Edward Kim, an MIT graduate student in DMSE; Kevin Huang, a DMSE postdoc; Adam Saunders and Andrew McCallum, computer scientists at UMass Amherst; and Gerbrand Ceder, a Chancellor’s Professor in the Department of Materials Science and Engineering at Berkeley.
Filling in the gaps
The researchers trained their system using a combination of supervised and unsupervised machine-learning techniques. “Supervised” means that the training data fed to the system is first annotated by humans; the system tries to find correlations between the raw data and the annotations. “Unsupervised” means that the training data is unannotated, and the system instead learns to cluster data together according to structural similarities.
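A minimal sketch of that distinction, using scikit-learn and toy data invented for illustration, is shown below: a classifier learns from human-provided labels, while a clustering algorithm groups the same data without any labels at all.

```python
# Minimal sketch of supervised vs. unsupervised learning with scikit-learn.
# The toy feature vectors and labels are invented; they are not the
# representations used in the actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy 2-D feature vectors standing in for paragraph representations.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])

# Supervised: human annotations (1 = "contains a recipe", 0 = does not).
y = np.array([1, 1, 0, 0])
clf = LogisticRegression().fit(X, y)         # learns from the annotations
print(clf.predict([[0.15, 0.85]]))           # -> [1]

# Unsupervised: no annotations; data is grouped by structural similarity.
km = KMeans(n_clusters=2, n_init=10).fit(X)  # clusters without labels
print(km.labels_)                            # cluster ids, e.g. [1 1 0 0]
```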
Because materials-recipe extraction is a new area of research, Olivetti and her colleagues didn’t have the luxury of large, annotated data sets accumulated over years by diverse teams of researchers. Instead, they had to annotate their data themselves — ultimately, about 100 papers.
By machine-learning standards, that’s a pretty small data set. To improve it, they used an algorithm developed at Google called Word2vec. Word2vec looks at the contexts in which words occur (their syntactic roles within sentences and the other words around them) and groups together words that tend to have similar contexts. So, for instance, if one paper contained the sentence “We heated the titanium tetrachloride to 500 °C,” and another contained the sentence “The sodium hydroxide was heated to 500 °C,” Word2vec would group “titanium tetrachloride” and “sodium hydroxide” together.
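The sketch below shows the idea using gensim’s implementation of Word2vec; the two-sentence corpus simply mirrors the example above and is far too small to yield meaningful vectors.

```python
# Sketch of Word2vec's context-based grouping, using the gensim library.
# The two-sentence corpus mirrors the example in the text and is far too
# small to train useful vectors; it only shows the mechanics.
from gensim.models import Word2Vec

corpus = [
    ["we", "heated", "the", "titanium_tetrachloride", "to", "500", "C"],
    ["the", "sodium_hydroxide", "was", "heated", "to", "500", "C"],
]

# min_count=1 keeps every token despite the tiny corpus; window=3 sets how
# much surrounding context the model learns from.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

# With a realistic corpus, chemicals that occur in similar contexts would
# end up with similar vectors and a high similarity score.
print(model.wv.similarity("titanium_tetrachloride", "sodium_hydroxide"))
```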
With Word2vec, the researchers were able to greatly expand their training set, since the machine-learning system could infer that a label attached to any given word was likely to apply to other words clustered with it. Instead of 100 papers, the researchers could thus train their system on around 640,000 papers.
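One simple way to realize that kind of label expansion is sketched below: copy a label from an annotated word to any unannotated word whose vector lies close enough to it. The threshold and the propagation rule are illustrative choices, not the procedure reported in the paper.

```python
# Hedged sketch of label expansion via Word2vec-style embeddings: a label on
# an annotated word is assumed to apply to unannotated words whose vectors
# are nearby. The rule and threshold are illustrative only.
import numpy as np

def propagate_labels(labeled, unlabeled, vectors, threshold=0.7):
    """Give each unlabeled word the label of its most similar labeled word,
    provided the cosine similarity exceeds `threshold`."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    guessed = {}
    for word in unlabeled:
        best_word, best_sim = None, -1.0
        for known_word in labeled:
            sim = cosine(vectors[word], vectors[known_word])
            if sim > best_sim:
                best_word, best_sim = known_word, sim
        if best_sim >= threshold:
            guessed[word] = labeled[best_word]
    return guessed

# Toy usage with made-up 3-D "embeddings" and a hypothetical PRECURSOR tag.
vectors = {
    "titanium_tetrachloride": np.array([0.9, 0.1, 0.0]),
    "sodium_hydroxide":       np.array([0.85, 0.15, 0.05]),
    "tube_furnace":           np.array([0.0, 0.2, 0.9]),
}
labeled = {"titanium_tetrachloride": "PRECURSOR"}
print(propagate_labels(labeled, ["sodium_hydroxide", "tube_furnace"], vectors))
# -> {'sodium_hydroxide': 'PRECURSOR'}
```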
Tip of the iceberg
To test the system’s accuracy, however, they had to rely on the labeled data, since they had no criterion for evaluating its performance on the unlabeled data. In those tests, the system was able to identify with 99 percent accuracy the paragraphs that contained recipes and to label with 86 percent accuracy the words within those paragraphs.
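Computing those two figures from held-out annotations is straightforward; the sketch below uses scikit-learn’s accuracy metric on invented predictions and gold labels, just to make the two levels of evaluation concrete.

```python
# Sketch of the two evaluation levels, with invented predictions and labels.
from sklearn.metrics import accuracy_score

# Paragraph level: does the paragraph contain a recipe? (1 = yes, 0 = no)
gold_paragraphs = [1, 0, 1, 1, 0]
pred_paragraphs = [1, 0, 1, 1, 0]
print(accuracy_score(gold_paragraphs, pred_paragraphs))  # -> 1.0 on this toy set

# Word level: role tag assigned to each word inside recipe paragraphs.
gold_words = ["TARGET", "OPERATION", "QUANTITY", "UNIT", "EQUIPMENT"]
pred_words = ["TARGET", "OPERATION", "QUANTITY", "UNIT", "OTHER"]
print(accuracy_score(gold_words, pred_words))            # -> 0.8
```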
The researchers hope that further work will improve the system’s accuracy. In ongoing work, they are also exploring a battery of deep-learning techniques that can make further generalizations about the structure of materials recipes, with the goal of automatically devising recipes for materials not considered in the existing literature.
Much of Olivetti’s prior research has concentrated on finding more cost-effective and environmentally responsible ways to produce useful materials, and she hopes that a database of materials recipes could advance that effort.
Once such a database is operational, devising new recipes for construction materials might just be a case of asking your computer for advice.