

Chemical Mixture Composition Determined Rapidly Using Nothing But Images

Artistic depiction of machine learning analysis of chemical mixture ratios.
Credit: Yasuhide Inokuma.

Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Given their similar appearance, it’s an easy mistake to make. Chemists likewise rely on visual checks for quick, initial assessments of reactions in the lab; however, just as in the kitchen, the naked eye has its limitations and can be unreliable.

To address this, researchers at the Institute for Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University, led by Professor Yasuhide Inokuma, have developed a machine learning model that can determine the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.

The model was designed and developed using mixtures of sugar and salt as a test case. The team employed a combination of random cropping, flipping, and rotating of the original photographs to create a larger set of sub-images for training and testing. This allowed the model to be developed from only 300 original images. The trained model was roughly twice as accurate as the naked eye of even the most experienced member of the team.
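The paper's own augmentation code isn't reproduced here, but the crop/flip/rotate strategy described above can be sketched in a few lines of Python using Pillow. The crop size, sample count, and function name below are illustrative assumptions, not details from the study:

```python
import random
from PIL import Image

def augment(img, crop_size=(64, 64), n=10, seed=0):
    """Generate n sub-images from one photograph by random
    cropping, flipping, and 90-degree rotation."""
    rng = random.Random(seed)
    w, h = img.size
    cw, ch = crop_size
    sub_images = []
    for _ in range(n):
        # Random crop at a random position within the photo
        x = rng.randint(0, w - cw)
        y = rng.randint(0, h - ch)
        sub = img.crop((x, y, x + cw, y + ch))
        # Random horizontal and vertical flips
        if rng.random() < 0.5:
            sub = sub.transpose(Image.FLIP_LEFT_RIGHT)
        if rng.random() < 0.5:
            sub = sub.transpose(Image.FLIP_TOP_BOTTOM)
        # Random rotation by a multiple of 90 degrees
        sub = sub.rotate(rng.choice([0, 90, 180, 270]))
        sub_images.append(sub)
    return sub_images

# A blank stand-in photo; each original yields n training crops,
# so 300 originals could supply thousands of training samples.
photo = Image.new("RGB", (256, 256), "white")
crops = augment(photo, n=10)
```

Because each crop lands at a random position and orientation, a modest set of original photographs can be expanded into a much larger, more varied training set, which is what makes training on only 300 images feasible.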


“I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”

After the successful test case, the researchers applied the model to other chemical mixtures. The model successfully distinguished different polymorphs and enantiomers, both of which are extremely similar versions of the same molecule with subtle differences in atomic or molecular arrangement. Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.

The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. It also analyzed reaction yield, determining the progress of a thermal decarboxylation reaction.

The team further demonstrated the versatility of their model, showing that it could accurately analyze images taken with a mobile phone, after supplemental training was performed. The researchers anticipate a wide variety of applications, both in the research lab and in industry.

“We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide. “Additionally, this could act as an observation tool for those who have impaired vision.” 

Reference: Ide Y, Shirakura H, Sano T, et al. Machine learning-based analysis of molar and enantiomeric ratios and reaction yields using images of solid mixtures. Ind Eng Chem Res. 2023. doi: 10.1021/acs.iecr.3c01882

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.