Automated, Reproducible Workflows Can Speed Up EM Image Analysis


Electron microscopy (EM) image analysis can be an immensely useful process for researchers, but it is too often laborious and time-consuming. New software packages, such as Thermo Fisher Scientific's Avizo2D, are changing that. If Avizo2D is any indication, the software of the future will make image analysis a simple task for researchers without years of imaging expertise, through recipes that combine machine learning and Python-coded modules. We caught up with Laurent Billy, Director of Product Management, Applications Software at Thermo Fisher Scientific, to find out more about the future of EM image analysis.

Ruairi Mackenzie (RM): What information can researchers acquire from their electron microscopy (EM) images?

Laurent Billy (LB):
EM images preserve fine details of a sample's microstructure. Using EM, users can generate a statistical representation of the key relationships within and between phases in a sample. For example, porosity, fracture densities and morphologies, as well as any geometrical measure of grains, nanoparticles and other features, can all be extracted accurately with EM software such as Avizo2D. Collecting such data across a number of samples opens the possibility of using modern statistical and computational methods to look for the important relationships between microstructure and performance that are at the heart of materials science.
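To make that kind of quantification concrete, here is a minimal Python sketch using the open-source scikit-image library rather than Avizo2D itself; the synthetic image and the Otsu threshold are illustrative assumptions, not part of any vendor workflow.

import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(0)
image = rng.random((512, 512))  # stand-in for a grayscale EM micrograph

# Separate pore space from solid with an automatic (Otsu) threshold.
threshold = filters.threshold_otsu(image)
pores = image < threshold

# Porosity: fraction of pixels classified as pore space.
porosity = pores.mean()

# Geometrical measures per connected pore/grain region.
labels = measure.label(pores)
regions = measure.regionprops(labels)
areas = [r.area for r in regions]
eccentricities = [r.eccentricity for r in regions]

print(f"porosity: {porosity:.3f}, regions measured: {len(regions)}")

Per-region properties such as area and eccentricity are exactly the sort of grain and pore statistics that, aggregated across many samples, feed the microstructure-performance analyses described above.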

RM: What are the current bottlenecks in EM image analysis?

LB:
One of the major bottlenecks in EM image analysis is trusting that a given processing routine is repeatable regardless of who extracts the data from the image. Traditional approaches typically require many manual steps involving choices made by users based on their skills and expertise, which can lead to differing results. Newer software can provide a highly automated framework that places an agreed-upon, tested workflow at the center of the image analysis process, providing a more robust and repeatable approach that mitigates the historical problem of user bias.

Another major bottleneck is understanding what can and cannot be extracted from a given image. Parameters such as image resolution, the amount of noise present and the separation of grey-scale values needed to accurately distinguish phases or features of interest can all limit or affect findings. Accurate data from image processing requires quality image data from the start, so understanding the requirements on the image acquisition side is key to successfully extracting what you want via image processing techniques.
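A sketch of how such a fixed, shared workflow removes user-to-user variation: every parameter is pinned in one place instead of being chosen interactively. The function and parameter names here are hypothetical and do not reflect Avizo2D's actual API.

import numpy as np
from skimage import filters, measure, restoration

RECIPE = {
    "denoise_weight": 0.1,  # agreed-upon, tested settings, fixed for everyone
    "min_region_px": 20,
}

def run_recipe(image: np.ndarray, recipe: dict = RECIPE) -> dict:
    """Apply the same deterministic steps to any image, for any user."""
    denoised = restoration.denoise_tv_chambolle(image, weight=recipe["denoise_weight"])
    mask = denoised < filters.threshold_otsu(denoised)
    labels = measure.label(mask)
    regions = [r for r in measure.regionprops(labels)
               if r.area >= recipe["min_region_px"]]
    return {"porosity": float(mask.mean()), "n_regions": len(regions)}

Because every choice lives in the recipe rather than in a user's head, two analysts running the same recipe on the same image get identical results, which is the repeatability the answer above describes.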

RM: What unique challenges are faced in designing analysis software for EM images?

LB:
The main challenge in creating image analysis software is balancing ease of use with the integration of advanced approaches. The landscape of users who can benefit from image analysis is extremely varied, so making a product that provides value for both novices and experienced users can be difficult. Cutting-edge software will offer a variety of advanced image processing approaches, such as AI, and a full suite of advanced image segmentation tools in an approachable package that enables users at all levels of experience to get the most out of their data.

RM: Expertise is currently required to properly make use of toolkits for EM image analysis. When will automation make EM image analysis available to even inexperienced researchers?

LB: It is difficult to answer exactly 'when', but we think that within two to three years, emerging technologies such as machine and deep learning will have advanced enough for us to refine automation even further. Numerous image analysis applications will have been tackled thanks to these new techniques, and efficient automated solutions will be available for many routine tasks. Furthermore, broad collections of pre-defined analysis workflows, based on AI or on traditional image processing tools, will become more widely available, providing 'push-button' templates for many specific use cases to researchers who are not image processing experts. That said, automated software tools available today are already helping scientists assess and enhance imaging data quality at the time of acquisition, facilitating further analysis.
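One way such 'push-button' templates could be organized is as a registry of named, pre-defined workflows that a non-expert selects and runs without tuning anything. This is a minimal sketch under that assumption; the registry, decorator and template names are illustrative, not a real product interface.

from typing import Callable, Dict
import numpy as np
from skimage import filters

WORKFLOWS: Dict[str, Callable[[np.ndarray], dict]] = {}

def workflow(name: str):
    """Register a pre-defined analysis workflow under a template name."""
    def register(fn):
        WORKFLOWS[name] = fn
        return fn
    return register

@workflow("porosity")
def porosity_template(image: np.ndarray) -> dict:
    # Fixed, expert-validated steps hidden behind the template name.
    mask = image < filters.threshold_otsu(image)
    return {"porosity": float(mask.mean())}

# "Push-button" usage: pick a template by name and run it.
image = np.random.default_rng(1).random((256, 256))
print(WORKFLOWS["porosity"](image))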


Laurent Billy was speaking to Ruairi J Mackenzie, Science Writer for Technology Networks.