How AI Is Helping Advance TB Research
Manual evaluation of tissue sections under a microscope is a very time-consuming process. The adoption of AI solutions that can automatically recognize and count visual information could increase the speed and accuracy of image analysis, while also freeing up time for pathologists.
Technology Networks recently spoke with Dr Gillian Beamer, a pathologist and assistant professor at Tufts University, and Thomas Westerling-Bui, Director of Scientific Strategy and Business Development at Aiforia, to learn how the implementation of a cloud-based platform is helping to advance scientific research on Mycobacterium tuberculosis.
Anna MacDonald (AM): Can you provide an overview of what your typical daily work involves? What were some of the challenges you faced doing this manually?
Gillian Beamer (GB): My typical daily work involves running a research laboratory under biosafety level 3 conditions (part of which is evaluating histology slides of tuberculosis-infected mice), teaching and training students and research technicians, evaluating slides for pathology diagnostic service and evaluating slides for research collaborators using experimental models of animal disease. The challenges of manual annotation are that it is time-consuming, tedious and boring. I would characterize manual annotation as a “labor of love” which I perform for my own research studies, where quantification of cells and lesions is necessary for the downstream analyses.
AM: What difference has the implementation of AI made to your research?
GB: A big difference. Fifteen years ago (when I was a pathology resident and graduate student) I began hunting for image recognition tools that could automatically identify complex cellular and tissue patterns. The interest was driven by the need for more accurate comparison of tuberculosis lesions in different strains of mice. Now, I am thrilled that AI platforms are more widely available, and I am excited to try out any AI tool for its capacity to extract visual information from histology images.
In our research on tuberculosis, a goal is to generate quantitative data from tissue sections that allow 2D measurements for statistical comparisons of experimental groups. Eventually, we want to generate 3D reconstructions of tuberculosis granulomas from glass-free microscopic images, so we can make volume comparisons between experimental groups, integrate structural data with functional information and build computer simulations of granulomas that we can manipulate in silico. Currently, in our tuberculosis research we use commercially available AI tools and platforms (Aiforia) with supervised learning approaches, and we work with academic collaborators (computer scientists) who generate custom algorithms and explore less-supervised or unsupervised approaches to classification.
AM: How important do you think the adoption of AI in pathology is? Are there any obstacles to overcome before it becomes more widely used?
GB: AI tools have immediate benefits for research and discovery applications, where the two main tasks of the pathologist are 1) lesion detection (pattern recognition) and 2) severity comparisons across experimental groups (counting or grading). In research settings, these tasks are often performed “blinded”, without knowledge of the groups, to reduce bias (a benefit), but this can have the drawback of keeping the pathologist one step removed, reducing the ability to interpret the biological relevance of lesions in the correct context. With automated pattern recognition and quantification tools, I expect that pathologists will become more valuable contributors, because we can focus on applying our broad expertise in disease manifestation to gain insight into the underlying mechanisms of lesions in the context of the in vivo model under study.
The obstacles are the same as for any new technology: resources (time, skills/knowledge, funds, hardware, software); fear of the unknown; our imperfect capacity to predict unintended negative consequences; and resistance to change. None are insurmountable.
Ruairi Mackenzie (RM): What data was used to train the Aiforia software to identify features of TB in tissue?
Thomas Westerling-Bui (TW): Several different assays have been, and continue to be, developed. To identify tuberculosis granulomas and necrosis, the Aiforia science team used 42 whole-slide histological images of TB-infected lung tissue. The slides contain thin slices of affected lung tissue stained with a common generic stain called hematoxylin and eosin, or “H&E”. This stain is used to visualize components of nuclei and cytoplasm and is used in a large fraction of histological examinations. The data fed to the neural network architecture for training the AI model consisted of 1-10 small annotated regions per slide (roughly 1-5% of each slide). This is a typical amount for most projects. Deep convolutional neural networks are very good at identifying complex patterns, and in medical imaging, especially histology, it is more important to cover multiple cases (in this case, slides) than to provide many examples (pixels) from the same slide. This relates to the complexity of biology and the subtle nuances present between slides and individuals. Dr Beamer continued this work using the deep learning platform we call Aiforia Create and was able to further train the AI model to identify a special class of macrophages with biological relevance for TB-induced pathologies.
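The patch-based training idea described above (learning from a few small annotated regions of each whole-slide image, rather than from every pixel of one slide) can be sketched in a few lines of Python. This is an illustrative example only, not Aiforia's implementation; the array shapes, the region format and the `extract_patches` helper are all assumptions:

```python
import numpy as np

def extract_patches(slide, regions, patch_size=32, stride=32):
    """Cut small training patches from annotated regions of a slide.

    slide:   2D numpy array standing in for a whole-slide image
    regions: list of (row, col, height, width) annotated rectangles
    Only pixels inside the annotations are used, mirroring the idea
    that a few small regions per slide (~1-5%) supply the training data.
    """
    patches = []
    for (r, c, h, w) in regions:
        for pr in range(r, r + h - patch_size + 1, stride):
            for pc in range(c, c + w - patch_size + 1, stride):
                patches.append(slide[pr:pr + patch_size, pc:pc + patch_size])
    return np.stack(patches)

# Toy example: a 256x256 "slide" with two 64x64 annotated regions
slide = np.random.rand(256, 256)
regions = [(0, 0, 64, 64), (128, 128, 64, 64)]
patches = extract_patches(slide, regions)
print(patches.shape)  # (8, 32, 32): 4 non-overlapping patches per region
```

Patches like these, drawn from many different slides, would then be fed to a convolutional network; covering many slides with a few patches each captures the between-individual variation the interview describes.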
RM: How is Aiforia different from tools utilizing AI to identify features of cancer?
TW: Aiforia is an integrated platform intended for use by domain experts, such as Dr Beamer. The ability to use cloud computing clusters, world-class AI architecture and a nimble user interface to rapidly deploy extremely complex AI models is what sets Aiforia apart from other alternatives. Many commercial providers, as well as academic labs, can deliver some of the individual components, but democratizing and deploying AI in a worldwide scalable fashion requires many components working together efficiently.
One of our missions at Aiforia is to provide that platform, in order to bring AI to users such as Dr Beamer while removing barriers such as the need for programmers or supercomputer clusters. Interestingly, Aiforia is already being used as a pathology support tool at multiple sites in sub-Saharan African nations (Kenya and Tanzania), where AI models pre-screen samples and present suspicious findings to the clinician, who can then quickly determine the nature of the lesion. The ability of these clinics to upload their clinical sample images (produced by a small modular scanner manufactured by Grundium) to the Aiforia cloud for analysis allows them to sidestep the procurement and infrastructure challenges that often affect underserved areas.
Furthermore, as this field evolves, the end user's local reliance on intermediate expertise, such as data and computer scientists, needs to be reduced. Through Aiforia, one can already achieve this and move to an AI-based workflow without the AI expertise that is often presumed to be necessary.
In addition, the vast amounts of data involved require sophisticated code, not only for memory management when training the AI models, but also for fast visualization of the slides themselves, rendering of results and usability of annotation tools. Brightfield images can reach 15 GB at 20x magnification, and multiplex immunofluorescence images can run to tens of gigabytes.
Lastly, the most needed component is robust code. When these platforms enter the clinic, their uptime must be close to 100%; a system that works only sometimes is not acceptable, and code maintenance becomes important. It has been well documented that academic code is often usable only within the first few years after publication, and its availability is often questionable even when supposedly open access. Such code projects are rarely tested to industrial standards and can break in myriad ways, for example from something as simple as a system software update or a change in package dependencies. Putting all these capabilities together into a product, while maintaining industry- and clinical-quality code, has been one of Aiforia's main focuses in the last few years, so that stakeholders and the regulatory environment can trust, test and verify the product.
The most recent update to our public-facing platform is an automated validation functionality, which allows fast and precise analytical validation in a remote, distributed fashion.
RM: What are the areas of scientific and medical practice you think AI can improve?
TW: AI systems stand to improve potentially every facet of scientific research and healthcare delivery. The most immediate scenarios where improvement can easily be achieved are quality assurance, speed of analysis and diagnosis, and automatic screening. Many of our discovery and diagnostic processes rely on precise pre-analytical factors, and a significant number of samples fail these criteria. In some cases this is noticed only at the expert end-user stage, and valuable time and resources are lost. For histopathological analysis, we can automate this process and provide it upfront, saving both time and other resources, essentially to the level where the impact of so-called sample inadequacy or other QC failures becomes unnoticeable to the system. In addition, the ability to increase the speed and consistency of histopathological evaluations is already being used in research settings and will soon come to diagnostic use. These advances stand to make our systems more efficient and less error-prone, and should be embraced as soon as regulatory and safety requirements are met.
Down the road, we will be able to use AI's ability to see what humans cannot. One example of academic interest is described in Poplin et al., where a side result of the study was the AI model's ability to determine the gender of an individual from retinal photographs. There are obviously more efficient and accurate ways to determine this, but it really exemplifies the idea of going “beyond human capability”.
Today's treatment decisions are by and large made by splitting the patient population into two or a few categories, e.g. no disease, low risk, high risk. The highest and lowest percentiles in such a split will clearly be placed in the correct “bucket” and get the appropriate treatment; however, the boundaries between the dichotomous categories carry much lower confidence of correct classification. We are in this situation because it is hard to quantify and evaluate on a continuous scale. Knowing whether a meal is small or big is easy; knowing the exact calorie count and its effect on our health is much harder. In the same way, clinical drug research traditionally depends on categorical data. To move away from this and get a “true” value for everyone, we need continuous data for everything, and data models that can analyze it. This must start in the R&D space and gradually move into the diagnostic and treatment-decision space. In the realm of histopathology, the only way to achieve continuous data is automated image analysis, and AI provides the perfect capabilities for this.
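As a toy illustration of replacing a categorical grade with a continuous measurement, one could compute a lesion area fraction from an image segmentation mask. This is a hypothetical sketch, not Aiforia's method; the binary mask format and the `lesion_fraction` helper are assumptions:

```python
import numpy as np

def lesion_fraction(mask):
    """Continuous readout from a (hypothetical) AI segmentation mask:
    the fraction of tissue pixels classified as lesion, instead of a
    coarse grade such as 'low' vs 'high' severity."""
    return float(mask.sum()) / mask.size

# Toy mask: 1 = lesion pixel, 0 = healthy tissue
mask = np.zeros((100, 100), dtype=int)
mask[20:40, 20:40] = 1          # one 20x20 lesion
print(lesion_fraction(mask))    # 0.04
```

A number like 0.04 can be compared across experimental groups on a continuous scale, which is exactly what a two- or three-bucket severity grade cannot offer.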
Health R&D and healthcare delivery in 10 years will be vastly different thanks to more nuanced, quantitative and scalable systems and AI will play a large role in this transformation.
Gillian Beamer and Thomas Westerling-Bui were speaking to Anna MacDonald and Ruairi Mackenzie, Science Writers, Technology Networks.