Industry Insight

Advancing Skin Cancer Pathology With AI Image Analysis


Human metastatic melanoma cells stained with hematoxylin and eosin (H&E), magnified 320x. Credit: Dr. Lance Liotta Laboratory

Image analysis can be a painstaking process for skin cancer pathologists. Workloads are increasing, and the number of qualified pathologists is shrinking. New technologies are needed to relieve the pressure. Proscia, a Philadelphia-based software company, thinks its DermAI software could be part of the solution. DermAI has recently been put to the test in a validation study that analyzed over 13,000 images. We spoke with Nathan Buchbinder, Proscia's chief product officer (CPO), to find out more.

Anna MacDonald (AM): How is skin cancer traditionally diagnosed and what are some of the limitations of this approach?

Nathan Buchbinder (NB):
The traditional diagnosis of skin cancer is that you either go in for a regular routine visit to a dermatologist, or you find that there's something abnormal on your skin and you go in as a consequence of that. The dermatologist finds something of interest, and they cut off a biopsy. That biopsy tissue goes to the pathology lab, where it's prepared and stained, embedded in wax, and cut into thin sections mounted on a glass slide that is then viewed under the microscope by a pathologist. The pathologist is looking to identify different patterns in the tissue. Those patterns are what they would use to make a diagnosis and give an indication of how bad something is, whether it requires treatment, if it's cancerous or not cancerous, and then what kinds of treatments might work. They then send those results back to the dermatologist and to the patient.

Now the challenging thing is that the skin cancer pathologist is not doing this once or twice a day. There are some dermpaths (dermatopathologists) who are doing this 250, 350 times a day.

You can imagine that that's a big challenge, and part of the reason we focused on skin cancer in particular is that it's an area with a big crisis on the supply and demand side. There are more biopsies than ever, rising at somewhere between 4% and 5% per year for skin cancer, and fewer pathologists than ever, with the number decreasing by about 17% over the past decade, so it's a big challenge for us to address.

AM: Could you tell us more about the validation study and the significance of its findings?

NB:
We saw this supply and demand challenge, and what we broadly identified is that for the majority of these skin cases you could add a lot of value by helping pathologists understand which cases they need to look at first versus which ones they can deprioritize and look at a little later in the day. Which ones are going to require additional work? Which ones are the most significant or impactful diagnoses? The challenge historically has been that to do that, you need a pathologist to review the specimen. So we instead identified an opportunity for artificial intelligence or machine learning to provide a pattern recognition application that pre-screens these biopsies for pathologists, and then, based on those pre-assessments, different actions can be taken on these cases. That's exactly what we built.

We worked with several of the largest academic and commercial laboratories in the US, some of the highest-volume and most renowned specialists and practices for dermpath, to build an AI application that has the capacity to screen these cases in advance of pathologists' review and provide an AI output that could then be used to triage and prioritize them. The tricky thing with any AI is that how well it performs has serious implications for how it gets used in practice. So we really wanted to make sure not only that it met our target accuracy, sensitivity and specificity thresholds, but also that the application wouldn't be confused when placed into a new lab, which might use different stains, where tissue might look a little different, and where the pathologist calls things a little differently. The AI validation study we did was one of the largest, if not the largest, AI pathology studies done to date.

It involved several commercial and academic labs, multiple imaging technologies, different hardware used to capture the images of the biopsies for the AI to interpret, different staining protocols, different stains themselves, tissue from different places, different pathologists, and different views. What we found is that our AI application, DermAI, was able to achieve 98% accuracy overall across all of these variations, so it's a very big deal.

Ruairi Mackenzie (RM): DermAI has been shown in this study to classify images into one of four categories, but pathology is a hugely complex area. Are there ways to make something like DermAI even more granular to classify all the different features that could contribute to a diagnosis?

NB:
Absolutely. In skin cancer pathology alone, there are over 200 diagnoses that a pathologist might call. To answer the question directly: yes, we are confident and have already seen evidence that this AI has the capacity to get much more granular than just the four higher-level classifications we've already demonstrated we can provide. The other exciting thing about AI and these kinds of machine learning applications is that they can even provide information that goes beyond the diagnosis.

That's where you start to be able to provide the pathologist, and ultimately the patient, with access to insights that you just couldn't get from the pathologist's interpretation of the biopsy and diagnosis alone. This is not something that DermAI does today, but it's quite reasonable to assume that there are correlations between what's in the tissue biopsy and five-year survival, or likelihood of response to certain new therapies that are coming out. The challenge is that some of these correlations are just not natural for the human eye and mind to make. The pathologist doesn't have access to thousands and thousands of cases with five-year follow-up data, so it takes these kinds of applications to give that level of insight to the pathologist.

AM: In addition to improving this kind of diagnostic accuracy, what other impacts do you think that the adoption of AI can have on pathologists’ workloads?

NB:
I think it's not even the accuracy that's likely to be most impactful for some of these high-volume specialties. I would suggest that the impact this has on productivity in the lab, and on efficiency and quality overall, is going to be enormous in how we diagnose diseases like cancer. Ultimately, there's always going to be a pathologist in the driver's seat who is making the diagnosis and then translating that back to the dermatologist or whatever specialist is treating the patient. But the manner in which these slides move through the lab, which slides are interpreted by a pathologist, and what information is presented to the pathologist can really drive productivity increases and efficiency gains, not just for the individual pathologist but across the entire laboratory. That could make a big difference in turnaround time for diagnosis, which means the patient waits less time to receive their answer to the question, "Am I sick, and how sick am I?"

AM: What challenges have so far prevented this technology from being adopted more widely?

NB:
There are three challenges that I want to focus on. The first thing that used to be a challenge is regulatory: the FDA has historically been a little slower in approving whole-slide image scanners for routine clinical use. But what we've seen over the past two and a half years is that this is really no longer as big an issue. They've now approved two full systems, and the pathways to approval for new companies are much clearer. The speed at which that can happen is much, much faster than before, so that's one challenge that's been largely addressed.

The second key challenge has been demonstrating a return on investment. In the US certainly, but even in other healthcare systems, it's not sufficient to say that there's a cool new technology that is the future. There has to be evidence showing that it's of good economic value, that there's good business sense in driving the adoption, especially when what we're talking about is really a big transformation, a big process change that needs to be managed by these labs, which have historically not been the fastest adopters of technology. We believe that the digitization of pathology, in and of itself, can provide value.

Some labs have seen anywhere between a 13% and 22% efficiency gain, a productivity boost, just from going digital alone. But we also believe that the real value comes when you layer computational applications on top of that digitization, when you start to add the insight that comes from AI applications or other image analysis applications that can power these AI-enabled workflows.

The third challenge has been that the technology is not standardized. What that means is that if you buy one scanner type and you want to expand digitization in your lab, so you want to buy a second scanner type, there’s no guarantee that the software applications you use are going to be able to work on both. That’s actually part of the reason we conducted the study the way we did for DermAI, to demonstrate that we built an AI that’s generalizable, that can address all of the major sources of variation that you would see both in the laboratory and with the scanners themselves.

RM: Another challenge in this area is that the image sizes involved are huge, which can be a struggle for some AI systems. Is there a particular way that challenge has been overcome?

NB:
The size of the images becomes a challenge for two reasons. The first is that, even setting aside the AI application, just storing these images can get quite expensive. Now, the nice thing is that Moore's law has largely solved that for us; storage has gotten cheap enough that you can leverage existing hospital infrastructure in most cases today to support digitization. It's the same story that played out in radiology.

Now for AI, it does pose a bit of a challenge as well. The images of the biopsy tissues are huge, so running an AI application across the entire image becomes very difficult. Part of what we've developed over the past couple of years is a methodology by which the machine learning application can identify the highest-value regions in the image, identify the most impactful areas of pathology, and manage all of this tissue and these tiles of the image to come up with a single, unified answer.
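The tile-based approach Buchbinder describes, splitting a gigapixel whole-slide image into patches, scoring only the patches that contain tissue, and aggregating the results into one slide-level call, can be sketched roughly as follows. This is an illustrative outline under assumed parameters, not Proscia's implementation; the patch size, the background threshold, the `score_tile` model stand-in, and the averaging aggregation rule are all assumptions.

```python
import numpy as np

def tile_image(slide, tile=512):
    """Split a whole-slide image array into fixed-size square tiles."""
    h, w = slide.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(slide[y:y + tile, x:x + tile])
    return tiles

def is_tissue(t, threshold=220):
    """Keep only tiles that contain tissue (skip near-white background).

    A mean-intensity test is a crude but common heuristic for H&E slides.
    """
    return t.mean() < threshold

def classify_slide(slide, score_tile):
    """Score tissue tiles and aggregate into one slide-level answer.

    `score_tile` is a hypothetical stand-in for a trained model: it
    returns a probability vector over the diagnostic classes for a tile.
    Returns None when no tissue is found on the slide.
    """
    tissue = [t for t in tile_image(slide) if is_tissue(t)]
    if not tissue:
        return None
    # Simple aggregation: average the per-tile class probabilities,
    # then pick the most likely class for the whole slide.
    scores = np.mean([score_tile(t) for t in tissue], axis=0)
    return int(np.argmax(scores))
```

In practice, production systems are more selective than a plain average, weighting tiles flagged as diagnostically significant, but the skeleton of tile, filter, score, and aggregate is the same.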

It has to work not just in a way that's accurate, but at a speed that lets it be used in the lab. An algorithm that takes 15 minutes to run is not feasible when you have to run it 200 times in a given day; that's not a good approach to bringing new technology to market. So we are very consciously and acutely aware that this has to be an implementation-ready AI application, of such high quality and performance that you can run it through a lab that might see 1,000 or 2,000 cases a day.
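The throughput constraint is easy to make concrete from the numbers in the interview: at 15 minutes per case, 200 cases would consume 50 hours of compute per day, far more than a working day on a single machine. A back-of-the-envelope check, using an assumed 8-hour day for the time budget:

```python
cases_per_day = 200
minutes_per_case = 15

# Total sequential compute for one day's caseload at 15 min/case.
total_hours = cases_per_day * minutes_per_case / 60
print(total_hours)  # 50.0 hours: infeasible within one day on one machine

# Per-case time budget to clear 200 cases in an 8-hour day (assumption)
# without parallelism.
budget_minutes = 8 * 60 / cases_per_day
print(budget_minutes)  # 2.4 minutes per case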

Nathan Buchbinder was speaking to Ruairi J Mackenzie and Anna MacDonald, Science Writers for Technology Networks. 

Meet The Authors
Anna MacDonald
Science Writer
Ruairi J Mackenzie
Senior Science Writer