Diverse Applications of High-Content and High-Throughput Screening


There has been much debate over the years about the relative virtues of phenotypic versus target-based screening. However, researchers in academia and industry would most likely agree that the best choice of screening technology very much depends on the questions you are trying to answer.

Take two adversaries currently placing a huge burden on humanity: COVID-19 and cancer. The immediate global effort to find drugs or vaccines effective against SARS-CoV-2, the virus that causes COVID-19, requires a completely different set of tools from those needed to tackle a complex disease like cancer. In this article, we use these two examples to illustrate the range of screening tools at our fingertips today.

Virtual screening for COVID-19 drug discovery


Huge global research efforts are currently focused on understanding the SARS-CoV-2 virus with a view to developing treatments or, ideally, a vaccine. However, when laboratory access was restricted during the outbreak earlier this year, Yu Wai Chen of Hong Kong Polytechnic University and his colleagues had to rely on computer-based screening to quickly find potential drugs against the virus.1

“We set out to search for structure-based treatment options that can be immediately applied, taking advantage of the vast amount of drug research in response to the 2003 SARS outbreak,” explained Chen. “Using virtual screening accelerates the process by sidestepping the lengthy protein production and purification.” With the release of the SARS-CoV-2 genome sequence, its close relationship with the previous SARS coronavirus was revealed and, because the viruses share high protein sequence similarity, the team were confident about the quality of their predicted model.

They focused their efforts on the viral protein 3C-like protease (3CLpro), which is required for viral replication. “We predicted its 3D structure based on its high sequence similarity (94% identity) with the SARS-CoV orthologue, but the biggest challenge was to model the side-chain conformations of those ‘mutated’ (variant) residues. At present, no automated modeling software does this satisfactorily,” explained Chen. After a round of side-chain modeling, they checked and revised each variant residue of both the A and B chains, referring to the electron densities of the template structure in the Protein Data Bank. “We knew these variant residues did not play critical roles in substrate binding or catalysis; yet we did not spare the effort in producing the best model with the most likely rotamer at each position.”
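
The bookkeeping step Chen describes is easy to picture in code. Below is a minimal, hypothetical Python sketch (the sequences are toy placeholders, not the real ones): it computes percent identity between two equal-length protease sequences and lists the variant positions whose side chains would need the manual rotamer checking described above.

```python
# Hypothetical sketch: given the SARS-CoV template sequence and the
# SARS-CoV-2 target sequence (equal length here; align first otherwise),
# report percent identity and the 1-based positions of variant residues
# whose side-chain rotamers would need manual checking.

def percent_identity(template: str, target: str) -> float:
    assert len(template) == len(target), "align sequences of unequal length first"
    matches = sum(a == b for a, b in zip(template, target))
    return 100.0 * matches / len(template)

def variant_positions(template: str, target: str) -> list[int]:
    return [i + 1 for i, (a, b) in enumerate(zip(template, target)) if a != b]

# Toy 10-residue example; the real 3CLpro sequences are 306 residues long.
template = "SGFRKMAFPS"
target   = "SGFRKMAFPA"
print(f"{percent_identity(template, target):.0f}% identity")  # -> 90% identity
print("variant residues at positions:", variant_positions(template, target))
```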

The team exclusively chose drugs that had already received regulatory approval for other indications as candidates, because the goal was to recommend therapeutic options that could be applied as quickly as possible. In mid-February, they published and recommended 16 drug candidates for repurposing against COVID-19. “Among these, the antivirals ledipasvir or velpatasvir (originally anti-hepatitis C virus drugs) are particularly attractive with minimal side effects,” says Chen. “However, we also noted that the drugs Epclusa (velpatasvir/sofosbuvir) and Harvoni (ledipasvir/sofosbuvir) could be very effective because they may have dual inhibitory actions on two viral enzymes. Using a drug with two targets may substantially reduce the probability of the virus developing resistance.”
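
The article does not detail the docking pipeline Chen’s team used, but the overall shape of a structure-based repurposing screen can be sketched. The following hypothetical Python wrapper assumes the open-source AutoDock Vina command-line tool and pre-prepared PDBQT files; the receptor file name, grid-box coordinates, and drug library are placeholders, not values from the study.

```python
# Hypothetical sketch: rank a library of approved drugs against a predicted
# 3CLpro model by docking score, driving the AutoDock Vina CLI from Python.
import subprocess
from pathlib import Path

RECEPTOR = "3clpro_model.pdbqt"       # homology model prepared for docking
LIGANDS = Path("approved_drugs")      # one prepared .pdbqt file per drug
CENTER = ("-10.7", "12.4", "68.3")    # placeholder active-site coordinates
SIZE = ("24", "24", "24")             # search-box edge lengths (angstroms)

scores = {}
for ligand in sorted(LIGANDS.glob("*.pdbqt")):
    out = ligand.with_suffix(".docked.pdbqt")
    result = subprocess.run(
        ["vina", "--receptor", RECEPTOR, "--ligand", str(ligand),
         "--center_x", CENTER[0], "--center_y", CENTER[1], "--center_z", CENTER[2],
         "--size_x", SIZE[0], "--size_y", SIZE[1], "--size_z", SIZE[2],
         "--out", str(out)],
        capture_output=True, text=True, check=True,
    )
    # Vina prints a table of binding modes; the first row's affinity
    # (kcal/mol, more negative = better) is the usual ranking criterion.
    for line in result.stdout.splitlines():
        cols = line.split()
        if cols and cols[0] == "1":
            scores[ligand.stem] = float(cols[1])
            break

# The best-scoring compounds become the repurposing shortlist.
for name, affinity in sorted(scores.items(), key=lambda kv: kv[1])[:16]:
    print(f"{name}: {affinity:.1f} kcal/mol")
```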

To confirm the computational results, the next step is to perform in vitro studies. “We now plan to study the strength of binding and inhibition of our proposed candidates to the 3CLpro protein and verify the binding with structural studies. If these biochemical screening studies yield positive results, then we shall study the inhibition of the virus in cell and animal models. Hopefully, these results are enough to convince clinicians that the drug candidates are worth testing further.”

High-content screening for complex biology


In contrast to the luxury of screening against a single, critical target protein, many of the complex diseases we screen drugs against require a broader view of cell or tissue phenotype. This has driven a significant shift towards high-content screening.

At the University of Nottingham, UK, Tim Self leads the SLIM Imaging Facility and has seen an increased demand for high-content imaging. “For many years, we focused on high-throughput screens looking at second messenger signaling with intensity as a simple readout. These days, we can take multiple views of a well, under different light wavelengths and labeling different targets within a cell. The fact that this is now possible is down to several key advances.”

BOX 1: In a conventional (i.e. wide-field) fluorescence microscope, the entire specimen is flooded evenly with light from the light source. By contrast, a point-scanning confocal microscope focuses laser light to a single point that is scanned across the sample, with a pinhole in front of the detector rejecting out-of-focus light. In spinning disc confocal microscopy, a rapidly rotating disc containing thousands of pinholes splits the laser light into many beams that sweep across the sample in parallel, and the emitted fluorescence passes back through the pinholes to the detector. This enables faster image acquisition than a point-scanning confocal microscope.


Most high-content imaging platforms work with fluorescence and used to be mostly wide-field systems (i.e. not confocal microscopy). In addition to improved camera technology that has increased the quality, speed and sensitivity of image acquisition, one of the main advances has been the development of LED light sources that provide multiple excitation channels, explains Self. “They provide a little more excitation power with faster switching for multiple wavelength experiments and are somewhat cheaper to use. But they are still not as powerful as lasers – lasers are still a key component in many high-end imaging platforms. If you have a spinning disc (Box 1) with a laser, this allows you to acquire images quickly and gently, avoiding bleaching and damaging your sample.”

The range of fluorescent probes is also expanding, an area that had previously lagged behind instrument development. “There are many more clever probes that will change their fluorescence depending on the environment they’re in or what light source you put on them,” explains Self. “But another area we’re looking at is label-free imaging of live cells. This involves taking high-quality phase contrast images of your cell population so that you can follow them as they proliferate and migrate.” This approach has been used to study wound healing, either in scratch wound experiments or by placing spacers inside wells, removing them, and following the proliferating cells as they move into the cleared area.
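
As a rough illustration of the kind of label-free readout Self describes, here is a minimal, hypothetical sketch assuming scikit-image: because cells are texture-rich in phase contrast while the open wound is comparatively smooth, a local-texture threshold can estimate how much of the scratch remains uncovered at each time point. The segmentation heuristic and file names are illustrative only.

```python
# Hypothetical sketch: quantify scratch-wound closure over time from
# phase-contrast images using a simple texture-based segmentation.
from skimage import filters, io, morphology

def open_wound_fraction(path: str) -> float:
    img = io.imread(path, as_gray=True)
    texture = filters.sobel(img)                       # edge/texture strength
    cells = texture > filters.threshold_otsu(texture)  # textured = cell-covered
    cells = morphology.binary_closing(cells, morphology.disk(5))
    cells = morphology.remove_small_holes(cells, area_threshold=500)
    return 1.0 - cells.mean()                          # fraction still open

for hour in (0, 6, 12, 24):
    frac = open_wound_fraction(f"well_B3_t{hour:02d}h.tif")  # placeholder files
    print(f"t = {hour:2d} h: {100 * frac:.1f}% of the field still open")
```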

One big trend in high-content imaging is the move towards 3D and 4D cell culture – that is, looking at thicker samples and studying them over time. “Previously we could only work with single monolayers of cultured cells but now we want to look much deeper into samples – for example, if we are working with spheroids or biofilms – plate readers are now able to do that much more accurately and also reconstruct the data afterwards,” says Self.

Reconstruction of high-content screening data


This brings us to perhaps the most important advancement in high-content screening – the computational power needed to successfully analyze all of the data. This is well illustrated by the work of Steven Bagley’s team at the Cancer Research UK Manchester Institute at Alderley Park, Cheshire, which is responsible for screening translational samples (e.g. patient samples or xenografts) for 45–50 research groups, many of which now want to do high-content imaging of 3D samples over time and under different physiologically relevant conditions.

“Since about 2008, my mission has been to try and numerically describe the data as easily as possible for teams within the Institute. We struggled for a long time, because there weren’t many tools out there for doing high-content 3D and 4D analysis.” Although computational power has increased and more analysis tools are available, most of the existing packages for 3D analysis struggle with batch analysis across all wells and all fields, and with joining up all of the resulting data, Bagley explains.

“If you consider that a single well of a 96-well plate is made up of 25 fields of view, and we want to stitch all of those together in 3D, and then do the analysis on that volume… computationally, that becomes extremely difficult, especially when there might be four channels of fluorescence as well. Doing that across 60 wells and then 30 plates is a big issue.”
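
To make the scale of that problem concrete, here is a minimal, hypothetical sketch of the stitching step: assembling the 25 fields of one well (treated as a 5 × 5 grid of multichannel z-stacks) into a single volume. Array shapes and the loading function are placeholders; real pipelines also register overlapping tiles and correct for illumination.

```python
# Hypothetical sketch: stitch 25 fields (5 x 5 grid) of one well into a
# single (channels, z, y, x) volume, the unit that must then be analyzed
# for every well on every plate.
import numpy as np

GRID = 5                         # 5 x 5 fields of view per well
C, Z, H, W = 4, 8, 256, 256      # toy sizes; real stacks are far larger

def load_field(plate: int, well: str, row: int, col: int) -> np.ndarray:
    """Placeholder for reading one field's z-stack from disk."""
    return np.zeros((C, Z, H, W), dtype=np.uint16)

def stitch_well(plate: int, well: str) -> np.ndarray:
    rows = [
        np.concatenate([load_field(plate, well, r, c) for c in range(GRID)],
                       axis=3)                     # tiles side by side in x
        for r in range(GRID)
    ]
    return np.concatenate(rows, axis=2)            # grid rows stacked in y

volume = stitch_well(plate=1, well="B3")
print(volume.shape)  # (4, 8, 1280, 1280) -- multiply by 60 wells, 30 plates
```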

One of the things they are exploring with the Institute’s computational scientists is the development of an artificial intelligence (AI) platform that can carry out this phenotypic analysis. At Nottingham, they are also excited by the potential of this approach. “The advancement in machine learning and moving towards AI so that you can train your software to look for certain features has the potential to provide great advances in what we’re able to do,” says Self.
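
The article does not describe the platform itself, but the training step Self alludes to typically looks like the following hypothetical sketch: a classifier learns a phenotype from per-cell feature vectors (area, intensity and texture measurements exported by an image-analysis pipeline) instead of a human picking examples. The features and labels below are synthetic stand-ins.

```python
# Hypothetical sketch: train a random forest to recognize a cell phenotype
# from morphology features -- the basic supervised step behind "training
# your software to look for certain features".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))                 # 2000 cells x 30 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic phenotype label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```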

One of the clear advantages of using high-content imaging is that it removes the user bias that comes with traditional microscopy. “When I’m teaching students, I say that the reason high-content screening is so much better is that it removes user bias that creeps in if you’re personally selecting regions in a slide because you see a cell that fits the morphology you’re looking for,” says Self. This is especially true in cancer research, agrees Bagley, where they are looking at the tumor microenvironment. “It’s quite often the scrappy cells and those that aren’t pretty that will give you a lot more information. That’s why we need to be able to capture all of the cells in three dimensions and everything over time as well. It’s not just the tumor cells that are growing in 3D but the cells surrounding them that are important when it comes to studying treatments like immunotherapy, for example.”

Another change that has naturally occurred as researchers shift to high-content screening is in throughput. “Rather than opt for what we call the ‘farming’ of high-content data and doing 384- and 1000+ well plates, we tend to stay below 96 because we need growing room and time for cells to interact. Some of the imaging uses low magnification lenses to scan patient cells looking for circulating tumor cells and can acquire images from 60 wells in 20 or 30 minutes. In other cases there is a requirement to go down to 48 wells because some of the spheroids and organoids we grow are quite large.” In these scenarios, robotic arms and automated incubators allow for speedy analysis of multiple plates over five or six days and enable researchers to image processes such as the lifetime of drugs or organoid development. “We don’t want to just see a small snapshot in time or a beginning or an end, we want to see all the individual phases leading up to that final product as well,” explains Bagley.

Future needs


The gap in the market, says Bagley, is the ability to multiplex as many labels as possible. “At the moment we’re stuck with using maybe four or five colors. Most of our samples are precious and it’s important to us to get as many labels in there as possible and ask complicated co-localization questions. Right now, there are not enough labels or lasers to fill the color palette. We’re exploring whether we can address this through alternative technology, better filtering, or different ways of collecting the data. Ultimately, our samples come from patients so we want to make the best use of them.”
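
One established route to squeezing more labels out of a limited palette – offered here purely as an illustration, not as Bagley’s plan – is linear spectral unmixing: if each pixel’s measured multichannel signal is a weighted sum of known fluorophore reference spectra, least squares can recover per-fluorophore abundances even when the emissions overlap. The spectra below are synthetic.

```python
# Hypothetical sketch: linear spectral unmixing of overlapping fluorophores.
import numpy as np

rng = np.random.default_rng(1)
channels, n_dyes = 16, 6

# Reference emission spectra, one column per fluorophore (channels x dyes).
S = np.abs(rng.normal(size=(channels, n_dyes)))
S /= S.sum(axis=0)

true_abundance = np.array([3.0, 0.0, 1.5, 0.2, 0.0, 2.0])
pixel = S @ true_abundance + rng.normal(scale=0.01, size=channels)  # + noise

# Solve S @ a ~= pixel in the least-squares sense; clip small negatives.
est, *_ = np.linalg.lstsq(S, pixel, rcond=None)
print(np.round(np.clip(est, 0.0, None), 2))  # close to true_abundance
```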

Reference


1. Chen YW, Yiu CB, Wong KY (2020). Prediction of the SARS-CoV-2 (2019-nCoV) 3C-like protease (3CLpro) structure: virtual screening reveals velpatasvir, ledipasvir, and other drug repurposing candidates. F1000Research. doi: 10.12688/f1000research.22457.2