Without Scalability, CRISPR Will Not Realize Its Promise
In the past decade, excitement around the promise of CRISPR-based gene editing has been widespread. The technique can be adapted to virtually any organism, making it useful for addressing a broad range of scientific challenges.
But the widespread enthusiasm we have seen should not be mistaken for equally widespread technical success. CRISPR has been a challenge to implement due to a range of limitations. As a community, we must set aside our excitement to take a clear-eyed look at the hurdles that need to be overcome before CRISPR can achieve the potential we believe it has.
Some laboratories have been remarkably effective at deploying CRISPR. These labs tend to be extremely well funded, with a wealth of dedicated CRISPR expertise on staff. In most labs, however, implementing CRISPR is a struggle, and all too often one that is never resolved. Standing up high-capacity CRISPR pipelines demands real expertise in designing guide RNAs, nucleases, and editing cassettes. Trial-and-error approaches remain far more common than rigorous, robust experimental protocols.
Hit-or-miss techniques aren’t the only problem. Carefully tracking the edits that are made is critical for drawing conclusions from CRISPR-based experiments. But reliable tracking becomes less feasible as scientists push the complexity of individual edits or move into multiplex combinatorial editing. When complex edits accumulate over many iterative cycles, maintaining tracking data for each and every edit becomes virtually impossible. Yet this information is essential to unlock the full potential of CRISPR editing for data-driven genome discovery and engineering. For applications in human health and industrial biotechnology alike, the importance of accurate tracking cannot be overstated.
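To make the tracking problem concrete, here is a minimal sketch of what a per-edit provenance record might look like. This is purely illustrative: the field names (`locus`, `edit_type`, `cycle`, `parent_id`) and the `lineage` helper are hypothetical, not any standard schema or tool, and a real system would also have to reconcile such records against sequencing data for every clone in every cycle.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Illustrative only: one record per edit, with a parent pointer so the
# edit history of a clone can be walked back across iterative cycles.
@dataclass(frozen=True)
class EditRecord:
    edit_id: str                # unique identifier for this edit
    locus: str                  # genomic coordinate or gene targeted
    edit_type: str              # e.g. "substitution", "insertion", "deletion"
    cycle: int                  # iterative editing round in which it was made
    parent_id: Optional[str]    # edit carried by the parent clone, if any

def lineage(records: Dict[str, EditRecord], edit_id: str) -> List[str]:
    """Walk parent pointers back to the founding edit."""
    chain = []
    current: Optional[str] = edit_id
    while current is not None:
        chain.append(current)
        current = records[current].parent_id
    return chain
```

Even this bare-bones structure hints at the scaling problem: with many edits per cycle and many cycles, the number of distinct edit histories that must be recorded and verified grows multiplicatively.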
In addition to design and tracking challenges, I believe the most serious problem we face in today’s CRISPR workflows is that of scale. Fussy techniques and unreliable protocols are simply not amenable to the dramatic increases in throughput and editing scale that are needed. If we fail to overcome today’s limited scalability, we will never be able to use this editing tool to solve some of the greatest challenges facing our planet and our population.
As biological experiments and studies have scaled up in recent decades, thanks largely to advances in biological engineering and systems biology, scientists face a virtually unlimited number of scenarios that must be tested to discover better solutions rapidly. Lacking fluency in Nature’s rules, we are ill-equipped to engineer biology from first principles as we would in almost any other engineering discipline. Instead, we must empirically explore an enormous design space at remarkably high throughput.
Today, it is neither feasible nor affordable to intervene in a genome at, say, tens of thousands of loci with anything more sophisticated than limited-utility base editors or simplistic knockout screens. As a result, we test a small subset of possibilities, hoping that an ideal result just happens to fall within this narrow scope. That’s gambling, not science.
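Back-of-the-envelope arithmetic shows why exhaustive testing is out of reach. The numbers below are illustrative assumptions, not figures from this article: even a modest goal of combining just three edits drawn from a pool of ten thousand candidate loci yields more strains than any lab could ever build.

```python
import math

# Assumed, illustrative numbers: 10,000 candidate loci, 3 edits per strain.
loci = 10_000
edits_per_strain = 3

# Number of distinct 3-edit combinations (order doesn't matter).
combinations = math.comb(loci, edits_per_strain)
print(f"{combinations:,}")  # prints 166,616,670,000
```

At roughly 1.7 × 10^11 combinations, testing even one in a million of them still means building over a hundred thousand strains, which is why current practice samples only a narrow slice of the design space.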
Let’s look at how this plays out in common lab experiments.
- Example #1: You’ve reviewed the scientific literature and databases, amassing a list of several thousand relevant mutations of varying size. To determine which ones might be causal for the trait or phenotype you’re studying, you need to recapitulate these mutations on a clean background. This is currently not possible. Even well-funded labs could test no more than a few hundred of these, and only at considerable cost and time.
- Example #2: Metabolic engineers are looking to perform large-scale protein mutagenesis experiments. This would require implementing several thousand amino acid changes at many genomic loci associated with a specific trait. But with conventional tools, scientists can target only a few locations at a time.
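The scale gap in Example #2 is easy to quantify. The protein length and locus count below are assumptions for illustration, not figures from the article, but they show how quickly a routine mutagenesis campaign outruns tools that edit only a few sites at a time.

```python
# Illustrative, assumed numbers for a single-substitution scan:
residues_per_protein = 300   # assumed average protein length
substitutions = 19           # 19 alternative amino acids per position
target_loci = 10             # assumed number of genes tied to the trait

variants_per_protein = residues_per_protein * substitutions
total_variants = variants_per_protein * target_loci

print(variants_per_protein)  # prints 5700
print(total_variants)        # prints 57000
```

Tens of thousands of designed variants for even this small campaign, against tools that address a handful of loci, is the gap the rest of this article is concerned with.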
There are countless other examples, but these give a quick illustration of how limited editing experiments are today. Enabling CRISPR to scale to meet these common needs would require dramatic increases in the size, number, types, and combinations of edits that can be made. The community needs reliable, workhorse methods that can produce high edit rates at tremendous scale, as well as the ability to introduce multiplex combinatorial edits for establishing sequence-function relationships. Editing in combinations across many iterative cycles is critical for protein engineers as well as plant and animal breeders.
CRISPR is powering a very important shift in genome biology, from an observational to an interventional discipline that can elucidate causality. Altering systems at the genomic level and analyzing the functional effects will significantly expand our knowledge of biology, offering key insights that will allow us to boost crop yields, improve human and animal health, and manufacture economically important materials in a more efficient and sustainable manner.
But we will not be able to achieve those outcomes without major improvements to CRISPR techniques. We must work together to harden and automate the processes required for CRISPR editing workflows, to develop new tools for design and trackability, and to incorporate innovations from informatics and machine learning to mine large-scale data and feed it back into the pipeline for better edits in future experiments. I believe all of these advances are possible, and that together they will enable entirely new achievements in genome engineering.
Richard Fox, PhD, is Executive Director of Data Science at Inscripta, Inc.