Big Genetics in BC: The American Society for Human Genetics 2016 Meeting
Josh P. Roberts

The American Society for Human Genetics annual meeting (ASHG2016), held last month in Vancouver, British Columbia, was a big meeting. Big not only in that almost 6,500 scientists and clinicians from 66 countries came to hear, see, and give nearly 3,500 oral and poster presentations, but big in that it was largely about big things: big consortia making use of big sample sizes to accumulate big data in big databases that help to generate and test big ideas.
 
Much of the meeting had to do, at least in part, with data generation and mining, as well as how those data are verified and validated, organized, analyzed, and shared. It looked at biases that can be avoided, and at unavoidable biases that researchers need to be cognizant of. 
 
Of course, there was plenty of more traditional single-gene analysis on tap as well, along with a host of presentations on how to translate laboratory findings into actionable clinical results, and on how the science affects both the medical arena beyond the clinic and society at large. Talks and posters covered nearly every subject imaginable – not only headline-grabbing topics like cancer, cardiovascular disease, neurological disorders, and emerging pathogens (such as the Zika virus), but also obesity, human migration, eye syndromes, orphan diseases, and forensics – not to mention the tools and methodologies used to study them. 

Big Data

There is a trend away from exploring a single gene or a few genes toward studying large panels of genes, entire exomes (the protein-coding regions of the genome), and, in many cases, entire genomes at a time. It is difficult to know whether this is simply because most of the low-hanging fruit has already been picked, whether the technologies enabling such endeavors have become that much more accessible, or even something as cynical as that being where the funding is. Probably all are at play. 
 
While the analysis of a single genome may tell us about a single patient, that will generally work only if there is a reference genome to compare it to. That reference is increasingly made up of large cohorts of individuals, and oftentimes amalgams of cohorts. 

Two of the most discussed announcements at the meeting were about just such cohorts.

Scores of presentations were based, in whole or in part, on exome data from the Exome Aggregation Consortium (ExAC), released in 2014, which “spans 60,706 unrelated individuals sequenced as part of various disease-specific and population genetic studies.” Daniel MacArthur, Ph.D., Co-Director of Medical and Population Genetics at the Broad Institute of MIT and Harvard and the principal driver behind ExAC, announced that they had doubled the amount of exome data contained in the database. He also announced the public release of the Genome Aggregation Database (gnomAD), which adds more than 15,000 whole genome sequences to the expanded ExAC dataset, and to which more than 5,000 principal investigators have provided data. 

Illumina makes the instruments on which the vast majority of genetic sequences are generated. In addition to supplying sequencers to third parties, it also operates its own CLIA-certified laboratory where it does pre-dispositional and undiagnosed disease testing. The company announced at the meeting that it had deposited more than 95,000 clinical variants into the open-access ClinVar – the largest single contribution to the National Center for Biotechnology Information (NCBI)-operated clinical variants database – and will continue to do so as they become available. 

“For them to release those into the public domain makes me very happy,” said Chris Gunter, Ph.D., associate professor of Pediatrics and Human Genetics at Emory University School of Medicine. Gunter noticed that a number of people on Twitter were celebrating this, while publicly naming and shaming other large labs that have yet to make their datasets freely available: “That’s crowd-sourced public pressure there!” 

Public databases like ExAC (now gnomAD) and ClinVar have become so important that when Gunter reviews a paper, she will not recommend it for publication unless the authors have checked their variants against them.
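The frequency sanity check Gunter describes can be sketched in a few lines. Everything below is invented for illustration – the variant IDs, the allele frequencies, and the 0.1% cutoff – and a real review would query a resource like gnomAD rather than a hard-coded table:

```python
# Hypothetical sketch: flag candidate "disease" variants that are too
# common in a healthy reference population to be plausibly pathogenic.
# The frequency table below stands in for a real resource such as gnomAD;
# all IDs and numbers are made up.

# chrom-pos-ref-alt -> allele frequency in the reference cohort (invented)
REFERENCE_FREQS = {
    "1-55516888-G-A": 0.00002,    # very rare: consistent with pathogenicity
    "7-117199644-ATCT-A": 0.011,  # ~1%: too common for a severe dominant disease
}

def screen_variants(candidates, max_af=0.001):
    """Return (variant, status) pairs: 'plausible', 'too_common', or 'absent'."""
    results = []
    for v in candidates:
        af = REFERENCE_FREQS.get(v)
        if af is None:
            results.append((v, "absent"))       # never seen in the reference cohort
        elif af > max_af:
            results.append((v, "too_common"))   # fails the frequency filter
        else:
            results.append((v, "plausible"))    # rare enough to remain a candidate
    return results

report = screen_variants(["1-55516888-G-A", "7-117199644-ATCT-A", "X-1000-C-T"])
```

The design point is simply that a claimed pathogenic variant which turns out to be common in a healthy reference population is suspect; the cutoff is an arbitrary placeholder that in practice depends on the disease model.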

Many other big data projects, or the results of using them, were touted as well. Among these were the National Heart, Lung, and Blood Institute’s Trans-Omics for Precision Medicine (TOPMed) program; the International Genome Sample Resource’s 1000 Genomes Project (at one time the largest public catalog of human variation and genotype data); The Broad’s Genotype-Tissue Expression (GTEx) Project; the Japanese Genotype-phenotype Archive (JGA), the European Genome-phenome Archive (EGA), and the NCBI’s database of genotypes and phenotypes (dbGaP); and the self-explanatory Chinese Million-ome Project. 

The results of other disease- or disease-area-specific projects generating and/or interpreting large datasets were also on prominent display. For example, the VISTA Cardiac Enhancer Browser allows for a search of about 100,000 putative human heart enhancers. The International Genetics and Translational Research in Transplantation Network (iGeneTRAIN) provides coverage of approximately 782,000 transplantation markers. And the International MS Genetics Consortium (IMSGC) announced the identification of 200 genetic loci associated with multiple sclerosis. “We have a very detailed map of what is important, what is not important,” says Nikolaos Patsopoulos, M.D., Ph.D., associate professor at Harvard Medical School, who led the analysis. “Now we can say we can explain the genetic basis of more than 50%.”

Electronic Health Records

Studies using patient records to correlate disease with medical and other data are not new – witness the Rochester Epidemiology Project, founded a half-century ago by the Mayo Clinic and still going strong. Yet digitized electronic health records have given researchers access to incredible sample sizes and huge amounts of data on the clinical phenotypes and features of patients as they encounter the health care system – ranging from demographics, medications prescribed, allergies, immunizations, lab results, and radiological findings all the way to billing information.

A search for “EHR” in the ASHG2016 online program yields no fewer than 37 posters and seven platform sessions, plus the opening plenary talk, titled “Disease heritability estimates using the electronic health records of 9 million patients.” The latter tackled the difficulty of discerning family relationships from EHRs. The authors developed an algorithm to extract family data from emergency contact information, allowing them to triangulate relationships that are not explicit. 
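The triangulation idea behind that plenary talk can be illustrated with a toy sketch. The records and the composition rules below are invented assumptions for illustration, not the authors' actual algorithm:

```python
# Hypothetical sketch: infer unstated family relationships by composing
# explicit emergency-contact links from EHRs. Names, relations, and the
# composition table are all made up for illustration.

# Explicit links from emergency-contact fields: (patient, relation, contact)
CONTACTS = [
    ("alice", "mother", "beth"),   # Beth is listed as Alice's mother
    ("beth",  "spouse", "carl"),   # Carl is listed as Beth's spouse
]

# How two chained relations compose into an implied one.
COMPOSE = {
    ("mother", "spouse"): "father_or_stepfather",
    ("father", "spouse"): "mother_or_stepmother",
}

def infer_relationships(contacts):
    """Triangulate implicit links by chaining pairs of explicit ones."""
    direct = {(p, r): c for p, r, c in contacts}
    inferred = []
    for (p1, r1), c1 in direct.items():
        for (p2, r2), c2 in direct.items():
            # If A's contact is B, and B has a contact C, compose the relations.
            if c1 == p2 and (r1, r2) in COMPOSE:
                inferred.append((p1, COMPOSE[(r1, r2)], c2))
    return inferred
```

Chaining the two explicit links yields the implicit one: Carl is likely Alice's father or stepfather. A real system would also have to resolve names, typos, and conflicting records before any such inference.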

Both proprietary and public algorithms played a prominent role in much of the science on display at ASHG2016, whether as the focus of a presentation or exhibit-hall showcase, or more behind the scenes in developing a platform or model to interpret data. 

Tools and Techniques

Many tools and techniques beyond (or perhaps as a result of, or in conjunction with) the collection and analysis of big data were behind the science. There were, of course, next-generation sequencing (NGS) and other instruments, custom and off-the-shelf arrays, and a myriad of other capital equipment and consumables. Several platforms – including 10X Genomics, PacBio, and Oxford Nanopore Technologies – facilitate longer DNA reads, which “allows you to show haplotype,” notes Gunter. “If you know that variants are on the same piece of DNA then you can establish which variants are traveling together, and which came from one parent versus the other.” 
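Gunter's point about variants “traveling together” can be shown with a toy example. The sites, alleles, and reads below are invented, and real phasing tools must additionally cope with sequencing error and uneven coverage:

```python
# Hypothetical sketch: alleles observed together on the same long read
# "travel together" and define a haplotype. Each tuple is the pair of
# alleles one long read saw at two heterozygous sites (data is made up).
from collections import Counter

READS = [
    ("A", "T"),  # read 1: allele A at site 1, T at site 2
    ("A", "T"),  # read 2 agrees: A and T sit on the same chromosome copy
    ("G", "C"),  # read 3: the other parental copy carries G and C
]

def phase(reads):
    """Count allele pairs seen on the same read; the dominant pairs
    are the candidate parental haplotypes."""
    return Counter(reads).most_common()

haplotypes = phase(READS)  # [(("A", "T"), 2), (("G", "C"), 1)]
```

Short reads that cover only one site at a time could not distinguish the A–T/G–C arrangement from A–C/G–T, which is exactly the ambiguity long reads resolve.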

10X extolled the virtues of its Chromium System for tackling single-cell gene expression, ensuring that bulk measurements don’t mask biologically relevant signals from, for example, individual tumor cells. Meanwhile, Bio-Rad and Illumina illustrated their co-developed droplet digital PCR (ddPCR)-based Single Cell Sequencing Solution workflow. 

CRISPR/Cas9-based gene editing, while not new, has become a mainstay of biological work, to the point that it’s one of Gunter’s must-do-to-publish techniques. CRISPR/Cas9’s utility goes beyond knocking out (or mutating) a single gene to confirm that a given function is lost (or gained). The meeting was chock-full of sessions, individual talks, and posters on the use of CRISPR/Cas9, including a plenary talk describing the results of a multiplexed screen using programmed guide pairs to discover and dissect the regulatory elements of the HPRT1 Mendelian disease gene.

There was a discussion on the use of CRISPR/Cas9 editing in patients. The ASHG2016 Program Committee “really tried to mix in basic science with clinical genetics and social issues around a central question,” said chair Anthony Antonellis, an associate professor of Human Genetics and Neurology at the University of Michigan. These latter issues are commonly referred to as ethical, legal and social implications (ELSI). 

Other examples of such mixing included several talks on sequencing healthy newborns and children with cancer, and returning results to the families. It turns out that there is far more reluctance to participate among the former group than the latter: “Parents who have gone through something as traumatic as having a child with cancer are desperate for getting any information,” says Janet Malek, Ph.D., associate professor in the Center for Medical Ethics and Health Policy at Baylor College of Medicine, who led the ELSI part of the study. 

The utility of routine newborn and patient screening to physicians, patients, payers, and society as a whole was also the subject of numerous presentations. And “as pre-dispositional sequencing becomes more available, cheaper, and of better quality, many people will elect it,” says Robert Green, M.D., M.P.H., director of the Genomes2People Research Program of Brigham and Women’s Hospital, and principal investigator of the BabySeq Project, among others. “We still need to understand whether it has benefits or harms, and what those are.”

Precision Medicine

The implicit or explicit goal of much of the science at ASHG2016 – even that seen as basic research – was to determine specific genetic causes that can ultimately lead to tailored treatments for individual patients.

“We’re really in an age now where we have these amazing technologies to collect huge amounts of data. We’re reaching a point where we can really stand back and ask, ‘What do we gain from attaining all this information?’,” says Antonellis. “Can we look into all of those data and really refine, for example, disease phenotypes? Can we find sub-phenotypes that may be clustering with some genetic explanation and others that have different genetic explanations?”

References

http://exac.broadinstitute.org/about

Josh P. Roberts is a freelance writer living in Minneapolis, USA