Natural Sciences and Engineering Research Council of Canada Funds Big Data Software Tools with $7.3M in Development Funding
News May 01, 2014
Current advanced technologies for genetic analysis have created almost unimaginable amounts of data, measured in petabytes – a million billion bytes. Genomic researchers are keen to analyze these data and identify genetic clues that could point to new ways to prevent or cure cancer. Such an effort, however, requires thousands of high-performance computers working in tandem, along with the yet-unavailable software tools that can coordinate such a daunting and complex exercise.
The Canadian federal government says it is providing $7.3 million in funding for a collaboration – both in Canada and internationally – that will develop tools to effectively manipulate vast amounts of data to help find cures for cancer.
Funded through the Natural Sciences and Engineering Research Council of Canada (NSERC)’s Discovery Frontiers program, the project will develop powerful new computing tools so that researchers can analyze genetic data from thousands of cancers to learn more about how cancers develop and which treatments work best. At the heart of the project will be a new cloud computing facility, the Cancer Genome Collaboratory, capable of processing genetic profiles collected by the International Cancer Genome Consortium (ICGC) from cancers in some 25,000 patients around the world. The new data-mining tools are expected to be available in 2015 for beta testing by selected cancer genomics and privacy researchers, and the facility is planned to open to the broader research community in 2016. Researchers will be able to formulate questions about cancer risk, tumour growth, and drug treatments, and run analyses against the data.
The collaboration initiated by NSERC also includes federal granting organizations Genome Canada, the Canada Foundation for Innovation (CFI), and the Canadian Institutes of Health Research (CIHR). The partners are providing a total of $7.3 million to the project.
The University of Chicago is also contributing key computing resources, along with $500,000 in funding, to the project. In addition, a large initial donation of genomic data will come from the International Cancer Genome Consortium, which is based in Toronto and brings together researchers from some 16 jurisdictions around the world.
The International Cancer Genome Consortium is the largest worldwide coordinated effort to produce a catalogue of the genomic changes found in cancers. Its 10-year goal is to characterize the genetic material from tumours in 500 patients for each of the major cancer types.
The collaborators say the project will set up a unique cloud computing facility that will enable research on the world’s largest and most comprehensive cancer genome dataset. Using the facilities of the Cancer Genome Collaboratory, researchers will be able to run complex data mining and analysis operations across 10 to 15 petabytes of cancer genome sequences and their associated donor clinical information.
Using advanced metadata tagging, provenance tracking, and workflow management software, researchers will be able to execute complex analytic pipelines, create reproducible traces of each computational step, and share methods and results. This represents a fundamental reversal in the current practice of genome analysis. Rather than requiring researchers to spend weeks downloading hundreds of terabytes of data from a central repository before computations can begin, researchers will upload their analytic software into the Collaboratory cloud, run it, and download the compiled results in a secure fashion.
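The provenance-tracking idea described above can be illustrated with a minimal sketch. The names and structure here (`ProvenanceLog`, `record`, the example filtering step) are hypothetical, not part of the Collaboratory's actual software; the sketch only shows the general pattern of recording content hashes and parameters for each computational step so that an analysis leaves a reproducible trace.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    """Content hash used to identify inputs and outputs."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    """Hypothetical log: one entry per pipeline step, with hashed inputs/outputs."""

    def __init__(self):
        self.steps = []

    def record(self, step_name, params, inputs, outputs):
        self.steps.append({
            "step": step_name,
            "params": params,
            "input_hashes": [sha256(x) for x in inputs],
            "output_hashes": [sha256(x) for x in outputs],
            "timestamp": time.time(),
        })

    def trace(self) -> str:
        """Serialize the log so methods and results can be shared and audited."""
        return json.dumps(self.steps, indent=2)

# Example: a trivial quality-control step over a mock sequence fragment.
log = ProvenanceLog()
raw = b"ACGTACGTNNACGT"
filtered = raw.replace(b"N", b"")  # drop ambiguous bases
log.record("filter_ambiguous_bases", {"drop": "N"}, [raw], [filtered])
print(len(log.steps))  # one recorded, reproducible step
```

Because each entry ties outputs to hashed inputs and explicit parameters, a second researcher can rerun the same step and verify that the hashes match, which is the essence of the "reproducible traces" the project describes.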
Since the genetic data used in the Collaboratory is so detailed as to permit personal identification, privacy issues are central to the project’s design. A special team of computer scientists will investigate ways to guard the privacy of everyone whose data are analyzed. These will include techniques to make genetic profiles anonymous without the loss of details that would render the profiles overly vague, and techniques to structure queries from health researchers so they can be processed via secure data storage sites.
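One standard technique in this space, offered here purely as an illustration and not as the project's actual method, is generalization of quasi-identifiers: coarsening fields such as age and postal code so individual donors are harder to single out, while the analytically useful attributes are preserved. The field names and example records below are invented for the sketch.

```python
def generalize(record):
    """Coarsen identifying fields to 10-year age bands and a postal prefix."""
    age_band = (record["age"] // 10) * 10
    return {
        "age_range": f"{age_band}-{age_band + 9}",
        "postal_prefix": record["postal_code"][:3],
        "variant": record["variant"],  # analytic payload is kept intact
    }

# Mock donor records (entirely fictional)
donors = [
    {"age": 47, "postal_code": "M5G1X8", "variant": "KRAS G12D"},
    {"age": 43, "postal_code": "M5G2C4", "variant": "TP53 R175H"},
]

released = [generalize(d) for d in donors]
print(released[0]["age_range"])      # 40-49
print(released[0]["postal_prefix"])  # M5G
```

After generalization both example donors share the same age band and postal prefix, so neither can be distinguished by those fields alone; real systems tune the coarsening so profiles stay anonymous without becoming too vague to be useful, which is exactly the trade-off the project's privacy team describes.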
“Canada and many other nations around the world have already invested tremendous resources in sequencing of thousands of cancer genomes, but until now there has been no viable long-term plan for storing the raw sequencing data in a form that can be easily accessed by the research community,” says Lincoln Stein, director, informatics and bio-computing program, Ontario Institute for Cancer Research, and professor, Department of Molecular Genetics, University of Toronto. “The Cancer Genome Collaboratory will open this incredibly important data set to researchers from laboratories large and small, enabling them to achieve new insights into the causes of cancer and to develop innovative new ways to diagnose and manage the disease.”