Next-generation sequencing (NGS) tools produce vast quantities of genetic data, posing a growing number of challenges to life sciences organizations. Accelerating analytics, providing adequate storage and memory capacity, speeding time-to-solution, and reducing costs are major concerns for IT departments operating on traditional computing systems. In this week’s Sponsored Post, Bill Mannel, Vice President & General Manager of HPC Segment Solutions and Apollo Servers, Data Center Infrastructure Group at Hewlett Packard Enterprise, explains how next-generation sequencing is altering the patient care landscape.
“Atos is determined to solve the technical challenges that arise in life sciences projects, so that scientists can focus on making breakthroughs rather than on technicalities. We know that one size doesn’t fit all, which is why we carefully studied The Pirbright Institute’s challenges to design a customized and unique architecture. It is a pleasure for us to work with Pirbright and to contribute in some way to reducing the impact of viral diseases”, says Natalia Jiménez, WW Life Sciences lead at Atos.
Researchers at the Earlham Institute (EI), The Sainsbury Laboratory (TSL) and the James Hutton Institute have found a new way to decipher these large stretches of DNA to discover and annotate pathogen resistance in plants. “Using the PacBio platform, which can read longer stretches of DNA in their entirety, along with the NB-LRR gene workflow “RenSeq” (Resistance gene enrichment sequencing), the data not only targets R genes but also the important regulatory regions of DNA – the promoters and terminators that signal when to start making a protein and when to stop.”
A workflow to support genomic sequencing requires a collaborative effort between many research groups and a process from initial sampling to final analysis. Learn the 4 steps involved in pre-processing.
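The article’s four pre-processing steps are not enumerated here, but quality trimming of raw reads is one step that commonly appears in NGS pre-processing pipelines. Below is a minimal sketch in Python of the simplest form of 3' quality trimming; the function names, Q20 cutoff, and example read are illustrative assumptions, not taken from the article:

```python
def phred_scores(qual_line, offset=33):
    """Convert an ASCII quality string (Sanger/Illumina 1.8+ encoding) to Phred scores."""
    return [ord(c) - offset for c in qual_line]

def trim_3prime(seq, qual_line, min_q=20):
    """Trim low-quality bases from the 3' end of a read.

    Walks back from the end of the read until a base meets the quality
    cutoff, mimicking the simplest quality-trimming step performed on raw
    reads before alignment.
    """
    scores = phred_scores(qual_line)
    end = len(seq)
    while end > 0 and scores[end - 1] < min_q:
        end -= 1
    return seq[:end], qual_line[:end]

# Example: 'I' encodes Q40, '#' encodes Q2, so the last two bases are trimmed.
seq, qual = trim_3prime("ACGTACGT", "IIIIII##")
```

Production pipelines use dedicated tools for this step, but the core operation is the same: discard base calls whose error probability is too high to be useful downstream.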
If the keys to health, longevity, and a better overall quality of life are encoded in our individual genetic make-up, then few advances in the history of medicine can match the significance and potential impact of the Human Genome Project. Since its instigation in 1985, the race has centered on dramatically improving the breadth and depth of genomic understanding while reducing the costs of sequencing, storing, and processing an individual’s genomic information.
“Unchecked data growth and data sprawl are having a profound impact on life science workflows. As data volumes continue to grow, researchers and IT leaders face increasingly difficult decisions about how to manage this data yet keep the storage budget in check. Learn how these challenges can be overcome through active data management and leveraging cloud technology. The concepts will be applied to an example architecture that supports both genomic and bioimaging workflows.”
There are times when a convergence of technologies can improve the well-being of a very large number of people. Several technological innovations are now coming together that can greatly improve recovery from life-threatening illnesses and prolong and improve quality of life. With a combination of faster and more accurate genomic sequencing, faster computer systems, and new algorithms, the work of discovering which medicine will work best for an individual patient has moved from research institutions to bedside doctors. Physicians and other healthcare providers now have better, faster, and more accurate tools to determine optimal treatment plans based on more patient data. This is especially true for pediatric cancer patients. These fast-moving technologies have become the center of a national effort to help millions of people overcome certain diseases.
The University of Illinois at Urbana-Champaign is leading three new centers of innovation funded through the National Science Foundation’s Industry/University Cooperative Research Centers (I/UCRC) program.
“The data that I presented from the Sanger Institute is typical of the profiles that we come across: a mix of good streaming IO (i.e. the larger reads), but unexpectedly high numbers of small reads and writes. These small reads and writes are potentially harmful to the file system. We’ve profiled HPC applications in various life sciences organizations, not just the Sanger Institute, and we’ve found these IO patterns throughout. We’ve also seen similar IO patterns in EDA and oil and gas applications.”
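The mixed profile described in the quote can be made visible by bucketing request sizes from an IO trace. A minimal sketch in Python, assuming a hypothetical list of per-request byte counts; the 4 KiB and 1 MiB thresholds are illustrative, not figures from the Sanger data:

```python
from collections import Counter

def bucket_io_sizes(request_sizes, small=4096, large=1 << 20):
    """Classify IO request sizes to expose a mixed IO profile.

    Requests under `small` bytes are the many tiny reads and writes that
    strain a parallel file system; those at or above `large` bytes are the
    streaming IO it handles well. Everything in between is 'medium'.
    """
    counts = Counter()
    for size in request_sizes:
        if size < small:
            counts["small"] += 1
        elif size >= large:
            counts["streaming"] += 1
        else:
            counts["medium"] += 1
    return dict(counts)

# A trace dominated by large sequential reads but peppered with tiny IOs,
# matching the "good streaming plus unexpectedly small IO" pattern described.
profile = bucket_io_sizes([4 << 20, 8 << 20, 512, 128, 2 << 20, 300])
```

A histogram like this is the first thing to pull from a trace when deciding whether a workload belongs on a parallel file system or would be better served by a small-IO tier.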
Today IBM and the University of Calgary announced a five-year collaboration to accelerate and expand genomic research into common childhood conditions such as autism, congenital diseases and the many unknown causes of illness. As part of the collaboration, IBM will augment the existing research capacity at the Cumming School of Medicine’s Alberta Children’s Hospital Research Institute by installing a POWER8-based computing and storage infrastructure along with advanced analytics and cognitive computing software.