Beyond the obvious benefit to individuals, who receive more targeted diagnoses and treatments, organizations that implement or contribute to personalized medicine can expect several advantages of their own.
The Open Compute Project Foundation was created to produce the most efficient server, storage, and related hardware designs for the next generation of data centers, using an open and collaborative development model. By sharing designs that maximize density, minimize power consumption, and deliver predictable performance, completely new computing environments can be developed, free from the limitations of legacy thinking.
In the late 1980s, genomic sequencing began to shift from wet lab work to a computationally intensive science; by the end of the 1990s this trend was in full swing. The application of computer science and high performance computing (HPC) to these biological problems became the normal mode of operation for many molecular biologists.
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering the costs associated with hyperscale computing, has gained a significant industry following. This guide to Open Computing is designed to help organizations optimize their HPC environment to achieve higher performance at a lower operating cost.
Advances in computational biology as applied to NGS workflows have led to an explosion of sequencing data. Samples must be sequenced, and the resulting data transformed, analyzed, and stored. The machines capable of performing these computations once cost millions of dollars, but today the price tag has dropped into the hundreds of thousands of dollars.
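The transform and analyze stages of such a workflow can be illustrated with a minimal sketch. This is not a production NGS pipeline; the FASTQ parser and the GC-content metric below are simple stand-ins for the far heavier alignment and variant-calling steps a real workflow would run:

```python
# Minimal sketch of an NGS-style transform/analyze step: parse FASTQ
# records and compute per-read GC content. A real pipeline (alignment,
# variant calling) is far larger; this only illustrates the data flow.

def parse_fastq(lines):
    """Yield (read_id, sequence) pairs from FASTQ-formatted lines."""
    it = iter(lines)
    for header in it:
        seq = next(it)
        next(it)   # '+' separator line
        next(it)   # quality string (ignored in this sketch)
        yield header.lstrip("@").strip(), seq.strip()

def gc_content(seq):
    """Fraction of G/C bases in a read: a simple 'analyze' step."""
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy FASTQ input (two reads); real runs stream millions of records.
fastq = [
    "@read1", "GGCCAATT", "+", "IIIIIIII",
    "@read2", "ATATATGC", "+", "IIIIIIII",
]
results = {rid: gc_content(seq) for rid, seq in parse_fastq(fastq)}
# results -> {"read1": 0.5, "read2": 0.25}
```

Even this toy example shows why the data volumes grow so quickly: every downstream stage produces derived data (here, one metric per read) that must itself be stored and managed.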
Clusters that are purchased for specific applications tend not to be flexible as workloads change. What is needed is an infrastructure that can expand or contract as the workload changes. IBM, a recognized leader in High Performance Computing, is applying its expertise in both HPC and cloud computing to bring together the technologies that create the HPC Cloud.
Demands from users running applications in scientific, technical, financial, or research areas can easily outstrip the capabilities of in-house server clusters. IT departments must anticipate the compute and storage needs of their most demanding users, which can lead to extra capital and operating expenditure (CAPEX and OPEX) once the workload changes.
The term next generation sequencing (NGS) is really a misnomer. NGS implies a single methodology, but in fact there have been multiple generations over the past 10 to 15 years, and the end is nowhere in sight. Technological advances in the field continue to emerge at a record-setting pace.