“With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount.”
The TOP500 list is a good proxy for how different interconnect technologies are being adopted for the most demanding workloads, and a useful leading indicator of enterprise adoption. The essential takeaway is that the world’s leading and most exotic systems are currently dominated by vendor-specific technologies. The OpenFabrics Alliance (OFA) will become increasingly important in the coming years as a forum that brings together the leading high-performance interconnect vendors and technologies to deliver a unified, cross-platform, transport-independent software stack.
Today, high-performance interconnects can be divided into three categories: Ethernet, InfiniBand, and vendor-specific interconnects. Ethernet is established as the dominant low-level interconnect standard for mainstream commercial computing requirements. InfiniBand originated in 1999 specifically to address workload requirements that Ethernet did not adequately serve, while vendor-specific technologies frequently have a time-to-market (and therefore performance) advantage over standardized offerings.
A survey conducted by insideHPC and Gabriel Consulting in Q4 of 2015 indicated that nearly 45% of HPC and large enterprise customers planned to spend more on system interconnects and I/O in 2016, with 40% maintaining spending at the prior year’s level. Among manufacturing respondents, the largest subset at approximately one third of those surveyed, over 60% planned to spend more and almost 30% to maintain spending going into 2016, underscoring the critical value of high-performance interconnects.
SGI’s Data Management Framework (DMF) software – when used within personalized medicine applications – provides a large-scale storage virtualization and tiered data management platform specifically engineered to administer the billions of files and petabytes of structured and unstructured fixed content generated by highly scalable and extremely dynamic life sciences applications.
In life sciences, perhaps more than any other HPC discipline, simplicity is key. The SGI solution meets this requirement by delivering a single system that scales to huge capabilities by unifying compute, memory, and storage. Researchers and scientists in personalized medicine (and most life sciences) are typically not computer science experts and want a simple development and usage model that enables them to focus on their research and projects.
A workflow to support genomic sequencing requires a collaborative effort among many research groups and a process spanning initial sampling to final analysis. Learn the four steps involved in pre-processing.
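The four pre-processing steps are not enumerated here, but as an illustrative sketch only – the step names, tool commands, and file names below are assumptions, not drawn from this document – a typical short-read pre-processing workflow can be expressed as an ordered pipeline:

```python
# Hypothetical sketch of a four-step genomic pre-processing pipeline.
# The tools and commands (FastQC, BWA, GATK) and all file names are
# illustrative assumptions, not taken from the source text.

PREPROCESSING_STEPS = [
    ("quality_control",    "fastqc sample.fastq.gz"),
    ("alignment",          "bwa mem ref.fa sample.fastq.gz > sample.sam"),
    ("mark_duplicates",    "gatk MarkDuplicates -I sample.bam -O dedup.bam -M metrics.txt"),
    ("base_recalibration", "gatk BaseRecalibrator -I dedup.bam -R ref.fa "
                           "--known-sites known.vcf -O recal.table"),
]

def run_pipeline(steps, executor=print):
    """Run each step in order; `executor` is swappable (e.g. a no-op for
    dry runs, or subprocess.run for real execution). Returns the ordered
    list of completed step names."""
    completed = []
    for name, command in steps:
        executor(f"[{name}] {command}")
        completed.append(name)
    return completed
```

A dry run such as `run_pipeline(PREPROCESSING_STEPS, executor=lambda cmd: None)` returns the step names in order, which makes the workflow structure easy to inspect before wiring in real tool invocations.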
If the keys to health, longevity, and a better overall quality of life are encoded in our individual genetic make-up, then few advances in the history of medicine can match the significance and potential impact of the Human Genome Project. Since its instigation in 1985, the race has centered on dramatically improving the breadth and depth of genomic understanding while reducing the costs involved in sequencing, storing, and processing an individual’s genomic information.