In this Graybeards Podcast, Molly Rector from DDN describes how HPC storage technologies are mainstreaming into the enterprise space. “In HPC there are 1000s of compute cores that are crunching on PB of data. For Oil & Gas companies, it’s seismic and wellhead analysis; with bio-informatics it’s genomic/proteomic analysis; and with financial services, it’s economic modeling/backtesting trading strategies. For today’s enterprises such as retailers, it’s customer activity analytics; for manufacturers, it’s machine sensor/log analysis; and for banks/financial institutions, it’s credit/financial viability assessments. Enterprise IT might not have 1000s of cores at their disposal just yet, but it’s not far off. Molly thinks one way to help enterprise IT is to provide a Supercomputer-as-a-Service (ScaaS?) offering, where top-10 supercomputers can be rented out by the hour, sort of like a supercomputing compute/data cloud.”
Today Ellexus in the UK announced the release of Mistral, a “groundbreaking” product for balancing shared storage across a high-performance computing cluster. Developed in collaboration with ARM’s IT department, Mistral monitors application IO and cluster performance so that jobs exceeding their expected IO thresholds can be automatically identified and slowed down through IO throttling.
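Mistral’s internals aren’t public, but the general technique of slowing down an IO-heavy job can be sketched as a token bucket that caps a process’s bytes per second. This is a minimal illustration under assumed names and limits, not Ellexus’s implementation:

```python
import time

class IOThrottle:
    """Token-bucket rate limiter: a job must acquire its byte budget before issuing IO."""

    def __init__(self, bytes_per_sec):
        self.rate = float(bytes_per_sec)      # refill rate, bytes per second
        self.capacity = float(bytes_per_sec)  # burst allowance: one second's worth
        self.tokens = float(bytes_per_sec)    # start with a full bucket
        self.last = time.monotonic()

    def acquire(self, nbytes):
        """Block until nbytes of IO budget is available, then spend it."""
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the bucket capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough for the deficit to refill.
            time.sleep((nbytes - self.tokens) / self.rate)

# A job flagged as exceeding its IO threshold gets capped at 10 MB/s (hypothetical limit):
throttle = IOThrottle(10 * 1024 * 1024)
throttle.acquire(4096)  # a 4 KB write proceeds immediately out of the initial burst
```

A monitoring agent would wrap each throttled job’s reads and writes in `acquire()` calls; well-behaved jobs never block, while a runaway job is transparently slowed to its cap.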
“DDN’s IME14K revolutionizes how information is saved and accessed by compute. IME software allows data to reside next to compute in a very fast, shared pool of non-volatile memory (NVM). This new data adjacency significantly reduces latency by allowing IME software’s revolutionary, fast data communication layer to pass data without the file locking contention inherent in today’s parallel file systems.”
ICER at Michigan State is seeking an Information Technologist in our Job of the Week. “As a joint appointment between Michigan State University’s Information Technology Services and the Institute for Cyber-Enabled Research, the storage administrator manages computer storage clusters totaling a few nodes, including high-speed Ethernet network interconnections. The position will involve Linux systems administration and working in a team environment with systems administrators, programmers, and research specialists to support the university’s research computing needs; will deploy and test new systems and services; will monitor, diagnose, support, and upgrade existing services (using the technologies described in the ‘Desired Qualifications’ section); will work with staff to document internal and external procedures; will develop, expand, and implement tools and scripts to facilitate administration; and will work with users on how to use object-oriented Ceph-based systems.”
“If you think of a data mart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.” These “data lake” systems will hold massive amounts of data and be accessible through file and web interfaces. Data protection for data lakes will consist of replicas and will not require backup, since the data is not updated. Erasure coding will be used to protect large data sets and enable fast recovery. Open source will be used to reduce licensing costs, and compute systems will be optimized for MapReduce analytics. Automated tiering will be employed for performance and long-term retention requirements. Cold storage, storage that will not require power for long-term retention, will be introduced in the form of tape or optical media.
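The erasure-coding idea mentioned above can be illustrated with the simplest possible scheme: a single XOR parity block, which lets any one lost data block be rebuilt from the survivors. This is a toy sketch of the concept, not any vendor’s scheme; production systems use Reed–Solomon codes that tolerate multiple simultaneous failures:

```python
from functools import reduce

def make_parity(blocks):
    """XOR all equal-length data blocks together to form one parity block."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def rebuild(surviving_blocks, parity):
    """Recover the single missing data block: XOR the survivors with the parity."""
    return make_parity(surviving_blocks + [parity])

# Three data blocks plus one parity block: 33% overhead vs. 200% for 3x replication.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)

# Lose data[1]; rebuild it from the remaining blocks and the parity:
recovered = rebuild([data[0], data[2]], parity)
assert recovered == b"BBBB"
```

The storage-efficiency trade-off is the point: replication of cold, never-updated data triples capacity needs, whereas erasure coding protects the same data with a fractional overhead at the cost of extra compute during recovery.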
Today Atos announced that the French CEA and its industrial partners at the Centre for Computing Research and Technology, CCRT, have invested in a new 1.4 petaflop Bull supercomputer. “Three times more powerful than the current computer at CCRT, the new system will be installed in the CEA’s Very Large Computing Centre in Bruyères-le-Châtel, France, in mid-2016 to cover expanding industrial needs. Named COBALT, the new Intel Xeon-based supercomputer will be powered by over 32,000 compute cores, with a storage capacity of 2.5 petabytes and a throughput of 60 GB/s.”
In this video from the DDN booth at SC15, Dr. Erik Deumens of the University of Florida describes why unpredictable and less standard architectures and system configurations are necessary to meet the agility, availability, and responsiveness requirements of a mission of innovation and exploration. “The University of Florida’s Interdisciplinary Center for Biotechnology Research (ICBR) offers access to cutting-edge technologies designed to provide university faculty, staff and students, as well as research and commercial partners worldwide, with the tools and resources needed to advance scientific research.”
Oil and gas exploration is always a challenging endeavor, and with today’s large risks and rewards, optimizing the process is of critical importance. A whole range of High Performance Computing (HPC) technologies needs to be employed for fast and accurate decision making. This Intersect360 Research whitepaper, Seismic Processing Places High Demand on Storage, is an excellent summary of the challenges being addressed by storage solutions from Seagate.
Today DDN announced that its WOS 360 v2.0 object storage software was named a Visionary Product in the Professional Class Storage category at the fifteenth Annual Storage Visions Conference. The groundbreaking WOS enables organizations to build highly reliable, infinitely scalable and cost-efficient storage repositories to meet any unstructured data need and the most demanding storage requirements. With massively scalable storage technology that is able to outpace the performance requirements and growth of Enterprise Big Data, DDN continues to lead the market with revolutionary products that address the end-to-end data lifecycle, from cache and SSD to high-performance file storage, cloud, and archive.
In this video, Roger Goff from DDN describes how the company’s storage solutions have evolved to address the changing demands and requirements of HPC, from compute all the way throughout the entire data lifecycle. “Organizations leverage the power of DDN technology and the deep technical expertise of our team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. Our customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers.”