“Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief that a lightweight software approach may be sufficient for taking advantage of solid state media. Taking data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfolds into a more radical re-design of the software architecture and ultimately makes a case for an I/O interception layer.”
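For readers unfamiliar with the idea, the sketch below shows one common way an I/O interception layer can be slotted between an application and the storage stack on Linux, via an LD_PRELOAD shim. This is purely illustrative and not the speaker's design; the library name, the choice to hook write(), and the logging are all assumptions.

```c
/* intercept.c - hypothetical LD_PRELOAD shim illustrating I/O interception.
 * Not the talk's implementation; just a minimal example of the technique. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

/* Intercept write(): note the request, then forward to the real libc call. */
ssize_t write(int fd, const void *buf, size_t count)
{
    static ssize_t (*real_write)(int, const void *, size_t) = NULL;
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))
                     dlsym(RTLD_NEXT, "write");

    char msg[64];
    int n = snprintf(msg, sizeof msg, "[intercept] fd=%d bytes=%zu\n", fd, count);
    if (fd != STDERR_FILENO)          /* skip logging the app's own stderr writes */
        real_write(STDERR_FILENO, msg, (size_t)n);

    return real_write(fd, buf, count);
}
```

Built as a shared object (for example, gcc -shared -fPIC -o libintercept.so intercept.c -ldl) and loaded with LD_PRELOAD=./libintercept.so, such a layer can observe or redirect every I/O call without modifying the application.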
Seagate Technology and Los Alamos National Laboratory are researching a new storage tier to enable massive data archiving for supercomputing. The joint effort is aimed at determining innovative new ways to keep massive amounts of stored data available for rapid access, while also minimizing power consumption and improving the quality of data-driven research. Under a Cooperative Research and Development Agreement, Seagate and Los Alamos are working together on power-managed disk and software solutions for deep data archiving, which represents one of the biggest challenges faced by organizations that must juggle increasingly massive amounts of data using very little additional energy.
Over at Enterprise Storage Forum, Henry Newman looks at why we should focus on how much work gets done rather than specifications as disk drives and SSDs get faster and faster. This is not a new rant for Henry, and in fact the importance of workflow over bandwidth or IOPS is the main theme at this year’s Mass Storage Systems and Technology Conference (MSST) coming up in May.
In this video, Al Roker from the Today Show looks at how Cray XC30 supercomputers give ECMWF more accurate forecasts than we get here in America. ECMWF uses advanced computer modeling techniques to analyze observations and predict future weather. Their assimilation system uses 40 million observations a day from more than 50 different instruments on satellites, and from many ground-based and airborne measurement systems.
Today DDN introduced the “industry’s fastest and most flexible” scale-out network attached storage (NAS) solution. As the newest product in the DDN GRIDScaler product family, the GS14K delivers the speed and scale that data-intensive environments need to accelerate analytics, increase reliability and integrate into modern workflows such as Hadoop, OpenStack and scale-out NAS environments. The GS14K is offered as an All Flash Array or Hybrid Storage platform, delivering the advantages of NAS data access with the high performance benefits of parallel file systems to support today’s big data demands in an economical, easy-to-manage appliance.
“This meeting is open to all Dell HPC customers and partners. During the event, we will establish the Dell HPC Community as an independent, worldwide technical forum designed to facilitate the exchange of ideas among HPC professionals, researchers, computer scientists and engineers. Our core objective is to provide an environment in which members can candidly discuss industry trends and challenges, gather direct feedback and input from HPC professionals and influence the strategic direction and development of Dell HPC Systems and ecosystems.”
In this TACC Podcast, Jorge Salazar reports that scientists and engineers at the Texas Advanced Computing Center have created Wrangler, a new kind of supercomputer to handle Big Data.
“Buffered read performance under Lustre has been inexplicably slow when compared to writes or even direct IO reads. A balanced FDR-based Object Storage Server can easily saturate the network or backend disk storage using o_direct based IO. However, buffered IO reads remain at 80% of write bandwidth. In this presentation we will characterize the problem, discuss how it was debugged, and present the proposed resolution. The format will be a presentation followed by Q&A.”
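For context on the o_direct comparison in the abstract, here is a minimal sketch of the difference between a buffered read (which goes through the client page cache) and an O_DIRECT read (which bypasses it and requires aligned buffers). The file path is a hypothetical Lustre mount, and this is not the presenter's benchmark code.

```c
/* Contrast a buffered read with an O_DIRECT read on a (hypothetical) Lustre file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)   /* 1 MiB per read */

int main(void)
{
    const char *path = "/mnt/lustre/testfile";   /* hypothetical path */

    /* Buffered read: data is staged through the kernel page cache. */
    int fd = open(path, O_RDONLY);
    char *buf = malloc(BUF_SIZE);
    if (fd >= 0 && buf)
        read(fd, buf, BUF_SIZE);
    if (fd >= 0) close(fd);
    free(buf);

    /* Direct read: O_DIRECT bypasses the cache; buffer must be page-aligned. */
    int dfd = open(path, O_RDONLY | O_DIRECT);
    void *dbuf = NULL;
    if (posix_memalign(&dbuf, 4096, BUF_SIZE) == 0 && dfd >= 0)
        read(dfd, dbuf, BUF_SIZE);
    if (dfd >= 0) close(dfd);
    free(dbuf);

    return 0;
}
```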
In this week’s Sponsored Post, Katie Garrison of One Stop Systems explains how Flash storage arrays are becoming more accessible as the economics of Flash becomes more attractive. “Comprised of a unique combination of a Haswell-based engine and 200TB Flash arrays, the FSA-SAN can be increased to a petabyte of storage with additional Flash arrays. Each 200TB array delivers 16 million IOPS, making it the ideal platform for high-speed data recording and processing with lightning fast data response time, high-availability and flexibility in the cloud.”
“UberCloud specializes in running HPC workloads on a broad spectrum of infrastructures, anywhere from national centers to public Cloud services. This session will be a review of lessons learned from UberCloud Experiments performed by industry end users. The live demonstration will cover how to achieve peak simulation performance and usability in the Cloud and at national centers, using fast interconnects, new-generation CPUs, SSD drives and UberCloud technology based on Linux containers.”