In this video, Mark Henderson and Mike Pehlan from NetApp demonstrate how Dynamic Disk Pools enable performance, efficiency, and scalability. Watch as they cause a double-drive failure on two separate storage systems and then compare the RAID-6 system with a three-day rebuild to a system with Dynamic Disk Pools that can rebuild in around 18 hours.
DDP distributes data, parity information, and spare capacity across a pool of drives. Its intelligent algorithm (seven patents pending) defines which drives are used for segment placement, ensuring full data protection. DDP dynamic rebuild technology uses every drive in the pool to rebuild a failed drive, enabling exceptional performance under failure. Flexible disk-pool sizing optimizes utilization of any configuration for maximum performance, protection, and efficiency.
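The rebuild-time gap shown in the demo follows from a simple observation: a traditional RAID-6 rebuild is bottlenecked by writing the entire failed drive's contents to a single spare, while a DDP-style rebuild spreads reconstruction across every drive in the pool. A minimal back-of-the-envelope sketch (the drive size and throughput figures are illustrative assumptions, not NetApp's numbers):

```python
# Toy model of rebuild time: RAID-6 funnels all reconstruction writes
# into one spare drive, while a pool-based rebuild uses every drive.

def raid6_rebuild_hours(drive_tb, spare_write_mbps):
    """Rebuild time when a single spare drive absorbs all writes."""
    seconds = (drive_tb * 1e6) / spare_write_mbps  # TB -> MB
    return seconds / 3600

def ddp_rebuild_hours(drive_tb, per_drive_mbps, pool_drives):
    """Rebuild time when reconstruction is spread across the pool."""
    aggregate_mbps = per_drive_mbps * pool_drives
    seconds = (drive_tb * 1e6) / aggregate_mbps
    return seconds / 3600

# Assumed: 3 TB drives, 50 MB/s sustained rebuild rate, 24-drive pool.
print(raid6_rebuild_hours(3, 50))
print(ddp_rebuild_hours(3, 50, 24))
```

With these toy numbers the pooled rebuild is more than an order of magnitude faster; real-world rebuild rates are lower on both sides because the drives keep serving host I/O during reconstruction.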
Over at CIO Magazine, Henry Newman from Instrumental writes about the tradeoffs to consider when selecting the right CPU technology for your storage servers.
For at least this year, the two server CPU choices remain Intel and AMD. ARM might solve some of the computational parts of some of the problems, but in 2013 ARM won’t have enough I/O bandwidth for 10 Gigabit Ethernet ports and storage to be a viable alternative. This might change in 2014, but it’s too soon to predict, as developing PCIe buses with enough performance capability is complex.
Today Panasas announced that ATK (Alliant Techsystems, Inc.) has standardized on Panasas ActiveStor to help power its demanding research and product performance simulation processes. ATK is the world’s top producer of rocket propulsion systems and a leading supplier of military and commercial aircraft structures.
“It is crucial that ATK engineers use state-of-the-art systems in order to support our research and product design applications at the highest levels of performance and uptime,” said Ramesh Krishnan, senior staff engineer, engineering process and tools at ATK Aerospace. “Panasas ActiveStor speeds our design and simulation processes, saving us significant time and money.”
According to Panasas, ActiveStor delivers high performance and massively scalable capacity for ATK’s design workflows, including combustion modeling, computational fluid dynamics, mechanical design, and flight simulation, all within a single, unified global namespace. Read the Full Story.
In this video, Xyratex CEO Steve Barber discusses the company’s accomplishments in 2012 as well as their strategy for 2013.
“I believe that we are executing well on the long-term strategy that we outlined in October, and we have positive results with new customer wins and opportunities heading into fiscal year 2013,” said Xyratex CEO Steve Barber. “We have made significant progress in the High Performance Computing data storage market with partners such as Cray Inc., Dell and HP, and I believe we are well positioned with our unique IP to deliver greater value for our customers and partners. Over the next 18 to 24 months, we have a number of new opportunities, particularly in the areas of High Performance Computing data storage and Big Data, that I believe will be positive for the company.”
In related news, Xyratex recently released their earnings report for 2012. Read the Full Story.
In this slidecast, Eric Barton, Lead Architect for Intel’s High Performance Data Division, presents a progress update on the Fast Forward I/O & Storage program.
Back in July 2012, Whamcloud was awarded the Storage and I/O Research & Development subcontract for the Department of Energy’s FastForward program. Shortly afterward, the company was acquired by Intel. The two-year contract scope includes key R&D necessary for a new object storage paradigm for HPC exascale computing, and the developed technology will also address next-generation storage mechanisms required by the Big Data market.
The subcontract incorporates application I/O expertise from the HDF Group, system I/O and I/O aggregation expertise from EMC Corporation, object storage expertise from DDN, and scale testing facilities from Cray, teamed with file system, architecture, and project management skills from Whamcloud. All components developed in the project will be open sourced, benefiting the entire Lustre community.
This is a fascinating presentation for those interested in how an Exascale system might handle data, and the prototype that comes out of it may well represent the roadmap to the future of supercomputing.
“This award underscores the rapid adoption of our ClusterStor family of storage solutions, and the tremendous value it brings to data-intensive computing environments,” said Ken Claffey, senior vice president of the ClusterStor business at Xyratex. “The introduction of the ClusterStor 6000 was an important milestone for us, and in collaboration with our partners we’re helping end users achieve best-in-class performance, reliability and scalability – including implementing the fastest data storage system in the world.”
Today Fraunhofer in Germany announced a new release of the FhGFS parallel file system with major performance and HA improvements. Available as a free download, FhGFS features a distributed metadata architecture designed to provide the scalability and flexibility required to run today’s most demanding HPC applications.
Many of the improvements deliver a significant increase in performance and scalability, especially for metadata operations. In benchmarks performed with the beta release, a single metadata server could create about 35,000 files per second, and by using 20 metadata servers to distribute the load, it was possible to achieve file creation rates of over 500,000 operations per second. These and other benchmarking results were shown at SC12 in Salt Lake City.
This major release also introduces on-the-fly replication of file contents and metadata. With this step toward built-in high availability, FhGFS is moving closer to an enterprise-grade parallel file system. Read the Full Story or check out the release document.
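The near-linear metadata scaling in those benchmarks comes from partitioning the namespace across metadata servers, so that create operations for different files land on different servers. A minimal sketch of the idea, assuming a simple hash-based placement (FhGFS’s actual layout logic is more sophisticated than this):

```python
# Illustrative sketch of distributed metadata: each path is assigned
# to one of N metadata servers by hashing, so aggregate file-create
# throughput scales roughly with the number of servers.

import hashlib

def metadata_server_for(path, num_servers):
    """Deterministically pick a metadata server for a given path."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) % num_servers

# Creates for different paths spread across the 20 servers.
for p in ["/scratch/a", "/scratch/b", "/home/user/data.h5"]:
    print(p, "->", "mds", metadata_server_for(p, 20))
```

The same path always maps to the same server, which is what lets clients locate metadata without a central lookup.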
We are looking for a Senior Systems Engineer to be an individual contributor responsible for consultative pre-sales and post-sales support activities for customers. The pre-sales role includes both customer-facing and customer-remote operations, including but not limited to overall architecture, planning, and implementation of development and production systems in the customer’s environment. The post-sales role also includes customer problem triage, diagnosis, intervention, resolution, documentation, and customer follow-up.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, our insideHPC Job Board is powered by SimplyHired, the world’s largest job search engine.
As a reminder, we are offering FREE job listings for .EDU and .GOV domains, so email us at info @ insideHPC.com for a special discount code.
In this video, Xyratex CEO Steve Barber discusses the company’s move to HPC markets with ClusterStor Lustre-based storage systems.
Looking forward, we are leveraging our years of unique knowledge and experience to create and deliver fresh, ground-breaking design approaches to enterprise class storage that meet the specific needs of High Performance Computing, Big Data and Cloud.
Now available through partner/resellers including Cray, Dell, and HP, ClusterStor continues to gain traction in the HPC space. At insideHPC, we think Xyratex is one company to watch.
Scaling CFD and UQ codes on Sequoia. Ivan Bermejo-Moreno, Sanjeeb Bose, Joe Nichols, Curtis Hamman, Francisco Palacios and Julien Bodart; Stanford University Predictive Science Academic Alliance Program (PSAAP) and Center for Turbulence Research
Programming Models and their Designs for Exascale Systems. Dhabaleswar K. Panda, Ohio State University
Energy Efficiency and its Impact on Requirements for Future Programming Environments. John Shalf, Lawrence Berkeley National Laboratory
The RAMCloud project. Ankita Kejriwal, Stanford
Charm++: HPC with migratable objects. Laxmikant Kale, University of Illinois at Urbana-Champaign
The future of network-based storage. Brent Gorda, Intel
The event is free to attend and includes lunch on both days. Register now.
Over at Enterprise Storage Forum, Henry Newman looks at the future of file systems and examines whether REST will overtake POSIX as an interface of choice for all applications.
We do not have many POSIX file systems today that scale to tens of petabytes and billions of files. There are four file systems in production with a parallel namespace (Gluster, PanFS, Lustre, and GPFS) and a new entry called Ceph. Ceph, GPFS, Lustre and PanFS support parallel I/O, which is I/O from multiple threads (these threads could be running on multiple nodes) to a single file, but Gluster does not. On the other side, there are dozens of vendors developing REST- and SOAP-based object management interfaces, trying to create systems that support billions of objects in a single namespace. Given that these vendors are not constrained by POSIX atomicity requirements or support for parallel I/O, this is far easier than developing the same support inside a POSIX file system.
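Newman’s point about atomicity shows up in the interfaces themselves: POSIX must let many writers update byte ranges inside one shared file with atomic, immediately visible writes, while a typical REST object interface only replaces whole objects. A minimal sketch of the contrast, using a toy in-memory stand-in for an object store (hypothetical, not any vendor’s API):

```python
# POSIX-style parallel I/O vs. REST-style object I/O, side by side.

import os

def posix_write_range(fd, offset, data):
    """POSIX: atomically write a byte range inside a shared file."""
    os.pwrite(fd, data, offset)   # positioned write, no shared seek pointer

class ObjectStore:
    """Toy stand-in for a REST object API: no partial updates."""
    def __init__(self):
        self._objects = {}
    def put(self, key, body):     # whole-object replace (like HTTP PUT)
        self._objects[key] = bytes(body)
    def get(self, key):           # whole-object fetch (like HTTP GET)
        return self._objects[key]
```

Dropping byte-range semantics is exactly what frees object-store vendors from the coherence and locking machinery a parallel POSIX file system has to implement.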
This week Panasas announced that the UK’s University of Nottingham has upgraded its HPC center with Panasas ActiveStor 12 storage in a 240 terabyte deployment. The new cluster is used by numerous departments across the university, including computer science, pharmacy and engineering.
“We are delighted that the University of Nottingham chose Panasas to satisfy its HPC storage requirements,” said Barbara Murphy, chief marketing officer at Panasas. “ActiveStor gives the university unmatched performance, scalability and reliability without complex and time-consuming system management. We look forward to continuing to work with the university, as well as our many other academic customers in the region.”
The industry really needs more than POSIX (open/fopen, read/fread, write/fwrite) and more than simple REST put/get interfaces for data in the future. Neither has the richness to address the myriad of policies that will be needed in our future world. I predict that there will finally be some honest discussion about this among the customers that need it and the vendors that could create it. Maybe this should be my request to Santa. I have tried to encourage this discussion for years and have gotten no traction.
The Leibniz Supercomputing Centre (LRZ) has implemented an innovative IBM tape storage system to provide up to 16.5 petabytes of scientific data archiving and backup for the center’s SuperMUC supercomputer. Built with a novel hot-water cooling system, the SuperMUC combines 155,000 general-purpose processor cores with 320 terabytes of main memory to help scientists from across Europe study all fields of science.
“What we needed was a system that could store the data streams of one of the fastest computers in Europe, using standard components to keep costs low,” said Werner Baur, director of the Storage Group at LRZ. “It had to be scalable so that it is able to keep up with the development stages of the SuperMUC and it had to be able to integrate with our IT environment. That’s exactly what we’ve got.”
The intelligent archiving solution consists of two highly scalable IBM System Storage TS3500 Tape Library systems equipped with 22 LTO-5 drives and 11,000 tape cartridges. All told, the solution has a storage capacity of 16.5 petabytes and is scalable to 40 petabytes. An IBM System x3850 acts as the archive server, responsible for managing metadata, controlling the mass storage devices, and controlling the data flow. To ensure fast access to archived data, IBM System Storage DS3500 and IBM Storwize V7000 systems are used as high-capacity disk storage, along with 6 terabytes of solid-state drive (SSD) capacity. Read the Full Story.
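The quoted figures line up with LTO-5’s 1.5 TB native (uncompressed) capacity per cartridge, as a quick sanity check shows:

```python
# Sanity check on the LRZ numbers: 11,000 LTO-5 cartridges at 1.5 TB
# native capacity each match the stated 16.5 PB total.

CARTRIDGES = 11000
LTO5_NATIVE_TB = 1.5              # native capacity per cartridge

total_tb = CARTRIDGES * LTO5_NATIVE_TB
total_pb = total_tb / 1000        # decimal units: 1 PB = 1000 TB
print(total_pb)                   # prints 16.5
```

With LTO-5’s nominal 2:1 compression the same library could hold considerably more, so the quoted number is the conservative native figure.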