In this slidecast, Ken Claffey from Xyratex describes the company’s new ClusterStor 1500 storage system. Designed for scale-out HPC storage solutions, the ClusterStor 1500 delivers HPC performance and efficiency with help from the Lustre file system.
"Departments within larger organizations or medium-sized enterprises today, especially in the commercial, academic and government sectors, represent an underserved market. They need high-performance and scalable storage solutions that are cost-efficient, easy to deploy and manage, and reliable even under heavy workloads," said Ken Claffey, senior vice president of the ClusterStor business at Xyratex. "Growth in this market segment is being driven by the increasing adoption of simulation applications in a wide range of industries, from car and aircraft design to chemical interactions and financial modeling. Traditional enterprise storage systems are simply not designed to meet the performance needs of these applications, so we engineered and built the affordable and modular ClusterStor 1500 to bring the performance power of Lustre to this underserved and growing market in the way that only ClusterStor can."
With the ability to scale performance from 1.25 GB/s to 110 GB/s and raw capacity from 42 TB to 7.3 PB, the ClusterStor 1500 is purpose-built to meet the needs of data-intensive, department-level compute clusters, providing best-in-class scale-out storage for mid-tier high performance computing environments. The ClusterStor 1500 solution features scale-out storage building blocks, the Lustre parallel file system, and a comprehensive management platform that eliminates the guesswork usually associated with building and optimizing your own HPC storage solution.
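To give a feel for how those two advertised ranges might translate into a configuration, here is a minimal sizing sketch. It assumes, purely for illustration, roughly linear scaling between the entry and maximum configurations quoted above; the function and thresholds are mine, not Xyratex's, and real systems are quantized into vendor-defined building blocks.

```python
# Hypothetical sizing sketch for a scale-out system like the ClusterStor 1500,
# assuming (illustratively) linear scaling between the entry configuration
# (1.25 GB/s, 42 TB) and the maximum (110 GB/s, 7.3 PB) quoted above.
ENTRY_GBPS, MAX_GBPS = 1.25, 110.0      # advertised throughput range
ENTRY_TB, MAX_TB = 42.0, 7300.0         # advertised raw capacity range

def fraction_of_max(target_gbps: float, target_tb: float) -> float:
    """Return how far into the advertised range a workload falls, taking
    whichever dimension (throughput or capacity) is more demanding."""
    if not (ENTRY_GBPS <= target_gbps <= MAX_GBPS):
        raise ValueError("throughput target outside the advertised range")
    if not (ENTRY_TB <= target_tb <= MAX_TB):
        raise ValueError("capacity target outside the advertised range")
    f_perf = (target_gbps - ENTRY_GBPS) / (MAX_GBPS - ENTRY_GBPS)
    f_cap = (target_tb - ENTRY_TB) / (MAX_TB - ENTRY_TB)
    return max(f_perf, f_cap)

# Example: a department needing 10 GB/s and 500 TB sits early in the range,
# so an entry configuration plus a few expansion blocks would likely suffice.
print(f"{fraction_of_max(10.0, 500.0):.0%} of the advertised maximum")
```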
Now that the deployment of the 1 Terabyte/sec file system at Blue Waters has been completed, what comes next? In this video from the Xyratex Blog, John Fragalla, principal solutions architect at Xyratex, discusses the value that ClusterStor brings to the HPC market and what the company has learned from designing and deploying ClusterStor solutions.
In this video from the Lustre User Group 2013, Hugo Falter from ParTec presents: LUG2013 EOFS Update. As Director of the EOFS Administrative Council, Falter provides an excellent overview of what's going on with European supercomputing initiatives.
Processing the vast quantities of data produced by the SKA will require very high performance central supercomputers capable of 100 petaflops of processing power. This is about 50 times more powerful than the most powerful supercomputer in 2010 and equivalent to the processing power of about one hundred million PCs.
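Those comparisons are easy to sanity-check. In the quick arithmetic below, the supercomputer figures are the reported Linpack results for the two systems that held the TOP500 #1 spot during 2010, while the ~1 gigaflops per desktop PC is an illustrative assumption of mine, not a figure from the article:

```python
# Sanity-checking the SKA compute comparison above.
SKA_FLOPS = 100e15        # 100 petaflops
JAGUAR_2010 = 1.76e15     # Jaguar, TOP500 #1 for most of 2010
TIANHE_1A_2010 = 2.57e15  # Tianhe-1A, which took #1 in November 2010
PC_FLOPS = 1e9            # assumed ~1 gigaflops per desktop PC (illustrative)

print(f"vs. Jaguar:     {SKA_FLOPS / JAGUAR_2010:.0f}x")     # ~57x
print(f"vs. Tianhe-1A:  {SKA_FLOPS / TIANHE_1A_2010:.0f}x")  # ~39x
print(f"equivalent PCs: {SKA_FLOPS / PC_FLOPS:,.0f}")        # 100,000,000
```

The two ratios bracket the article's "about 50 times," and the PC equivalence comes out to exactly one hundred million under the assumed per-PC figure.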
There is fierce competition in the storage market to offer the best-performing devices, with great management at a low price. The EIOW group decided from the outset that it would not attempt to offer an end-to-end solution, which would necessarily mean competing with, rather than working with, storage providers. The focus of EIOW is on middleware that provides, for example, schemas describing data structure and layout, novel access methods to data for applications, a uniform data management infrastructure, and a framework for the implementation of layered I/O software, similar in spirit to HDF5 as a specialized use of a parallel file system. We decided EIOW should be open and have interfaces to layer on lower-level storage infrastructure such as object stores, databases and file systems provided by storage vendors, allowing their expertise and leadership in this area to continue to benefit the HPC community.
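To make the layering idea concrete, here is a minimal sketch of my own (the names StorageBackend, PosixBackend, ObjectStoreBackend and DataSet are hypothetical, not part of any EIOW API) showing how middleware can expose one schema-aware interface to applications while delegating persistence to pluggable lower-level backends:

```python
# Illustrative layering sketch only; all class names here are hypothetical.
# Applications see one uniform, schema-aware interface, while storage
# providers supply the backend that actually moves bytes.
import os
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Lower-level storage layer supplied by a storage provider."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class PosixBackend(StorageBackend):
    """Backend that persists objects as files in a directory tree."""
    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)
    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)
    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

class ObjectStoreBackend(StorageBackend):
    """Backend that persists objects in a flat key/value object store
    (stubbed with a dict; a real one would speak to S3, Ceph, etc.)."""
    def __init__(self):
        self._objects: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]

class DataSet:
    """Middleware layer: applications describe data by name and schema and
    never see which backend stores it, in the spirit of HDF5 layered over
    a parallel file system."""
    def __init__(self, backend: StorageBackend, schema: dict[str, str]):
        self.backend = backend
        self.schema = schema  # e.g. field name -> type description
    def write(self, name: str, data: bytes) -> None:
        if name not in self.schema:
            raise KeyError(f"{name!r} is not declared in the schema")
        self.backend.put(name, data)
    def read(self, name: str) -> bytes:
        return self.backend.get(name)

# The same application code runs unchanged over either backend:
ds = DataSet(ObjectStoreBackend(), schema={"temperature": "float64[]"})
ds.write("temperature", b"\x00" * 64)
assert ds.read("temperature") == b"\x00" * 64
```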
We have invented a unique approach to building a fabric across a large number of Ethernet switches, and built a comprehensive technology platform based on this Flexible Radix Switching (FRS) technique. This innovation enables transparent integration with existing data center solutions and big improvements to networks supporting cloud, virtualization, and big data applications. These data center network solutions are superior in terms of cost, performance, robustness and ease of use.
Indiana University has contributed Big Data expertise and infrastructure to NASA’s Operation IceBridge, a decade-long polar ice monitoring project.
For the past four years, IU Research Technologies, a cyberinfrastructure and service center affiliated with the Pervasive Technology Institute (PTI), has provided IT support for the Center for Remote Sensing of Ice Sheets (CReSIS), a National Science Foundation Science and Technology Center led by the University of Kansas. Kansas scientists provide NASA with the radar technology that measures the physical interactions of polar ice sheets in Greenland, Chile and Antarctica. IU experts bring innovative data management and storage solutions to the missions.
"Essentially, IU has built a supercomputer that can fly," said Rich Knepper, manager of IU's campus bridging and research infrastructure team within Research Technologies. "During this current mission, our system provided analysis of radar data as it was collected, in real time, allowing mission scientists to see the ice bed information as the plane flew over the Arctic."
HPC Wales is contributing to the dream of building the world's first car capable of breaking the 1,000 mph barrier. Launched in 2007 with the intention of building a rocket-powered car capable of attaining supersonic speeds, the Bloodhound project also aims to inspire young people to take up careers in science and engineering by making all of its research and design material available to teachers, students and visitors.
"Using high performance computers is the only way, really, you can do realistic flow simulations for a vehicle as complex as this," said Dr Ben Evans, Bloodhound SSC's Computational Fluid Dynamics (CFD) Engineer. "There are lots of things we need to understand about the aerodynamics of the vehicle to make sure that it's safe. We need to understand where the loads are distributed across the vehicle, and we need to understand whether we've got the drag (the resistive force of the air that will be pushing on the car) as low as it can be, so that our engines can propel us to the speeds we are going for. Doing the modelling to understand all of that requires some incredibly complex calculations processing massive amounts of data. HPC Wales has been invaluable to us, simply because of the size of the machine and the amount of resource that we've got access to. It allows us to run simulations much quicker than we've ever been able to do before, which has allowed us to run more simulations than we've ever been able to do before. This has allowed us to understand this vehicle better than we'd ever really hoped to at this stage of the project."
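Evans' mention of drag refers to the standard aerodynamic relation F_d = ½ρv²C_dA. A quick back-of-the-envelope estimate, using an illustrative drag coefficient and frontal area of my own choosing rather than Bloodhound's actual figures, shows why the loads at 1,000 mph are so severe:

```python
# Back-of-the-envelope drag estimate at Bloodhound's target speed using the
# standard relation F_d = 0.5 * rho * v^2 * Cd * A. The drag coefficient and
# frontal area below are illustrative guesses, not the project's CFD results;
# real transonic/supersonic drag also varies strongly with Mach number, which
# this simple incompressible formula ignores.
RHO = 1.225            # air density at sea level, kg/m^3
V = 1000 * 0.44704     # 1,000 mph in m/s (~447 m/s)
CD = 0.3               # assumed drag coefficient (illustrative)
A = 2.0                # assumed frontal area in m^2 (illustrative)

drag_newtons = 0.5 * RHO * V**2 * CD * A
print(f"Drag at 1,000 mph: ~{drag_newtons / 1000:.0f} kN")  # ~73 kN
```

Even with these rough inputs, the drag force comes out in the tens of kilonewtons, several tonnes-force of air resistance, which is exactly the regime where CFD runs on machines like HPC Wales' become indispensable.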