Across industries, companies are beginning to watch the convergence of High-performance Computing (HPC) and Big Data. Many organizations in the Financial Services Industry (FSI) run their financial simulations on business analytics systems, and some on HPC clusters. But they have a growing problem: integrating analytics of unstructured data from sources like social media with their internal data. Learn how Lustre can help solve these challenges.
OpenSFS is sponsoring two Lustre BoFs at SC15 in Austin. As a nonprofit organization, OpenSFS was founded in 2010 to advance Lustre development, ensuring it remains vendor-neutral, open, and free.
“In business and commercial computing, momentum towards cloud and big data has already built up to the point where it is unstoppable. In technical computing, the growth of the Internet of Things is pressing towards convergence of technologies, but obstacles remain, in that HPC and big data have evolved different hardware and software systems, while OpenStack, the open source cloud computing platform, does not work well with HPC.”
Today Atos announced that the company has installed the first Petascale supercomputer in Brazil. Designed by Bull, the “Santos Dumont” system will be the largest supercomputer in Latin America. “We are very proud to equip Brazil with a world-class, Petascale High-Performance Computing (HPC) infrastructure and to launch an R&D Center in Petrópolis that is fully integrated with our global R&D,” said Philippe Vannier, Executive Vice-President and Chief Technology Officer at Atos. “With a presence in this country stretching back over more than 50 years, the collaborative ties that bind Bull, and now Atos, to Brazil in terms of leading-edge technologies are significant.”
Companies already using High-performance Computing (HPC) with a Lustre file system for simulations, such as those in the financial, oil and gas, and manufacturing sectors, want to convert some of their HPC cycles to Big Data analytics. This puts Lustre at the core of the convergence of Big Data and HPC.
Lustre* is not just for the national labs any longer. It was born to serve up data extremely fast to the world’s most powerful HPC clusters, using parallel I/O to improve performance and scalability. Here are five reasons why Lustre is enterprise-ready.
Although there are a number of truly huge implementations of Lustre today, the community is still far from reaching the maximum configurations that the Lustre architecture is designed for. Inside the Lustre File System describes the basics of how the Lustre File System operates with descriptions of the newest features.
Another year has passed and, with it, more growth for Lustre and the community around it. This year, as well as welcoming many new HPC sites to the community, we also delight in the evidence of activity from non-traditional Lustre sites and users.
There are always different levels of importance assigned to the various data files in a computer system, especially in a very large system storing petabytes of data. To maximize the use of the highest-speed storage, Hierarchical Storage Management (HSM) was developed to move and store data so that it remains within easy reach of users, yet on tiers with the appropriate speed and price.
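As a rough illustration of how this tiering works in practice, Lustre exposes HSM operations through the `lfs` client utility. The sketch below shows the typical lifecycle of a file moving between the fast Lustre tier and a slower, cheaper archive tier; it assumes a Lustre client with an HSM backend and copytool already configured, and the file path is purely illustrative.

```shell
# Hedged sketch of Lustre HSM usage (requires a configured HSM copytool;
# the path /lustre/project/results.dat is hypothetical).

# Show the file's current HSM state (e.g. exists, archived, released)
lfs hsm_state /lustre/project/results.dat

# Copy the file's data out to the archive tier (tape, object store, etc.)
lfs hsm_archive /lustre/project/results.dat

# Free the space on the fast Lustre tier; the archive copy remains,
# and the file still appears in the namespace
lfs hsm_release /lustre/project/results.dat

# Bring the data back to Lustre on demand (also triggered implicitly
# when a released file is read)
lfs hsm_restore /lustre/project/results.dat
```

In this model, infrequently used data costs only archive-tier storage, while the file remains visible to users at its original path and is restored transparently when accessed.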
The white paper, Inside the Lustre File System, describes the inner workings of Lustre in a way that is easy to understand, yet is technical enough for many users and systems administrators. Lustre is a mature and stable file system that has consistently been able to respond to the needs of organizations that require high performance throughput and expanding capacity.