Analytics Frameworks

This whitepaper is an excellent summary of how a next-generation platform can be developed to bring a wide range of data to life, giving users the ability to take action when needed. Organizations that must handle massive amounts of data, but struggle to make sense of it, should read this whitepaper.

Supercomputer Power Management

Today’s HPC supercomputers have significant power requirements that must be considered as part of their Total Cost of Ownership. In addition, efficient power management capabilities are critical to sustained return on investment.
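As a rough illustration of why power belongs in the TCO conversation, the sketch below estimates annual electricity cost from system power draw, facility PUE, and energy price. All figures and names here are illustrative assumptions for this sketch, not numbers from the whitepaper.

```python
# Back-of-the-envelope estimate of annual energy cost for an HPC system.
# All input values are hypothetical assumptions used only for illustration.

def annual_energy_cost(it_power_kw: float, pue: float, price_per_kwh: float) -> float:
    """Annual electricity cost: IT draw scaled by facility PUE, running 24x7."""
    hours_per_year = 24 * 365
    facility_power_kw = it_power_kw * pue  # total draw including cooling overhead
    return facility_power_kw * hours_per_year * price_per_kwh

# Example: a 2 MW system at PUE 1.4 and $0.10/kWh
cost = annual_energy_cost(it_power_kw=2000, pue=1.4, price_per_kwh=0.10)
print(f"Estimated annual energy cost: ${cost:,.0f}")  # roughly $2.45M per year
```

At multi-megawatt scale, even modest improvements in PUE or idle-power management translate into meaningful savings over a system's lifetime, which is why power management features figure into sustained return on investment.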

Seeking Submissions for the SC16 Impact Showcase

“Organizations who are currently employing high performance computing to advance their competitiveness and innovation in the global marketplace can highlight their compelling/interesting/novel real-world applications at SC16’s HPC Impact Showcase. The Showcase is designed to introduce attendees to the many ways that HPC matters in our world, through testimonials from companies large and small. Rather than a technical deep dive of how they are using or managing their HPC environments, their stories are meant to tell how their companies are adopting and embracing HPC as well as how it is improving their businesses. Last year’s line-up included presentations on topics from battling Ebola to designing at Rolls-Royce. It is not meant for marketing presentations. Whether you are new to HPC or a long-time professional, you are sure to learn something new and exciting in the HPC Impact Showcase.”

Interview: Dr. Eng Lim Goh on the Latest Trends in High Performance Data Analytics

In this video from ISC 2016, Dr. Eng Lim Goh from SGI discusses the latest trends in high performance data analytics and machine learning. “Dr. Eng Lim Goh joined SGI in 1989, becoming a chief engineer in 1998 and then chief technology officer in 2000. He oversees technical computing programs with the goal to develop the next generation computer architecture for the new many-core era. His current research interest is in the progression from data intensive computing to analytics, machine learning, artificial specific to general intelligence and autonomous systems. Since joining SGI, he has continued his studies in human perception for user interfaces and virtual and augmented reality.”

Lustre Powers FrostByte HPC Storage from Penguin Computing

FrostByte is a complete solution that integrates Penguin Computing’s new Scyld FrostByte software with an optimized high-performance storage platform. FrostByte will support multiple open software storage technologies including Lustre, Ceph, GlusterFS and Swift, and will first be available with Intel Enterprise Edition for Lustre. The entry-level FrostByte is a single rack with 500TB of highly available storage that can deliver up to 18GB/s and 500K metadata ops/s over Intel Omni-Path, Mellanox EDR InfiniBand or Penguin Arctica 100GbE network solutions. A single FrostByte “Scalable Unit” can deliver up to 15PB and greater than 500GB/s in 5 racks. Multiple Scalable Units can be combined to scale up to 100s of petabytes and 10s of terabytes/sec of aggregate storage bandwidth.
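To make the scaling claim concrete, the short sketch below aggregates capacity, bandwidth, and rack count across multiple Scalable Units using the per-unit figures quoted above (15PB and 500GB/s per 5-rack unit). Linear scaling across units is an assumption made here purely for illustration.

```python
# Rough scaling arithmetic for FrostByte "Scalable Units", using the per-unit
# figures from the announcement. Linear scaling is assumed for illustration.

PB_PER_UNIT = 15        # usable capacity per Scalable Unit, in petabytes
GBPS_PER_UNIT = 500     # aggregate bandwidth per Scalable Unit, in GB/s
RACKS_PER_UNIT = 5

def aggregate(units: int) -> dict:
    """Aggregate capacity, bandwidth, and footprint for N Scalable Units."""
    return {
        "capacity_pb": units * PB_PER_UNIT,
        "bandwidth_tb_s": units * GBPS_PER_UNIT / 1000,  # GB/s -> TB/s
        "racks": units * RACKS_PER_UNIT,
    }

# Example: 20 Scalable Units -> 300 PB, 10 TB/s, 100 racks
print(aggregate(20))
```

Twenty such units would land in the "100s of petabytes and 10s of terabytes/sec" range described in the announcement.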

Dell Brings Supercomputing Power to Mainstream Enterprises

“While traditional HPC has been critical to research programs that enable scientific and societal advancement, Dell is mainstreaming these capabilities to support enterprises of all sizes as they seek a competitive advantage in an ever increasing digital world,” said Jim Ganthier, vice president and general manager, Dell Engineered Systems, Cloud and HPC. “As a clear leader in HPC, Dell now offers customers highly flexible, precision built HPC systems for multiple vertical industries based upon years of experience powering the world’s most advanced academic and research institutions. With Dell HPC Systems, our customers can deploy HPC systems more quickly and cost effectively and accelerate their speed of innovation to deliver both breakthroughs and business results.”

At ISC 2016, The Times, They Are A-Changin’

In this special guest feature, Kim McMahon and Brian E. Whitaker share their perspectives on the supercomputer industry from ISC 2016 in Frankfurt. “ISC hosts 147 exhibitors this year across hardware, software, and services, and many have more to say than just marketecture – they’re bringing real opportunities and capabilities to market. Companies who once left ISC are returning, including NVIDIA and NetApp, because HPC and HPC-like technologies are becoming a critical facet of IT across all verticals and sectors. For an enterprise vendor, if you’re not participating, and not displaying real insights into HPC, you’re impeding your credibility with customers everywhere.”

DDN Launches ES14K Storage Appliance with Intel Lustre for the Enterprise

“With more than a decade of experience in designing, installing and supporting Lustre-based storage, DDN is the most experienced Lustre provider and has worked closely with us over many years to design optimized Lustre-based storage systems. DDN’s latest ES14K offering delivers a high-performing, high density appliance for the HPC market built on Intel Enterprise Edition for Lustre,” said Brent Gorda, GM of Intel’s High Performance Data Division.

Learnings from Operating 200 PB of Disk-Based Storage

Gleb Budman from Backblaze presented this talk at the 2016 MSST Conference. “For Q1 2016 we are reporting on 61,590 operational hard drives used to store encrypted customer data in our data center. In Q1 2016, the hard drives in our data center, past and present, totaled over one billion hours in operation to date. That’s nearly 42 million days or 114,155 years worth of spinning hard drives. Let’s take a look at what these hard drives have been up to.”
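The conversion behind the quoted figures is simple unit arithmetic. This minimal sketch, assuming one billion aggregate drive-hours as the input, reproduces the rough days and years numbers mentioned above.

```python
# Convert aggregate drive-hours into days and years, reproducing the rough
# figures quoted above (one billion hours -> ~42 million days -> ~114,155 years).

def drive_hours_to_days_years(total_hours: float):
    days = total_hours / 24
    years = days / 365
    return days, years

days, years = drive_hours_to_days_years(1_000_000_000)
print(f"{days:,.0f} days, {years:,.0f} years")  # 41,666,667 days, 114,155 years
```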

insideHPC Guide to Production Supercomputing

While HPC has its roots in academia and government, where extreme performance was the primary goal, high performance computing has evolved to serve the needs of businesses with sophisticated monitoring, pre-emptive memory error detection, and workload management capabilities. This evolution has enabled “production supercomputing,” where resilience can be sustained without sacrificing performance and job throughput.