“The findings of a recent IDC study on the cybersecurity practices of U.S. businesses reveal a wide spectrum of attitudes and approaches to the growing challenge of keeping corporate data safe. While a minority of cybersecurity “best practitioners” set an admirable example, the study findings indicate that most U.S. companies today are underprepared to deal effectively with potential security breaches from outside or inside their firewalls.”
Manufacturing is enjoying an economic and technological resurgence with the help of high performance computing. In this insideHPC webinar, you’ll learn how the power of CAE and simulation is transforming the industry with faster time to solution, better quality, and reduced costs.
In this week’s Industry Perspective, Katie Garrison of One Stop Systems explains how GPUltima allows HPC professionals to create a highly dense compute platform that delivers a petaflop of performance at greatly reduced cost and space requirements, providing the compute power needed to quickly process the data generated by intensive applications.
Although liquid cooling is considered by many to be the future for data centers, the fact remains that some do not yet need to make a full transformation to liquid cooling, while others are constrained until the next budget cycle. Whatever the reason, new technologies like Internal Loop are more affordable than a full liquid-cooling conversion and can replace less efficient air coolers. This enables HPC data centers to still utilize the highest-performing CPUs and GPUs.
Data accumulation is just one of the challenges facing today’s weather and climate researchers and scientists. To understand and predict Earth’s weather and climate, they rely on increasingly complex computer models and simulations based on a constantly growing body of data from around the globe. “It turns out that in today’s HPC technology, the moving of data in and out of the processing units is more demanding in time than the computations performed. To be effective, systems working with weather forecasting and climate modeling require high memory bandwidth and fast interconnect across the system, as well as a robust parallel file system.”
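To see why data movement rather than arithmetic dominates in such codes, consider a minimal sketch (an illustration of the general point, not taken from the article): a three-point stencil of the kind found at the heart of many atmospheric models. Each grid point costs roughly 5 flops against roughly 16 bytes of streamed memory traffic, an arithmetic intensity far below what modern processors need to stay compute-bound.

```c
/* Minimal sketch (illustrative, not from the article): a 1-D
 * three-point stencil update, a building block of many weather and
 * climate kernels. Per point: 3 multiplies + 2 adds = ~5 flops,
 * against ~16 bytes of streaming traffic (one read of in[], one
 * write of out[], neighbors reused from cache). That is ~0.3
 * flop/byte, so the loop is limited by memory bandwidth, not by
 * the processor's peak flop rate. */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000

int main(void) {
    double *in  = malloc(N * sizeof *in);
    double *out = malloc(N * sizeof *out);
    if (!in || !out) return 1;

    for (long i = 0; i < N; i++)
        in[i] = (double)i;

    /* Bandwidth-bound stencil sweep. */
    for (long i = 1; i < N - 1; i++)
        out[i] = 0.25 * in[i - 1] + 0.5 * in[i] + 0.25 * in[i + 1];

    printf("%f\n", out[N / 2]);
    free(in);
    free(out);
    return 0;
}
```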
Dr. Lewey Anton reports on who’s moving on up in High Performance Computing. Familiar names in this edition include: Sharan Kalwani, John Lee, Jay Muelhoefer, Brian Sparks, and Ed Turkel. And be sure to let us know of HPC folks in new positions!
In this video from SC15, Dr. Eng Lim Goh from SGI describes how the company is embracing new HPC technology trends such as new memory hierarchies. With the convergence of HPC and Big Data as a growing trend, SGI envisions a “Zero Copy Architecture” that would bring together a traditional supercomputer with a Big Data analytics machine in a way that would not require users to move their data between systems.
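SGI does not spell out implementation details in the video, so the following is only a toy sketch of the general zero-copy idea: two workloads share one mmap’ed buffer, so the “analytics” side reads the “simulation” output in place instead of copying it into a second system. The process layout and buffer here are illustrative assumptions, not SGI’s design.

```c
/* Toy sketch of the general zero-copy idea (an assumption for
 * illustration, not SGI's architecture): a "simulation" process
 * writes results into a shared mapping, and an "analytics" process
 * reads the very same physical pages, with no serialization or
 * transfer step between the two. */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define N 1024

int main(void) {
    /* One shared mapping standing in for a memory pool visible to
     * both the compute side and the analytics side. */
    double *data = mmap(NULL, N * sizeof *data,
                        PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED) return 1;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: the "simulation" writes results in place. */
        for (int i = 0; i < N; i++)
            data[i] = (double)i * 0.5;
        return 0;
    }
    waitpid(pid, NULL, 0);

    /* Parent: the "analytics" side consumes the same pages directly,
     * rather than moving the data to a separate system. */
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        sum += data[i];
    printf("sum = %f\n", sum);

    munmap(data, N * sizeof *data);
    return 0;
}
```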
Lustre was originally developed as the fastest scratch file system supercomputer centers could get for HPC workloads, but over the years it has matured into an enterprise-class parallel file system supporting mission-critical workloads. Unfortunately, even though Lustre has become extremely attractive to enterprises and has been adopted by IT departments, some naysayers continue to claim that Lustre is still just a scratch file system. We in the Lustre community see quite a different picture.
SC15 has released final attendance figures from the Supercomputing conference in Austin. This year, the conference drew a record 12,868 attendees, including 4,829 who registered for the six-day Technical Program of invited speakers, technical papers, research posters, tutorials, workshops and more.
“Just as representative benchmarks like HPCG are set to replace Linpack, so a focus on software is taking over. From industry analysts to users at SC15, we heard that software is the number one challenge and the number one opportunity to have world-class impact.”
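One way to see why HPCG is considered more representative than Linpack: its core is a conjugate-gradient solve, whose dominant kernel is a sparse matrix-vector product. The toy CSR example below (a sketch for illustration, not HPCG code) shows the indirect, low-reuse memory accesses that make such workloads bandwidth-bound, in contrast to the dense, cache-friendly matrix factorization that Linpack measures.

```c
/* Sketch (not HPCG itself): the sparse matrix-vector product at the
 * heart of conjugate-gradient benchmarks like HPCG. The indirect
 * loads through col[] gather scattered entries of x[] with little
 * cache reuse, so performance tracks memory bandwidth and latency,
 * which dense Linpack largely hides. */
#include <stdio.h>

int main(void) {
    /* A tiny 3x3 sparse matrix in CSR (compressed sparse row) form. */
    int    rowptr[] = {0, 2, 4, 6};
    int    col[]    = {0, 1, 0, 2, 1, 2};
    double val[]    = {4.0, -1.0, -1.0, 4.0, -1.0, 4.0};
    double x[]      = {1.0, 2.0, 3.0};
    double y[3];

    for (int i = 0; i < 3; i++) {
        double sum = 0.0;
        /* Each nonzero needs a gather of x[col[j]]: very few flops
         * per byte of memory traffic. */
        for (int j = rowptr[i]; j < rowptr[i + 1]; j++)
            sum += val[j] * x[col[j]];
        y[i] = sum;
    }

    for (int i = 0; i < 3; i++)
        printf("y[%d] = %f\n", i, y[i]);
    return 0;
}
```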