NetApp EF600 Storage Array Speeds HPC and Analytics

Today NetApp announced the NetApp EF600 storage array. The EF600 is an end-to-end NVMe midrange array that accelerates access to data and empowers companies to rapidly develop new insights for performance-sensitive workloads. “The storage industry is currently transitioning from the SAS to the NVMe protocol, which significantly increases the speed of access to data,” said Tim Stammers, senior analyst, 451 Research. “But conventional storage systems do not fully exploit NVMe performance, because of latencies imposed by their main controllers. NetApp’s E-Series systems were designed to address this architectural issue and are already used widely in performance-sensitive applications. The EF600 sets a new level of performance for the E-Series by introducing end-to-end support for NVMe, and should be considered by IT organizations looking for high-speed storage to serve analytics and other data-intensive applications.”

Video: Hyperion Research – Market Insight for HPC & AI

In this video, the Hyperion Research team describes how the company helps customers make fact-based decisions on technology purchases and business strategy. “Our industry experts are the former IDC high performance computing analyst team, which remains intact and continues all of its global activities. The group is comprised of the world’s most respected HPC industry analysts who have worked together for more than 25 years.”

Rigetti Computing acquires QxBranch for Quantum-powered Analytics

Today Rigetti Computing announced it has acquired QxBranch, a quantum computing and data analytics software startup. “Our mission is to deliver the power of quantum computing to our customers and help them solve difficult and valuable problems,” said Chad Rigetti, founder and CEO of Rigetti Computing. “We believe we have the leading hardware platform, and QxBranch is the leader at the application layer. Together we can shorten the timeline to quantum advantage and open up new opportunities for our customers.”

High Performance Computing in the World of Artificial Intelligence

In this special guest feature, Thierry Pellegrino from Dell EMC writes that data analytics powered by HPC & AI solutions are delivering new insights for research and the enterprise. “HPC is clearly no longer reserved for large companies or research organizations. It is meant for those who want to achieve more innovation, discoveries, and the elusive competitive edge.”

Penguin Computing Breaks STAC-M3 Performance Records with WekaIO

Today Penguin Computing and WekaIO announced record performance on the STAC-M3 Benchmark. The STAC-M3 Antuco and Kanaga Benchmark Suites are the industry standard for testing solutions that enable high-speed analytics on time series data, such as tick-by-tick market data. “By combining Penguin Computing Relion servers and FrostByte Storage with the WekaIO File System, the companies have affirmed that this integrated solution is ideal for algorithmic trading and quantitative analysis workloads, common in financial services.”
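The STAC-M3 suites themselves are proprietary, but the flavor of workload they exercise — aggregations over tick-by-tick market data — can be illustrated with a minimal sketch in plain Python. The symbols, prices, and sizes below are invented for illustration; real benchmark runs operate on vastly larger time-series databases.

```python
from collections import defaultdict

# Hypothetical tick records (symbol, price, size) -- a tiny stand-in for
# the tick-by-tick market data that STAC-M3-style queries scan.
ticks = [
    ("AAPL", 190.00, 100),
    ("AAPL", 190.10, 200),
    ("MSFT", 410.50, 50),
    ("MSFT", 410.40, 150),
]

def vwap_by_symbol(ticks):
    """Volume-weighted average price per symbol: sum(price*size) / sum(size)."""
    notional = defaultdict(float)  # running sum of price * size
    volume = defaultdict(int)      # running sum of size
    for symbol, price, size in ticks:
        notional[symbol] += price * size
        volume[symbol] += size
    return {s: notional[s] / volume[s] for s in notional}

print(vwap_by_symbol(ticks))
```

At benchmark scale the bottleneck is storage throughput rather than the arithmetic, which is why the filesystem and NVMe tier under the query engine dominate STAC-M3 results.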

High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD

Xiaoyi Lu from Ohio State University gave this talk at the 2018 OpenFabrics Workshop. “The convergence of Big Data and HPC has been pushing the innovation of accelerating Big Data analytics and management on modern HPC clusters. Recent studies have shown that the performance of Apache Hadoop, Spark, and Memcached can be significantly improved by leveraging the high-performance networking technologies, such as Remote Direct Memory Access (RDMA). In this talk, we propose new communication and I/O schemes for these data analytics stacks, which are designed with RDMA over NVM and NVMe-SSD.”

David Bader from Georgia Tech Joins PASC18 Speaker Lineup

Today PASC18 announced that this year’s Public Lecture will be given by David Bader from Georgia Tech. Dr. Bader will speak on “Massive-Scale Analytics Applied to Real-World Problems.” “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams. This talk will discuss the opportunities and challenges in massive data-intensive computing for applications in social sciences, physical sciences, and engineering.”
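The locality challenge Bader describes can be seen even in a toy sparse-graph computation. The sketch below — an invented example, not Bader’s code — labels connected components of an undirected graph with a breadth-first search over an adjacency list; the irregular, pointer-chasing memory access pattern is exactly what makes such workloads hard to scale on conventional hardware.

```python
from collections import defaultdict, deque

def connected_components(edges):
    """Label connected components of an undirected graph via BFS.

    Sparse graphs such as social networks are stored as adjacency lists,
    so traversal touches memory irregularly -- the locality problem that
    massive-scale graph analytics must contend with.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    component = {}  # vertex -> component label
    label = 0
    for start in adj:
        if start in component:
            continue
        component[start] = label
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in component:
                    component[v] = label
                    queue.append(v)
        label += 1
    return component

# Two separate groups of vertices: {0, 1, 2} and {3, 4}
print(connected_components([(0, 1), (1, 2), (3, 4)]))
```

Real community-detection work at the scales Bader discusses relies on far more sophisticated parallel algorithms, but the data-structure shape and access pattern are the same.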

Rock Stars of HPC: DK Panda

As our newest Rock Star of HPC, DK Panda sat down with us to discuss his passion for teaching High Performance Computing. “During the last several years, HPC systems have been going through rapid changes to incorporate accelerators. The main software challenges for such systems have been to provide efficient support for programming models with high performance and high productivity. For NVIDIA-GPU based systems, seven years back, my team introduced a novel ‘CUDA-aware MPI’ concept. This paradigm allows complete freedom to application developers for not using CUDA calls to perform data movement.”
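To make the CUDA-aware MPI idea concrete, here is a heavily hedged sketch using mpi4py and CuPy (library choices assumed for illustration; Panda’s own work is in the MVAPICH2 C library). It requires a CUDA-aware MPI build and GPUs, so it is illustrative only: the point is that the device buffer goes straight into the MPI call, with no explicit copy to host memory.

```python
# Illustrative sketch only -- needs a CUDA-aware MPI build, GPUs,
# mpi4py, and CuPy to actually run.
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# A buffer resident in GPU memory on every rank.
buf = cp.arange(4, dtype=cp.float64) if rank == 0 else cp.empty(4, dtype=cp.float64)

# Without CUDA-aware MPI, the application would stage the transfer through
# host memory (device->host copy, MPI_Send, then host->device on arrival).
# With CUDA-aware MPI, the GPU buffer is handed to MPI directly:
if rank == 0:
    comm.Send(buf, dest=1, tag=0)
elif rank == 1:
    comm.Recv(buf, source=0, tag=0)
```

This is what the quote means by freeing developers from issuing CUDA calls for data movement: the MPI library detects the device pointer and chooses the transfer path itself.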

Dr. Eng Lim Goh presents: HPC & AI Technology Trends

Dr. Eng Lim Goh from Hewlett Packard Enterprise gave this talk at the HPC User Forum. “SGI’s highly complementary portfolio, including its in-memory high-performance data analytics technology and leading high-performance computing solutions will extend and strengthen HPE’s current leadership position in the growing mission critical and high-performance computing segments of the server market.”

The Long Rise of HPC in the Cloud

“As the cloud market has matured, we have begun to see the introduction of HPC cloud providers and even the large public cloud providers such as Microsoft are introducing genuine HPC technology to the cloud. This change opens up the possibility for new users that wish to either augment their current computing capabilities or take the initial plunge and try HPC technology without investing huge sums of money on an internal HPC infrastructure.”