Mark III Systems Becomes Cray Solutions Provider

Today Cray announced it has signed a solutions provider agreement with Mark III Systems to develop, market and sell solutions that leverage Cray’s portfolio of supercomputing and big data analytics systems. “We’re very excited to be partnering with Cray to deliver unique platforms and data-driven solutions to our joint clients, especially around the key opportunities of data analytics, artificial intelligence, cognitive compute, and deep learning,” said Chris Bogan, Mark III’s director of business development and alliances. “Combined with Mark III’s full stack approach of helping clients capitalize on the big data and digital transformation opportunities, we think that this partnership offers enterprises and organizations the ability to differentiate and win in the marketplace in the digital era.”

Panel Discussion on Disruptive Technologies for HPC

In this video from the HPC User Forum, Bob Sorensen from Hyperion Research moderates a panel discussion on Disruptive Technologies for HPC. “A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market leading firms, products and alliances. The term was defined and phenomenon analyzed by Clayton M. Christensen beginning in 1995.”

Argonne Seeking Proposals to Advance Big Data in Science

The Argonne Leadership Computing Facility Data Science Program (ADSP) is now accepting proposals for projects hoping to gain insight into very large datasets produced by experimental, simulation, or observational methods. The larger the data, in fact, the better. Applications are due by June 15, 2017.

DOE’s INCITE Program Seeks Advanced Computational Research Proposals for 2018

Today the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program announced it is accepting proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering, and computer science domains. DOE’s Office of Science plans to award over 6 billion supercomputer processor-hours at Argonne National Laboratory and […]

NYU Hosts Advanced Computing for Competitiveness Forum on April 13

The New York University Center for Urban Science and Progress will host the Advanced Computing for Competitiveness Forum on April 13. Sponsored by the U.S. Council on Competitiveness, the day-long event will explore why "To out-compete is to out-compute." The Council's landmark Advanced Computing Roundtable (ACR), formerly the High Performance Computing (HPC) Initiative, is the preeminent forum for experts in advanced computing to set a national agenda on how such technologies should be leveraged for U.S. competitiveness. Advanced computing includes technologies such as high performance computing, artificial intelligence (AI), and the Internet of Things (IoT). ACR members represent industrial and commercial advanced computing users, hardware and software vendors, and directors of academic and national laboratory advanced computing centers.

GW4 Unveils ARM-Powered Isambard Supercomputer from Cray

Today the GW4 Alliance in the UK unveiled Isambard, the world's first ARM-based production supercomputer, at an Engineering and Physical Sciences Research Council (EPSRC) launch event at the Thinktank science museum in Birmingham. "Isambard is able to provide system comparison at high speed as it includes over 10,000 high-performance 64-bit ARM cores, making it one of the largest machines of its kind anywhere in the world. Such a machine could provide the template for a new generation of ARM-based services."

GTC to Feature 90 Sessions on HPC and Supercomputing

Accelerated computing continues to gain momentum. This year the GPU Technology Conference will feature 90 sessions on HPC and Supercomputing. “Sessions will focus on how computational and data science are used to solve traditional HPC problems in healthcare, weather, astronomy, and other domains. GPU developers can also connect with innovators and researchers as they share their groundbreaking work using GPU computing.”

Compressing Software Development Cycles with Supercomputer-based Spark

“Do you need to compress your software development cycles for services deployed at scale and accelerate your data-driven insights? Are you delivering solutions that automate decision making & model complexity using analytics and machine learning on Spark? Find out how a pre-integrated analytics platform that’s tuned for memory-intensive workloads and powered by the industry leading interconnect will empower your data science and software development teams to deliver amazing results for your business. Learn how Cray’s supercomputing approach in an enterprise package can help you excel at scale.”

SDSC Seismic Simulation Software Exceeds 10 Petaflops on Cori Supercomputer

Researchers at SDSC, working with Intel Corporation, have developed a new seismic software package that has enabled the fastest seismic simulation to date. SDSC's ground-breaking run reached 10.4 petaflops on earthquake simulations, using 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at NERSC.

PRACE Publishes Best Practices for GPU Computing

The European PRACE initiative has published a Best Practices Guide for GPU Computing. “This Best Practice Guide describes GPUs: it includes information on how to get started with programming GPUs, which cannot be used in isolation but as “accelerators” in conjunction with CPUs, and how to get good performance. Focus is given to NVIDIA GPUs, which are most widespread today.”
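The guide's point that GPUs cannot be used in isolation, but act as accelerators driven by a host CPU, can be illustrated with a minimal sketch (a hypothetical example, not taken from the PRACE guide): the CPU allocates and fills host memory, copies the data to the GPU, launches a kernel that runs in parallel on the device, and copies the results back.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: runs on the GPU ("device"); each thread handles one element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) side: allocate and initialize the input data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) side: the accelerator works on its own copies of the data.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // The CPU launches the kernel; the GPU executes it across many threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back so CPU code can use it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

This CPU-orchestrates, GPU-computes division of labor is the "accelerator" model the guide refers to; performance tuning then largely comes down to minimizing host-device transfers and keeping enough threads in flight to saturate the device.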