SC20 Announces Record Number of Teams for Annual Student Cluster Competition

This year’s Student Cluster Competition at SC20 marks two milestones: it features the largest field in the competition’s 14-year history, with 19 teams, and for the first time it will be held completely virtually. “This year’s Student Cluster Competition will be very different, as it will be 100 percent cloud-based,” explained SC20 SCC […]

DOE, White House Announce Members of U.S. Quantum Advisory Committee

Technologists from the national labs, universities, federal agencies and industry have been named by the U.S. Department of Energy and the White House Office of Science and Technology Policy (OSTP) to the National Quantum Initiative Advisory Committee (NQIAC). Announced today, the NQIAC’s mission is to “counsel the Administration on ways to ensure continued American leadership […]

ACM Doctoral Dissertation Award Goes to Tel Aviv University Graduate

The Association for Computing Machinery (ACM) today announced that Dor Minzer is the recipient of the 2019 ACM Doctoral Dissertation Award for his dissertation, “On Monotonicity Testing and the 2-to-2-Games Conjecture.” The key contributions of Minzer’s dissertation are settling the complexity of testing monotonicity of Boolean functions and making a significant advance toward resolving the Unique Games Conjecture, […]

Intel, NSF Name Winners of Wireless Machine Learning Research Funding

Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects on ultra-dense wireless systems that meet the throughput, latency and reliability requirements of future applications – including distributed machine learning computations over wireless edge networks. Here are the […]

CMU’s Jerry Wang Wins 2020 Frederick Howes Award

Carnegie Mellon University Assistant Professor Gerald “Jerry” Wang has been named the 2020 Frederick A. Howes Scholar in Computational Science for his work on nanoscale fluid flows. Wang, a Department of Energy Computational Science Graduate Fellowship fellow from 2014 to 2018, earned his mechanical engineering doctorate from the Massachusetts Institute of Technology in 2019. His thesis focused on the structure of fluids moving through confined spaces, especially in nanotubes thousands of times thinner than a human hair.

New Memristors at MIT: Networks of Artificial Brain Synapses for Neuromorphic Devices

Engineers at the Massachusetts Institute of Technology have developed a new memristor design for neuromorphic devices, which mimic the neural architecture of the human brain, offering a possible glimpse of a future form of high performance edge computing: networks of artificial brain synapses. Published today in Nature Nanotechnology, results of […]

Video: Heterogeneous Computing at the Large Hadron Collider

In this video, Philip Harris from MIT presents: Heterogeneous Computing at the Large Hadron Collider. “Only a small fraction of the 40 million collisions per second at the Large Hadron Collider are stored and analyzed due to the huge volumes of data and the compute power required to process it. This project proposes a redesign of the algorithms using modern machine learning techniques that can be incorporated into heterogeneous computing systems, allowing more data to be processed and thus larger physics output and potentially foundational discoveries in the field.”
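
The filtering idea in that description can be sketched in a few lines of Python. The toy example below is not the project’s actual algorithm: the synthetic events, the stand-in linear scorer and the 0.1 percent keep rate are all invented for illustration, but they show the basic shape of an ML-style trigger that scores every event cheaply and stores only the rare high-scoring ones.

```python
# Toy sketch of an ML-style event filter ("trigger"). The features, the
# stand-in linear scorer and the keep rate are invented for illustration;
# they are not the algorithms actually used at the LHC.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic "events": each row is a small feature vector
# (e.g. total deposited energy, hit count, ...).
events = rng.normal(size=(1_000_000, 4)).astype(np.float32)

# Stand-in scorer; a real trigger would use a trained network compiled
# for the FPGAs or GPUs in the heterogeneous system.
weights = rng.normal(size=4).astype(np.float32)
scores = events @ weights

# Keep only the rare high-scoring events -- the same idea that lets the
# experiments store a small fraction of 40 million collisions per second.
threshold = np.quantile(scores, 0.999)
kept = events[scores > threshold]
print(f"kept {len(kept)} of {len(events)} synthetic events")
```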

Visualizing an Entire Brain at Nanoscale Resolution

In this video from SC19, Berkeley researchers visualize an entire brain at nanoscale resolution. The work was published in the journal Science. “At the core of the work is the combination of expansion microscopy and lattice light-sheet microscopy (ExLLSM) to capture large super-resolution image volumes of neural circuits using high-speed, nano-scale molecular microscopy.”

Deep Learning State of the Art in 2020

Lex Fridman gave this talk as part of the MIT Deep Learning series. “This lecture is on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general.”

FPGAs and the Road to Reprogrammable HPC

In this special guest feature from Scientific Computing World, Robert Roe writes that FPGAs provide an early insight into possible architectural specialization options for HPC and machine learning. “Architectural specialization is one option to continue to improve performance beyond the limits imposed by the slowdown in Moore’s Law. Using application-specific hardware to accelerate an application, or part of one, allows the use of hardware that can be much more efficient, both in terms of power usage and performance.”
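
To make the idea of architectural specialization concrete, here is a rough sketch (not taken from Roe’s article) of the kind of kernel that maps well onto an FPGA: a finite impulse response (FIR) filter, a fixed pattern of multiply-accumulate operations. The Python reference model below runs the taps serially, as a CPU would; on an FPGA the same loop would typically be unrolled into parallel multipliers feeding an adder tree, delivering one output per clock cycle, often at much lower power.

```python
# Illustrative only: a finite impulse response (FIR) filter, a classic
# candidate for FPGA acceleration. On an FPGA the loop over taps would be
# unrolled into parallel multipliers feeding an adder tree, producing one
# output sample per clock cycle instead of looping on a CPU core.
import numpy as np

def fir_filter(samples: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """Software reference model: y[i] = sum_k taps[k] * x[i - k]."""
    n_taps = len(taps)
    padded = np.concatenate([np.zeros(n_taps - 1, dtype=samples.dtype), samples])
    out = np.empty_like(samples)
    for i in range(len(samples)):
        # The serial multiply-accumulate loop a CPU executes; specialized
        # hardware evaluates all taps in parallel every cycle.
        out[i] = np.dot(padded[i:i + n_taps], taps[::-1])
    return out

if __name__ == "__main__":
    taps = np.array([0.25, 0.5, 0.25])        # simple smoothing filter
    signal = np.sin(np.linspace(0.0, 10.0, 100))
    print(fir_filter(signal, taps)[:5])
```

A high-level synthesis flow would take a similar loop description and generate the pipelined datapath directly, which is part of what makes FPGAs attractive as reprogrammable accelerators for HPC.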