MERIL-2 Project Launches Tool for Mapping Research Infrastructure in Europe

Today the European Commission MERIL-2 project introduced a new data visualization tool that lets users interactively explore all of the European research infrastructures in the MERIL database. With the new tool, users can quickly see and explore data on the European research landscape, such as research infrastructure (RI) size and location, user profiles, and research capabilities of over 1,000 research facilities across the continent.

Codeplay Releases First Fully-Conformant SYCL 1.2.1 Solution for C++

SYCL is an open standard developed by the Khronos Group that enables developers to write code for heterogeneous systems using standard C++. Developers are looking for ways to accelerate their applications without having to write optimized, processor-specific code. SYCL is the industry standard for C++ acceleration, giving developers a platform to write high-performance code in standard C++ and unlocking the performance of accelerators and specialized processors from companies such as AMD, Intel, Renesas, and Arm.

Handling and Processing Data from the Cherenkov Telescope Array

Etienne Lyard from the University of Geneva, Switzerland gave this talk at PASC18. “The Cherenkov Telescope Array (CTA) will be the world’s largest and most sensitive high-energy gamma-ray observatory. Composed of more than 100 telescopes of different sizes between 4 and 23 meters in diameter, it will operate from two separate sites in Chile and in the Canary Islands. It will generate up to 10PB of raw data per year that will be stored in a distributed archive. This talk will outline the current status, plans and challenges that we face to implement the analysis and data pipeline of CTA.”

PASC18 Panel Discussion: Big Data vs. Fast Computation

“The panelists will discuss the critical challenges facing key HPC application areas in the next 5-10 years, based on a mix of knowledge and speculation. They will explore whether we need to make radical changes to our practices, methods, tools, and techniques to be able to use modern resources and make faster and bigger progress on our scientific problems.”

Julia 1.0 Release Opens the Doors for a Connected World

Today Julia Computing announced the Julia 1.0 programming language release. As the first complete, reliable, stable and forward-compatible Julia release, version 1.0 is the fastest, simplest and most productive open-source programming language for scientific, numeric and mathematical computing. “During the last six and a half years, Julia has reached more than 2 million downloads and early adopters have already put Julia into production to power self-driving cars, robots, 3D printers and applications in precision medicine, augmented reality, genomics, energy trading, machine learning, financial risk management and space mission planning.”

Dr. Eng Lim Goh presents: Prediction – Use Science or History?

Dr. Eng Lim Goh from HPE gave this keynote talk at PASC18. “Traditionally, scientific laws have been applied deductively – from predicting the performance of a pacemaker before implant, downforce of a Formula 1 car, pricing of derivatives in finance or the motion of planets for a trip to Mars. With Artificial Intelligence, we are starting to also use the data-intensive inductive approach, enabled by the re-emergence of Machine Learning which has been fueled by decades of accumulated data.”

David Bader on Real World Challenges for Big Data Analytics

In this video from PASC18, David Bader from Georgia Tech summarizes his keynote talk on Big Data Analytics. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms, and development of frameworks for solving these real-world problems on high performance computers.”

Video: Large Scale Training for Model Optimization

Jakub Tomczak from the University of Amsterdam gave this talk at PASC18. “Deep generative models allow us to learn hidden representations of data and generate new examples. There are two major families of models that are exploited in current applications: Generative Adversarial Networks (GANs), and Variational Auto-Encoders (VAE). We will point out advantages and disadvantages of GANs and VAE. Some of the most promising applications of deep generative models will be shown.”
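As a rough illustration of the VAE side of that comparison, here is a minimal NumPy sketch of the two terms of the standard VAE training objective (the negative evidence lower bound): a reconstruction term plus a KL regularizer pulling the encoder's Gaussian posterior toward the prior. The function names and the Bernoulli decoder are illustrative assumptions, not code from the talk.

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def bernoulli_reconstruction_loss(x, x_hat, eps=1e-7):
    """Negative log-likelihood of binary data under the decoder's Bernoulli outputs."""
    x_hat = np.clip(x_hat, eps, 1.0 - eps)  # guard against log(0)
    return -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat), axis=-1)

def negative_elbo(x, x_hat, mu, logvar):
    """Per-example VAE loss: reconstruction term plus KL regularizer."""
    return bernoulli_reconstruction_loss(x, x_hat) + kl_to_standard_normal(mu, logvar)

# When the approximate posterior already equals the prior, the KL term vanishes.
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # → 0.0
```

In a real model, `mu`, `logvar`, and `x_hat` would come from trained encoder and decoder networks; this sketch only shows how the two loss terms combine.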

Atomicus Chart Software Brings Easy Analytics to Scientists

Software vendor Atomicus is approaching the release of AtomicusChart, an advanced software component developed for scientists researching physical, chemical, and biological phenomena, and for those who need a convenient tool to present results in scientific formats. “The ATOMICUS CHART is a product from Atomicus, which can be re-used and integrated in any software requiring high-speed graphics for large volumes of data (including big data) and dedicated to the needs of analytical applications.”

Massive-Scale Analytics Applied to Real-World Problems

David Bader from Georgia Tech gave this talk at PASC18. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams.”