GENCI to Collaborate with IBM in Race to Exascale

Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. “The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem.”

Accelerating Science with SciDB from NERSC

SciDB harnesses parallel architectures for fast analysis of terabyte-scale (TB) arrays of scientific data. This collage illustrates some of the scientific areas that have benefited from NERSC's implementation of SciDB, including astronomy, biology, and climate. (Image Credit: Yushu Yao, Berkeley Lab)

Over at NERSC, Linda Vu writes that the SciDB open source database system is a powerful tool for helping scientists wrangle Big Data. “SciDB is an open source database system designed to store and analyze extremely large array-structured data—like pictures from light sources and telescopes, time-series data collected from sensors, spectral data produced by spectrometers and spectrographs, and graph-like structures that illustrate relationships between entities.”
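
To make the array model concrete, here is a minimal sketch in NumPy of the kind of windowed aggregation and filtering that an array database like SciDB executes in parallel at far larger scale. The channel count, window size, and anomaly threshold are illustrative assumptions, not details of NERSC's deployment.

```python
import numpy as np

# Hypothetical sensor time series: 1 million samples from 64 channels.
# In SciDB this would be a 2-D array with dimensions (channel, time);
# here we mimic the same array-structured layout in NumPy.
rng = np.random.default_rng(seed=0)
data = rng.normal(loc=20.0, scale=2.0, size=(64, 1_000_000))

# A windowed mean over the time dimension, analogous to an array-database
# window aggregate: each output cell summarizes 1,000 neighboring samples.
window = 1_000
windowed = data.reshape(64, -1, window).mean(axis=2)

# Select anomalous windows (> 3 standard deviations above the mean),
# the kind of filter an array query engine pushes down to storage.
threshold = windowed.mean() + 3 * windowed.std()
channels, times = np.nonzero(windowed > threshold)
print(f"{channels.size} anomalous windows found")
```

In SciDB itself the same logic would be expressed declaratively as a query and distributed across the cluster, rather than run in a single process as above.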

From Grand Challenges to Critical Workflows

Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. “For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that goes well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the “main” application.”
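
As a rough illustration of that shift, the sketch below interleaves a toy simulation with synchronous, in situ analysis so that only reduced results leave each compute step. Every name and parameter here is hypothetical; production couplings go through workflow and I/O frameworks rather than direct function calls.

```python
import numpy as np

def simulate_step(state: np.ndarray) -> np.ndarray:
    """One hypothetical time step: a periodic 1-D diffusion update."""
    return state + 0.1 * (np.roll(state, 1) - 2 * state + np.roll(state, -1))

def analyze_in_situ(state: np.ndarray, step: int) -> None:
    """Analysis on live data; only a small summary leaves the node."""
    print(f"step {step:3d}: mean={state.mean():.4f} max={state.max():.4f}")

state = np.random.default_rng(1).random(1024)
for step in range(100):
    state = simulate_step(state)
    if step % 10 == 0:          # analysis interleaved with computation,
        analyze_in_situ(state, step)  # not deferred to post-processing
```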

SDSC Gets One-year Extension for Gordon Supercomputer

The National Science Foundation has awarded the San Diego Supercomputer Center (SDSC) a one-year extension to continue operating its Gordon supercomputer, providing continued access to the cluster for a wide range of researchers with data-intensive projects.

IBTA Publishes RoCE Interoperability List from Plugfest

Today the InfiniBand Trade Association (IBTA) announced the completion of the first Plugfest for RDMA over Converged Ethernet (RoCE) solutions and the publication of the RoCE Interoperability List on the IBTA website. Fifteen member companies participated, bringing their RoCE adapters, cables, and switches to the event for testing. Products that successfully passed the testing have been added to the RoCE Interoperability List.

KTH in Sweden Moves to EDR 100Gb/s InfiniBand

Today Mellanox announced its EDR 100Gb/s InfiniBand solutions have been selected by the KTH Royal Institute of Technology for use in its PDC Center for High Performance Computing. Mellanox’s robust and flexible EDR InfiniBand solution offers higher interconnect speed, lower latency, and smart accelerations to maximize efficiency, and will enable the PDC Center to achieve world-leading data center performance across a variety of applications, including advanced modeling of climate change, brain function, and protein-drug interactions.

Dell Opens Line of Business for Hyperscale Datacenters

Today Dell announced a new business unit aligned around hyperscale datacenters. “The Datacenter Scalable Solutions (DSS) group is designed to meet the specific needs of web tech, telecommunications service providers, hosting companies, oil and gas, and research organizations. These businesses often have high-volume technology needs and supply chain requirements in order to deliver business innovation. With a new operating model built on agile, scalable, and repeatable processes, Dell can now uniquely provide this set of customers with the technology they need, purposefully designed to their specifications, and delivered when they want it.”

Titan Supercomputer Powers the Future of Forecasting

Knowing how the weather will behave in the near future is indispensable for countless human endeavors. Now, researchers at ECMWF are leveraging the computational power of the Titan supercomputer at Oak Ridge to improve weather forecasting.

SUPER Project Aims at Efficient Supercomputing for Scientists

“SUPER builds on past successes and now includes research into performance auto-tuning, energy efficiency, resilience, multi-objective optimization, and end-to-end tool integration. Leading the project dovetails neatly with Oliker’s research interests, which include optimization of scientific methods on emerging multi-core systems, ultra-efficient designs of domain-optimized computational platforms and performance evaluation of extreme-scale applications on leading supercomputers.”
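
For context, performance auto-tuning in its simplest form means empirically searching a parameter space and keeping the fastest variant. The toy sketch below does exactly that for an assumed blocking factor; it is a stand-in for the idea, not SUPER's tooling, and the candidate block sizes are arbitrary.

```python
import time
import numpy as np

def blocked_sum(a: np.ndarray, block: int) -> float:
    """Sum a large array in chunks of `block` elements."""
    return sum(a[i:i + block].sum() for i in range(0, a.size, block))

# Empirical search: time each candidate and keep the fastest.
# Real auto-tuners explore far larger spaces with smarter search.
a = np.random.default_rng(2).random(10_000_000)
best_block, best_time = None, float("inf")
for block in (1_000, 10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    blocked_sum(a, block)
    elapsed = time.perf_counter() - start
    if elapsed < best_time:
        best_block, best_time = block, elapsed
print(f"fastest block size: {best_block} ({best_time:.4f}s)")
```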

Research Demands More Compute Power and Faster Storage for Complex Computational Applications

Many universities, private research labs, and government research agencies have begun using High Performance Computing (HPC) servers, compute accelerators, and flash storage arrays to accelerate a wide range of research across math, science, and engineering disciplines. These labs use GPUs for parallel processing and flash memory for storing large datasets. Many universities also run shared HPC labs where students and researchers can analyze and store vast amounts of data more quickly.
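
A minimal sketch of the GPU-offload pattern described above, using CuPy; it assumes a CUDA-capable GPU with the cupy package installed, and the array size is illustrative only.

```python
import cupy as cp

# Move a large dataset into GPU memory and run a parallel reduction there.
data = cp.random.random((4096, 4096), dtype=cp.float32)
col_means = data.mean(axis=0)     # computed in parallel on the GPU
result = cp.asnumpy(col_means)    # copy only the reduced result back
print(result[:5])
```

The design point is the same one shared HPC labs rely on: keep large data resident on the accelerator and move only reduced results back to the host.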