
Advancing HPC with Collaboration & Co-design


In this special guest feature from Scientific Computing World, Tom Wilkie reports on two US initiatives for future supercomputers, announced at the ISC High Performance conference in Frankfurt in July.

TACC Puts Chameleon Cloud Testbed into Production

Kate Keahey, Chameleon principal investigator, computer scientist at Argonne National Laboratory, and CI Senior Fellow

Today the Texas Advanced Computing Center (TACC) announced that its new Chameleon testbed is in full production for researchers across the country. Designed to help investigate and develop the promising future of cloud-based science, the NSF-funded Chameleon is a configurable, large-scale environment for testing and demonstrating new concepts.

Intel and Micron Announce 3D XPoint Non-Volatile Memory

3D XPoint™ technology is up to 1,000x faster than NAND, and an individual die can store 128 Gb of data.

Today Intel Corporation and Micron Technology unveiled 3D XPoint technology, a non-volatile memory that has the potential to revolutionize any device, application or service that benefits from fast access to large sets of data. Now in production, 3D XPoint technology is a major breakthrough in memory process technology and the first new memory category since the introduction of NAND flash in 1989.
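As a quick back-of-the-envelope check of the quoted density (illustrative arithmetic only, not an Intel or Micron specification), 128 Gb per die works out to 16 GB of storage per die:

# Illustrative arithmetic only: convert the quoted 128 Gb (gigabits) per
# 3D XPoint die into gigabytes; not a product specification.
die_capacity_gigabits = 128
die_capacity_gigabytes = die_capacity_gigabits / 8  # 8 bits per byte
print(f"{die_capacity_gigabits} Gb per die = {die_capacity_gigabytes:.0f} GB per die")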

Univa Joins CNCF Cloud Native Computing Foundation


Today Univa joins Google, IBM, and other world-class companies as a founding member of the Cloud Native Computing Foundation (CNCF). The new CNCF organization will accelerate the development of cloud native applications and services by advancing a technology stack for data center containerization and microservices.

IBM and NVIDIA Launch Centers of Excellence at ORNL and LLNL


Today IBM, along with NVIDIA and two U.S. Department of Energy national laboratories, announced a pair of Centers of Excellence for supercomputing – one at Lawrence Livermore National Laboratory and the other at Oak Ridge National Laboratory. The collaborations are in support of IBM’s supercomputing contract with the U.S. Department of Energy. They will enable advanced, large-scale scientific and engineering applications both to support DOE missions and to prepare for the Summit and Sierra supercomputer systems, to be delivered to Oak Ridge and Lawrence Livermore respectively in 2017 and to be operational in 2018.

Radio Free HPC Looks at Supercomputing Global Flood Maps


In this podcast, the Radio Free HPC team looks at how the KatRisk startup is using GPUs on the Titan supercomputer to calculate global flood maps. “KatRisk develops event-based probabilistic models to quantify portfolio aggregate losses and exceeding probability curves. Their goal is to develop models that fully correlate all sources of flood loss including explicit consideration of tropical cyclone rainfall and storm surge.”
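To make the idea of an exceedance probability curve concrete, here is a minimal sketch in Python using hypothetical simulated losses (this is illustrative only, not KatRisk's actual model): given simulated annual aggregate losses, the curve gives, for each loss level, the probability that losses in a year meet or exceed that level.

# Minimal sketch of an exceedance probability (EP) curve from simulated
# annual aggregate losses. Hypothetical data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical simulated annual aggregate flood losses (in $M) over 10,000 years
annual_losses = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)

# Sort losses in descending order; the k-th largest loss is assigned
# an exceedance probability of roughly k / (N + 1)
sorted_losses = np.sort(annual_losses)[::-1]
exceedance_prob = np.arange(1, len(sorted_losses) + 1) / (len(sorted_losses) + 1)

# Interpolate the loss at a 1-in-100-year return period (EP = 1%)
loss_100yr = np.interp(0.01, exceedance_prob, sorted_losses)
print(f"1-in-100-year aggregate loss: ${loss_100yr:.1f}M")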

IBM Research Alliance Develops First 7nm Node


Today IBM Research announced that, working with alliance partners at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering, it has produced the semiconductor industry’s first 7nm node test chips with functional transistors. According to IBM, the breakthrough underscores the company’s continued leadership and long-term commitment to semiconductor technology research.

Rolls-Royce Joins JISC Industrial Supercomputing Initiative


Today JISC in the U.K. announced that Rolls-Royce is the first company to join its industrial supercomputing initiative. Designed to break down barriers between industry and academia, the initiative will give Rolls-Royce easy access to supercomputing equipment at HPC Midlands, a centre backed by the Engineering and Physical Sciences Research Council (EPSRC).

Top HPC Centers Meet in Barcelona at JLESC

JLESC 2015: 3rd Joint Laboratory for Extreme-Scale Computing

Top researchers from six of the largest supercomputing centers met in Barcelona at the beginning of this month for the Joint Laboratory for Extreme-Scale Computing (JLESC) workshop to discuss the challenges facing future supercomputers.

insideBIGDATA Guide to Scientific Research

HPC Life Sciences

Daniel Gutierrez, Managing Editor of insideBIGDATA, has put together a terrific Guide to Scientific Research. The goal of this paper is to provide a road map for scientific researchers wishing to capitalize on the rapid growth of big data technology for collecting, transforming, analyzing, and visualizing large scientific data sets.