
Liqid Showcases Composable Infrastructure for GPUs at GTC 2017

“The Liqid Composable Infrastructure (CI) Platform is the first solution to support GPUs as a dynamic, assignable, bare-metal resource. With the addition of graphics processing, the Liqid CI Platform delivers the industry’s most fully realized approach to composable infrastructure architecture. With this technology, disaggregated pools of compute, networking, data storage and graphics processing elements can be deployed on demand as bare-metal resources and instantly repurposed when infrastructure needs change.”

Introduction to Parallel Programming with OpenACC – Part 2

In this video, Michael Wolfe from PGI continues his series of tutorials on parallel programming. “This is the second in a series of short videos to introduce you to parallel programming with OpenACC and the PGI compilers, using C++ or Fortran. You will learn by example how to build a simple example program, how to add OpenACC directives, and how to rebuild the program for parallel execution on a multicore system. To get the most out of this video, you should download the example programs and follow along on your workstation.”

One Stop Systems Showcases HPC as a Service at GTC 2017

In this video from GTC 2017, Jaan Mannik from One Stop Systems describes the company’s new HPC as a Service offering. As a maker of high-density GPU expansion chassis, One Stop Systems designs and manufactures high performance computing systems intended to revolutionize the data center by increasing speed to the Internet while reducing cost and impact on the infrastructure.

HPE Introduces the World’s Largest Single-memory Computer

Hewlett Packard Enterprise today introduced the world’s largest single-memory computer, the latest milestone in The Machine research project. “The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over—or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.”

Livestream: 2017 MSST Mass Storage Conference

We are very excited to bring you this livestream of the 2017 MSST Conference in Santa Clara. We’ll be broadcasting all the talks Wednesday, May 17 starting at 8:30am PDT.

Lorena Barba Presents: Data Science for All

“In this new world, every citizen needs data science literacy. UC Berkeley is leading the way on broad curricular immersion with data science, and other universities will soon follow suit. The definitive data science curriculum has not been written, but the guiding principles are computational thinking, statistical inference, and making decisions based on data. ‘Bootcamp’ courses don’t take this approach, focusing mostly on technical skills (programming, visualization, using packages). At many computer science departments, on the other hand, machine-learning courses with multiple prerequisites are only accessible to majors. The key to Berkeley’s model is that it truly aims to be ‘Data Science for All.’”

Video: ARM HPC Ecosystem

Darren Cepulis from ARM gave this talk at the HPC User Forum. “ARM delivers enabling technology behind HPC. The 64-bit design of the ARMv8-A architecture, combined with Advanced SIMD vectorization, is ideal for executing large scientific computing calculations efficiently on ARM HPC machines. In addition, ARM and its partners are working to ensure that all the software tools and libraries needed by both users and systems administrators are provided in readily available, optimized packages.”

OpenHPC: A Comprehensive System Software Stack

Bob Wisniewski from Intel presents: OpenHPC: A Cohesive and Comprehensive System Software Stack. “OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries.”

Leveraging HPC for Real-Time Quantitative Magnetic Resonance Imaging

W. Joe Allen from TACC gave this talk at the HPC User Forum. “The Agave Platform brings the power of high-performance computing into the clinic,” said William (Joe) Allen, a life science researcher for TACC and lead author on the paper. “This gives radiologists and other clinical staff the means to provide real-time quality control, precision medicine, and overall better care to the patient.”

Leaping Forward in Energy Efficiency with the DOME 64-bit μDataCenter

In this slidecast, Ronald P. Luijten from IBM Research in Zurich presents: DOME 64-bit μDataCenter. “I like to call it a datacenter in a shoebox. With the combination of power and energy efficiency, we believe the microserver will be of interest beyond the DOME project, particularly for cloud data centers and Big Data analytics applications.”