12 International Student Teams to Face Off at HPCAC-ISC 2016 Student Cluster Competition


Today the HPC Advisory Council announced that 12 university teams from around the world will compete in the HPCAC-ISC 2016 Student Cluster Competition at the ISC 2016 conference next June in Frankfurt.

Hewlett Packard Enterprise, Intel and PSC: Driving Innovation in HPC


In this video from SC15, Bill Mannel from HPE, Charlie Wuischpard from Intel, and Nick Nystrom from the Pittsburgh Supercomputing Center discuss their collaboration on High Performance Computing. Early next year, Hewlett Packard Enterprise will deploy the Bridges supercomputer, based on Intel technology, for breakthrough data-centric computing at PSC. “Welcome to Bridges, a new concept in HPC – a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. It is a richly connected set of interacting systems offering a flexible mix of gateways (web portals), Hadoop and Spark ecosystems, batch processing and interactivity.”

Towards the Convergence of HPC and Big Data — Data-Centric Architecture at TACC


Dan Stanzione from TACC presented this talk at the DDN User Group at SC15. “TACC is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin.”

Three German Institutes Deploy NEC’s SX-ACE Vector Supercomputers


Today NEC Corporation announced that SX-ACE vector supercomputers delivered to the University of Kiel, Alfred Wegener Institute, and the High Performance Computing Center Stuttgart have begun operating and contributing to research.

Rambus Advances Smart Data Acceleration Research Program with LANL


Last week at SC15, Rambus announced that it has partnered with Los Alamos National Laboratory (LANL) to evaluate elements of its Smart Data Acceleration (SDA) Research Program. The SDA platform has been deployed at LANL to improve the performance of in-memory databases, graph analytics, and other Big Data applications.

Video: SC15 HPC Matters Plenary Session with Intel’s Diane Bryant


In this video from SC15, Intel’s Diane Bryant discusses how next-generation supercomputers are transforming HPC and presenting exciting opportunities to advance scientific research and discovery to deliver far-reaching impacts on society. As a frequent speaker on the future of technology, Bryant draws on her experience running Intel’s Data Center Group, which includes the HPC business segment, and products ranging from high-end co-processors for supercomputers to big data analytics solutions to high-density systems for the cloud.

Job of the Week: HPC Software Intern at Intel


Intel in Oregon is seeking an HPC Software Intern in our Job of the Week. “If you are interested in being on the team that builds the world’s fastest supercomputer, read on. Our team is designing how we integrate new HW and SW, validate extreme scale systems, and debug challenges that arise. The team consists of engineers who love to learn, love a good challenge, and aren’t afraid of a changing environment. We need someone who can help us with creating and executing codes that will be used to validate and debug our system from first Si bring-up through at-scale deployment. The successful candidate will have experience creating code in the Linux environment: C or Python. If you have the right skills, you will help build systems utilized by the best minds on the planet to solve grand challenge science problems such as climate research, bio-medical research, genome analysis, renewable energy, and other areas that require the world’s fastest supercomputers to tackle. Be part of the first to get to Exascale!”

Numascale Teams with Supermicro & AMD for Large Shared Memory System


Last week at SC15, Numascale announced the successful installation of a large shared memory Numascale/Supermicro/AMD system at a customer datacenter facility in North America. The system is the first part of a large cloud computing facility for analytics and simulation of sensor data combined with historical data. “The Numascale system, installed over the last two weeks, consists of 108 Supermicro 1U servers connected in a 3D torus with NumaConnect, using three cabinets with 36 servers apiece in a 6x6x3 topology. Each server has 48 cores in three AMD Opteron 6386 CPUs and 192 GBytes of memory, providing a single system image and 20.7 TBytes to all 5184 cores. The system was designed to meet user demand for ‘very large memory’ hardware solutions running a standard single image Linux OS on commodity x86 based servers.”
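The system-wide figures in the announcement follow directly from the per-server numbers. A quick back-of-the-envelope check (a sketch, not from Numascale; variable names are ours) confirms the quoted totals:

```python
# Verify the Numascale system totals from the per-server figures
# quoted in the announcement above.
servers = 6 * 6 * 3            # 6x6x3 torus topology -> 108 servers
cores_per_cpu = 16             # AMD Opteron 6386: 16 cores per CPU
cpus_per_server = 3            # 3 x 16 = 48 cores per server
mem_per_server_gb = 192

total_cores = servers * cores_per_cpu * cpus_per_server
total_mem_tb = servers * mem_per_server_gb / 1000   # decimal TBytes

print(total_cores)    # 5184 cores, as quoted
print(total_mem_tb)   # 20.736 TBytes, i.e. the ~20.7 TBytes quoted
```

With NumaConnect presenting a single system image, that 20.7 TBytes is visible as one shared address space across all 5184 cores.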

Podcast: Supercomputing the Deep Earth with the Gordon Bell Prize Winners


In this podcast, Jorge Salazar from TACC interviews two winners of the 2015 ACM Gordon Bell Prize, Omar Ghattas and Johann Rudi of the Institute for Computational Engineering and Sciences, UT Austin. As part of the discussion, Ghattas describes how parallelism and exascale computing will propel science forward.

Video: Hewlett Packard Enterprise HPC Customer Panel at SC15


In this video from SC15, Rich Brueckner from insideHPC moderates a panel discussion with Hewlett Packard Enterprise HPC customers. “Government labs, as well as public and private universities worldwide, are using HPE Compute solutions to conduct research across scientific disciplines, develop new drugs, discover renewable energy sources and bring supercomputing to nontraditional users and research communities.”