Last week at SC15, Rambus announced that it has partnered with Los Alamos National Laboratory (LANL) to evaluate elements of its Smart Data Acceleration (SDA) Research Program. The SDA platform has been deployed at LANL to improve the performance of in-memory databases, graph analytics, and other Big Data applications.
Baidu’s Chief Scientist Andrew Ng has launched a social media campaign to inspire people to study machine learning. “Regardless of where you learned Machine Learning, if it has had an impact on you or your work, please share your story on Facebook or Twitter in a short written or video post. I will invite the people who shared the 5 most inspirational stories to join me in a conversation on Google Hangout about the future of machine learning.”
In this video from SC15, Intel’s Diane Bryant discusses how next-generation supercomputers are transforming HPC and presenting exciting opportunities to advance scientific research and discovery to deliver far-reaching impacts on society. As a frequent speaker on the future of technology, Bryant draws on her experience running Intel’s Data Center Group, which includes the HPC business segment, and products ranging from high-end co-processors for supercomputers to big data analytics solutions to high-density systems for the cloud.
Intel in Oregon is seeking an HPC Software Intern in our Job of the Week. “If you are interested in being on the team that builds the world’s fastest supercomputer, read on. Our team is designing how we integrate new HW and SW, validate extreme scale systems, and debug challenges that arise. The team consists of engineers who love to learn, love a good challenge, and aren’t afraid of a changing environment. We need someone who can help us create and execute codes that will be used to validate and debug our system from first Si bring-up through at-scale deployment. The successful candidate will have experience creating code in the Linux environment: C or Python. If you have the right skills, you will help build systems utilized by the best minds on the planet to solve grand challenge science problems such as climate research, bio-medical research, genome analysis, renewable energy, and other areas that require the world’s fastest supercomputers to tackle. Be part of the first to get to Exascale!”
Last week at SC15, Numascale announced the successful installation of a large shared memory Numascale/Supermicro/AMD system at a customer datacenter facility in North America. The system is the first part of a large cloud computing facility for analytics and simulation of sensor data combined with historical data. “The Numascale system, installed over the last two weeks, consists of 108 Supermicro 1U servers connected in a 3D torus with NumaConnect, using three cabinets with 36 servers apiece in a 6x6x3 topology. Each server has 48 cores in three AMD Opteron 6386 CPUs and 192 GBytes of memory, providing a single system image and 20.7 TBytes of memory to all 5184 cores. The system was designed to meet user demand for ‘very large memory’ hardware solutions running a standard single image Linux OS on commodity x86 based servers.”
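The aggregate figures in the announcement follow directly from the per-node specs. As a quick sanity check, a minimal sketch (all numbers taken from the quote above; the variable names are ours):

```python
# Sanity-check the published Numascale system totals from per-node specs.
servers = 108                  # Supermicro 1U nodes
cabinets = 3
servers_per_cabinet = 36
torus = (6, 6, 3)              # 3D torus topology
cores_per_server = 48          # three AMD Opteron 6386 CPUs per node
mem_per_server_gb = 192

# Cabinet layout and torus dimensions both account for all 108 nodes.
assert cabinets * servers_per_cabinet == servers
assert torus[0] * torus[1] * torus[2] == servers

total_cores = servers * cores_per_server             # 5184 cores
total_mem_tb = servers * mem_per_server_gb / 1000    # 20.736 TB, i.e. ~20.7

print(total_cores, round(total_mem_tb, 1))  # 5184 20.7
```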
In this podcast, Jorge Salazar from TACC interviews two winners of the 2015 ACM Gordon Bell Prize, Omar Ghattas and Johann Rudi of the Institute for Computational Engineering and Sciences, UT Austin. As part of the discussion, Ghattas describes how parallelism and exascale computing will propel science forward.
In this video from SC15, Rich Brueckner from insideHPC moderates a panel discussion with Hewlett Packard Enterprise HPC customers. “Government labs, as well as public and private universities worldwide, are using HPE Compute solutions to conduct research across scientific disciplines, develop new drugs, discover renewable energy sources and bring supercomputing to nontraditional users and research communities.”
Today Russia’s RSC Group announced that Team TUMuch Phun from the Technical University of Munich (TUM) won the Highest Linpack Award in the SC15 Student Cluster Competition. The enthusiastic students achieved 7.1 Teraflops on the Linpack benchmark using an RSC PetaStream cluster with computing nodes based on Intel Xeon Phi. The TUM team took third place overall among the nine teams that participated in the SCC at SC15, as the only European representative in the challenge.