“We’re now providing LSF in the Cloud as a service to our customers because their workloads are getting larger over time and they’re converging; HPC is now converging with Analytics. And even though they provision for their average load, they can never provision for the spikes or for new projects. So we’re helping our clients out by providing the services in the Cloud, where they can get LSF or Platform Symphony, or Spectrum Scale.”
Dan Stanzione from TACC presented this talk at the DDN User Group at SC15. “TACC is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin.”
The HPC Advisory Council Stanford Conference 2016 has issued its Call for Participation. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. “The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates.”
“We have enabled virtualization for HPC, but it’s important to bring the benefits of virtualization to end researchers in a way they can use it, right? So what we have done is create this solution with VMware High-Performance Analytics, which allows researchers to author their own workloads; they can collaborate on them, they can clone them, and they can share them with other researchers. And they can modify their workloads, they can fine-tune them.”
In this video from SC15, Intel’s Diane Bryant discusses how next-generation supercomputers are transforming HPC and presenting exciting opportunities to advance scientific research and discovery to deliver far-reaching impacts on society. As a frequent speaker on the future of technology, Bryant draws on her experience running Intel’s Data Center Group, which includes the HPC business segment, and products ranging from high-end co-processors for supercomputers to big data analytics solutions to high-density systems for the cloud.
Intel in Oregon is seeking an HPC Software Intern in our Job of the Week. “If you are interested in being on the team that builds the world’s fastest supercomputer, read on. Our team is designing how we integrate new HW and SW, validate extreme scale systems, and debug challenges that arise. The team consists of engineers who love to learn, love a good challenge, and aren’t afraid of a changing environment. We need someone who can help us create and execute codes that will be used to validate and debug our system from first silicon bring-up through at-scale deployment. The successful candidate will have experience creating code in the Linux environment: C or Python. If you have the right skills, you will help build systems utilized by the best minds on the planet to solve grand challenge science problems such as climate research, bio-medical research, genome analysis, renewable energy, and other areas that require the world’s fastest supercomputers to tackle. Be part of the first to get to Exascale!”
Last week at SC15, Numascale announced the successful installation of a large shared memory Numascale/Supermicro/AMD system at a customer datacenter facility in North America. The system is the first part of a large cloud computing facility for analytics and simulation of sensor data combined with historical data. “The Numascale system, installed over the last two weeks, consists of 108 Supermicro 1U servers connected in a 3D torus with NumaConnect, using three cabinets with 36 servers apiece in a 6x6x3 topology. Each server has 48 cores in three AMD Opteron 6386 CPUs and 192 GBytes of memory, providing a single system image and 20.7 TBytes of memory to all 5184 cores. The system was designed to meet user demand for ‘very large memory’ hardware solutions running a standard single image Linux OS on commodity x86 based servers.”
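The quoted figures are internally consistent: 48 cores across three CPUs implies 16-core Opteron 6386 parts, and a 6x6x3 torus yields the 108 servers and 5184 cores cited. A minimal Python sketch of the arithmetic, for reference only:

```python
# Sanity check of the Numascale system figures quoted above.
nodes = 6 * 6 * 3                # 6x6x3 torus of Supermicro 1U servers -> 108
cores_per_node = 3 * 16          # three 16-core AMD Opteron 6386 CPUs
mem_per_node_gb = 192            # GBytes of memory per server

total_cores = nodes * cores_per_node           # 108 * 48 = 5184 cores
total_mem_tb = nodes * mem_per_node_gb / 1000  # 20736 GB, i.e. ~20.7 TBytes

print(f"{nodes} nodes, {total_cores} cores, {total_mem_tb:.1f} TB shared memory")
```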
In this podcast, Jorge Salazar from TACC interviews two winners of the 2015 ACM Gordon Bell Prize, Omar Ghattas and Johann Rudi of the Institute for Computational Engineering and Sciences, UT Austin. As part of the discussion, Ghattas describes how parallelism and exascale computing will propel science forward.
In this video from SC15, Rich Brueckner from insideHPC moderates a panel discussion with Hewlett Packard Enterprise HPC customers. “Government labs, as well as public and private universities worldwide, are using HPE Compute solutions to conduct research across scientific disciplines, develop new drugs, discover renewable energy sources and bring supercomputing to nontraditional users and research communities.”