IDC has published the agenda for its next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. “Don’t miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you’ll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics.”
Leo Reiter from Nimbix presented this talk at the HPC User Forum. “Unlike conventional commodity cloud platforms, JARVICE and the Nimbix Cloud are purpose-built to run any processing job at speed and scale. It means that as your problems get more complex, JARVICE simply expands to handle them.”
Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from various sources—satellites, aircraft, field measurements, and various other programs.”
Joseph Lombardo from UNLV presented this talk at the PBS Works User Group. “Lombardo will highlight results from an Alzheimer’s research project that benefited from using PBS Professional. He will then describe the NSCEE’s new system at the Supernap and how this system can be used to advance research for HPC users in both academia/R&D and commercial industry. Lombardo will also highlight two emerging projects: the new School of Medicine and the new Technology Park.”
In this video from the Disruptive Technologies Session at the 2015 HPC User Forum, Intel’s Ralph Biesemeyer presents: Intel 3D XPoint Technology.
“For decades, the industry has searched for ways to reduce the lag time between the processor and data to allow much faster analysis,” said Rob Crooke, senior vice president and general manager of Intel’s Non-Volatile Memory Solutions Group. “This new class of non-volatile memory achieves this goal and brings game-changing performance to memory and storage solutions.”
“DMF has been protecting data in some of the industry’s largest virtualized environments all over the world for more than 20 years, enabling them to maintain uninterrupted online access to data. Some customers have installations with over 100PB of online data capacity and billions of files, which they are able to manage at a fraction of the cost of conventional online architectures.”
“The HP Apollo 8000 supercomputing platform approaches HPC from an entirely new perspective as the system is cooled directly with warm water. This is done through a ‘dry-disconnect’ cooling concept that has been implemented with the simple but efficient use of heat pipes. Unlike cooling fans, which are designed for maximum load, the heat pipes can be optimized by administrators. The approach allows significantly greater performance density, cutting energy consumption in half and creating synergies with other building energy systems, relative to a strictly air-cooled system.”
“NOAA will acquire software engineering support and associated tools to re-architect NOAA’s applications to run efficiently on next-generation fine-grain HPC architectures.” From a recent procurement document: “Fine-grain architecture (FGA) is defined as: a processing unit that supports more than 60 concurrent threads in hardware (e.g., GPU or a large core-count device).”
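By that definition, whether a device counts as fine-grain comes down to a simple hardware-thread threshold. A minimal sketch of that classification, assuming the device names and thread counts shown are purely illustrative (they are not from the procurement document):

```python
# Sketch: classify devices against NOAA's fine-grain architecture (FGA)
# definition: more than 60 concurrent threads in hardware.
# All device entries below are illustrative assumptions, not real benchmarks.

FGA_THREAD_THRESHOLD = 60  # from the procurement document's definition

def is_fine_grain(hardware_threads: int) -> bool:
    """Return True if a device qualifies as fine-grain (FGA)."""
    return hardware_threads > FGA_THREAD_THRESHOLD

# Hypothetical devices with assumed hardware-thread counts.
devices = {
    "dual-socket 24-core CPU, 2-way SMT": 96,
    "single 16-core CPU, no SMT": 16,
    "GPU with thousands of hardware threads": 2048,
}

for name, threads in devices.items():
    label = "FGA" if is_fine_grain(threads) else "conventional"
    print(f"{name}: {label}")
```

Note that exactly 60 threads would not qualify; the definition requires strictly more than 60.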
Bob Sorensen from IDC presented this talk at the HPC User Forum. In a recent study, IDC assessed the EU’s progress towards its 2012 action plan and made recommendations for funding exascale systems and fostering industrial HPC in the coming decade.
Earl Joseph from IDC presented this talk at the HPC User Forum. “This study investigates how HPC investments can improve economic success and increase scientific innovation. This research is focused on the common good and should be useful to DOE, other government agencies, industry, and academia.”