Hyperion Research Expands Analyst Team

Hyperion Research today announced that it is adding two new analysts to support continued growth and new business opportunities. The analyst firm provides thought leadership and practical guidance for users, vendors, and other members of the HPC community by focusing on key market and technology trends across government, industry, commerce, and academia.

Data Scientist Thomas Thurston to Speak at HPC User Forum in New Jersey

Venture Capitalist and Data Scientist Thomas Thurston is slated to speak at the upcoming HPC User Forum in Princeton, New Jersey. In his talk, “Using HPC-enabled AI to Guide Investment Strategies for Finding and Funding Startups,” Thurston will describe how his fund uses technology to gain unique insights into early-stage startups that otherwise disclose little or no public data. He will highlight counter-intuitive insights about what is, and what isn’t, predictive of new business success, discuss the current challenges of analyzing potential startup investments, and examine how companies are grappling with the promises and perils of executive decision making in a world of increasingly advanced computing.

Video: Research on Blue Waters

Dr. Brett Bode from NCSA gave this talk at the HPC User Forum. “Blue Waters is one of the most powerful supercomputers in the world and is one of the fastest supercomputers on a university campus. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos.”

Video: An Update on HPC at CSCS

Thomas Schulthess from CSCS gave this talk at the HPC User Forum. “CSCS has a strong track record in supporting the processing, analysis and storage of scientific data, and is investing heavily in new tools and computing systems to support data science applications. For more than a decade, CSCS has been involved in the analysis of the many petabytes of data produced by scientific instruments such as the Large Hadron Collider (LHC) at CERN. Supporting scientists in extracting knowledge from structured and unstructured data is a key priority for CSCS.”

Video: The Pawsey Supercomputing Centre, SKA, and HPC in Australia

Mark Stickells from the Pawsey Supercomputing Centre gave this talk at the HPC User Forum. “The Pawsey Supercomputing Centre is one of two Tier-1 High Performance Computing facilities in Australia, whose primary function is to accelerate scientific research for the benefit of the nation. Our service and expertise in supercomputing, data, cloud services and visualisation enable research across a range of domains including astronomy, life sciences, medicine, energy, resources and artificial intelligence.”

Video: What Can HPC on AWS Do?

Ian Colle from Amazon gave this talk at the HPC User Forum. “AWS provides the most elastic and scalable cloud infrastructure to run your HPC applications. With virtually unlimited capacity, engineers, researchers, and HPC system owners can innovate beyond the limitations of on-premises HPC infrastructure. AWS delivers an integrated suite of services that provides everything needed to quickly and easily build and manage HPC clusters in the cloud to run the most compute intensive workloads across various industry verticals.”

Recent Research on Using Public Clouds for HPC Workloads

Alex Norton from Hyperion Research gave this talk at the HPC User Forum. “Hyperion Research provides data-driven research, analysis and recommendations for technologies, applications, and markets in high performance computing and emerging technology areas to help organizations worldwide make effective decisions and seize growth opportunities. Research includes market sizing and forecasting, share tracking, segmentation, technology and related trend analysis, and both user & vendor analysis for multi-user technical server technology used for HPC and HPDA (high performance data analysis).”

Video: Exascale Computing Project Application Development

Andrew Siegel from Argonne gave this talk at the HPC User Forum. “The Exascale Computing Project is chartered with accelerating delivery of a capable exascale computing ecosystem to provide breakthrough modeling and simulation solutions that address the most critical challenges in scientific discovery, energy assurance, economic competitiveness, and national security.”

An Update on the European Processor Initiative

Jean-Marc Denis from EPI gave this talk at the HPC User Forum. “The EPI project aims to deliver a high-performance, low-power processor, implementing vector instructions and specific accelerators with high bandwidth memory access. The EPI processor will also meet high security and safety requirements. This will be achieved through intensive use of simulation, development of a complete software stack and tape-out in the most advanced semiconductor process node.”

How the Results of Summit and Sierra Are Influencing Exascale

Al Geist from ORNL gave this talk at the HPC User Forum. “Two DOE national laboratories are now home to the fastest supercomputers in the world, according to the TOP500 List, a semiannual ranking of the world’s fastest computing systems. The IBM Summit system at Oak Ridge National Laboratory is currently ranked number one, while Lawrence Livermore National Laboratory’s IBM Sierra system has climbed to the number two spot.”