Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech

“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from Nvidia. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.”

Overcoming the Learning Curve of New Processor Architectures

High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding, data-intensive industry. As financial models grow more complex and greater volumes of data must be processed and analyzed every day, firms are increasingly turning to HPC solutions to exploit the latest performance improvements in processor technology. Suresh Aswani, Senior Manager, Solutions Marketing, at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures.

insideBIGDATA Guide to Use of Big Data on an Industrial Scale

In this document, our focus is on “industrializing” big data infrastructure—bringing operational maturity to the Hadoop data ecosystem, making it easier and more cost-effective to deploy at enterprise scale, and moving companies from the proof-of-concept stage into production-ready deployments. Download this Guide to Big Data on an Industrial Scale to learn more.

Remote Visualization Accelerating Innovation Across Multiple Industries

Remote visualization tools allow employees to dramatically improve productivity by accessing business-critical data and programs regardless of their location. These technologies let users launch software applications on the server side and display the results locally, leveraging the bandwidth and compute power of the cluster while avoiding the latency and security risks of downloading large amounts of data onto their local client.
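
To make the pattern concrete, here is a minimal sketch of server-side rendering in Python, assuming matplotlib and NumPy are available on the cluster node; it is illustrative only and not tied to any particular remote visualization product. The bulky dataset stays on the server, and only a compressed image would travel to the client.

```python
# Illustrative sketch of server-side rendering (not any vendor's API):
# the full dataset stays on the cluster; only a compressed image is sent to the client.
import io

import matplotlib
matplotlib.use("Agg")            # offscreen rendering -- no display needed on the server
import matplotlib.pyplot as plt
import numpy as np

def render_remote_frame(num_points: int = 2_000_000) -> bytes:
    """Render a large point set server-side and return PNG bytes for a thin client."""
    data = np.random.standard_normal((num_points, 2))   # stands in for bulky simulation output
    fig, ax = plt.subplots(figsize=(6, 4), dpi=100)
    ax.hexbin(data[:, 0], data[:, 1], gridsize=200)     # aggregate millions of points into one image
    ax.set_title("Server-side rendering: only this image leaves the cluster")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")                       # a few hundred KB instead of ~32 MB of raw data
    plt.close(fig)
    return buf.getvalue()

if __name__ == "__main__":
    png_bytes = render_remote_frame()
    print(f"Frame size sent to client: {len(png_bytes) / 1024:.0f} KB")
```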

Artificial Intelligence Becomes More Accessible

With the advent of heterogeneous computing systems that pair main CPUs with attached accelerators able to ingest and process tremendous amounts of data and run complex algorithms, artificial intelligence (AI) technologies are beginning to take hold in a variety of industries. Massive datasets can now be used to drive innovation in areas such as autonomous driving, power grid control, and data-driven business decisions. Read how AI can now be used in various industries using the latest hardware and software.
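
As a minimal sketch of that CPU-plus-accelerator pattern, the snippet below assumes PyTorch as one example framework (the article names no specific software): the host CPU orchestrates the work, and the data-heavy math is offloaded to a GPU when one is present.

```python
# Minimal sketch of the heterogeneous pattern described above: the CPU orchestrates,
# while an attached accelerator (a GPU here, via PyTorch) does the data-heavy math.
import torch

def score_batch(features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Run one inference-style pass on whatever accelerator is available."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    features = features.to(device)               # move the bulky data to the accelerator once
    weights = weights.to(device)
    with torch.no_grad():
        scores = torch.relu(features @ weights)  # the compute-intensive kernel runs on the device
    return scores.cpu()                          # bring only the small result back to the host

if __name__ == "__main__":
    x = torch.randn(10_000, 512)                 # stand-in for a large batch of sensor or grid data
    w = torch.randn(512, 8)
    print(score_batch(x, w).shape)               # torch.Size([10000, 8])
```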

HPE IDOL Machine Learning Engine Adds Natural Language Processing

“Building on HPE IDOL’s history of delivering industry-leading analytics engineered for human data, IDOL Natural Language Question Answering is the industry’s first comprehensive approach to delivering enterprise class answers,” said Sean Blanchflower, vice president of engineering, Big Data Platform, Hewlett Packard Enterprise. “Designed to meet the demanding needs of data-driven enterprises, this new, language-independent capability can enhance applications with machine learning powered natural language exchange.”

Accelerating the Speed and Accessibility of Artificial Intelligence Technologies

As AI technologies become even faster and more accessible, the computing community will be positioned to help organizations achieve the levels of efficiency needed to solve the world’s most complex problems and to increase safety, productivity, and prosperity. Learn more about AI technologies … download this white paper.

Interview: Bill Mannel and Dr. Eng Lim Goh on What’s Next for HPE & SGI

In this video, Bill Mannel, VP & GM, High-Performance Computing and Big Data, HPE, and Dr. Eng Lim Goh, SVP & CTO of SGI, join Dave Vellante & Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”

HPE Apollo 6500 for Deep Learning

“With up to eight high performance NVIDIA GPUs designed for maximum transfer bandwidth, the HPE Apollo 6500 is purpose-built for HPC and deep learning applications. Its high ratio of GPUs to CPUs, dense 4U form factor and efficient design enable organizations to run deep learning recommendation algorithms faster and more efficiently, significantly reducing model training time and accelerating the delivery of real-time results, all while controlling costs.”
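
The quote describes a dense multi-GPU node; as a generic illustration (not an HPE recipe, and assuming PyTorch purely for the example), the sketch below shows the simplest form of data parallelism, where each available GPU holds a model replica and processes a slice of every batch.

```python
# Generic data-parallel training sketch: replicate the model across however many GPUs
# are present (up to eight on a node like the one described) and split each batch among them.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 1))
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)          # one replica per GPU, gradients averaged automatically
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(10):                      # toy training loop on synthetic data
    x = torch.randn(1024, 256, device=device)
    y = torch.randn(1024, 1, device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                         # backward pass runs on every replica in parallel
    optimizer.step()
```

More GPUs per node mean larger effective batches per step, which is what shortens model training time in practice, provided the interconnect can keep the replicas synchronized.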

NIH Powers Biowulf Cluster with Mellanox EDR 100Gb/s InfiniBand

Today Mellanox announced that NIH, the U.S. National Institutes of Health’s Center for Information Technology, has selected Mellanox 100Gb/s EDR InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is the result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. “The Biowulf cluster is NIH’s core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research.”
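
To illustrate why interconnect latency matters at this scale, here is a trivial mpi4py sketch (an assumption for illustration only; the article does not describe Biowulf’s software stack) of the kind of tightly coupled collective communication that a 100Gb/s EDR fabric accelerates.

```python
# Trivial mpi4py sketch of a latency-sensitive collective; nothing here is specific
# to Biowulf's actual workloads or configuration.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.random.random(1_000_000)            # each rank holds a chunk of a larger computation
local_sum = np.array([local.sum()])

total = np.zeros(1)
comm.Allreduce(local_sum, total, op=MPI.SUM)   # all ranks exchange and combine partial results

if rank == 0:
    print(f"Global sum across {comm.Get_size()} ranks: {total[0]:.2f}")
```

Launched with, for example, `mpirun -n 64 python allreduce_demo.py`, the collective’s completion time is dominated by network latency and bandwidth as the rank count grows, which is exactly where a faster fabric pays off.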