• Oracle Cloud Speeds HPC & AI Workloads at GTC 2019

    In this video from the GPU Technology Conference, Karan Batta from Oracle describes how the company provides HPC and Machine Learning in the Cloud with Bare Metal speed. “Oracle Cloud Infrastructure offers wide-ranging support for NVIDIA GPUs, including the high-performance NVIDIA Tesla P100 and V100 GPU instances that provide the highest ratio of CPU cores and RAM per GPU available. With a maximum of 52 physical CPU cores, 8 NVIDIA Volta V100 units per bare metal server, 768 GB of memory, and two 25 Gbps interfaces, these are the most powerful GPU instances on the market.” [READ MORE…]
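
    As a rough illustration of what those specs look like from inside such a node, the sketch below queries the physical core count, system memory, and attached GPUs. The psutil and pynvml packages are assumptions chosen for brevity; the article does not prescribe any particular tooling.

    ```python
    # Minimal sketch (assumed tooling): inspect a bare metal GPU node from the inside.
    # psutil and nvidia-ml-py (pynvml) are third-party packages not mentioned in the
    # article -- they are simply one convenient way to read these numbers.
    import psutil
    import pynvml

    pynvml.nvmlInit()
    try:
        print(f"Physical CPU cores : {psutil.cpu_count(logical=False)}")
        print(f"System memory (GB) : {psutil.virtual_memory().total / 2**30:.0f}")

        gpu_count = pynvml.nvmlDeviceGetCount()
        print(f"GPUs attached      : {gpu_count}")
        for i in range(gpu_count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):          # older pynvml versions return bytes
                name = name.decode()
            mem_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 2**30
            print(f"  GPU {i}: {name}, {mem_gb:.0f} GB")
    finally:
        pynvml.nvmlShutdown()
    ```

    On the instance described above, this would report on the order of 52 physical cores, roughly 768 GB of memory, and 8 Tesla V100 GPUs.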

Featured Stories

  • CPU, GPU, FPGA, or DSP: Heterogeneous Computing Multiplies the Processing Power

    Whether your code will run on industry-standard PCs or is embedded in devices for specific uses, chances are there’s more than one processor that you can utilize. Graphics processors, DSPs and other hardware accelerators often sit idle while CPUs crank away at code better served elsewhere. This sponsored post from Intel highlights the potential of Intel SDK for OpenCL Applications, which can ramp up processing power. [READ MORE…]
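
    To make the offload pattern concrete, here is a minimal, hedged sketch of the same idea in Python using pyopencl; the article itself discusses the Intel SDK for OpenCL Applications, so the package choice and kernel are illustrative assumptions rather than anything from the post.

    ```python
    # Minimal sketch of the OpenCL host/kernel offload pattern: the CPU-side host
    # prepares buffers and a kernel, then hands the data-parallel work to whatever
    # device is available (GPU, CPU, or another accelerator). pyopencl is assumed
    # here for brevity; the article discusses the Intel SDK for OpenCL Applications.
    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()      # pick an available OpenCL device
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    program = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # enqueue the kernel

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)                    # copy result back to host
    assert np.allclose(result, a + b)
    ```

    The same host code runs unchanged whichever OpenCL device the runtime selects, which is the portability argument behind heterogeneous computing.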

  • ExaLearn Project to bring Machine Learning to Exascale

    As supercomputers become ever more capable in their march toward exascale levels of performance, scientists can run increasingly detailed and accurate simulations to study problems ranging from cleaner combustion to the nature of the universe. Enter ExaLearn, a new machine learning project supported by DOE’s Exascale Computing Project (ECP) that aims to develop new tools to help scientists apply machine learning to very large experimental datasets and simulations. [READ MORE…]

  • IonQ posts benchmarks for quantum computer

    Today quantum computing startup IonQ released the results of two rigorous real-world tests that show that its quantum computer can solve significantly more complex problems with greater accuracy than results published for any other quantum computer. “The real test of any computer is what can it do in a real-world setting. We challenged our machine with tough versions of two well-known algorithms that demonstrate the advantages of quantum computing over conventional devices. The IonQ quantum computer proved it could handle them. Practical benchmarks like these are what we need to see throughout the industry.” [READ MORE…]

Featured Resource

Lenovo Enables Solutions for the ‘New HPC’

The new HPC, inclusive of analytics and AI and spanning a wide range of technology components and choices, presents significant challenges to a commercial enterprise. Ultimately, though, the new HPC brings opportunities that are worth those challenges. This whitepaper from Lenovo outlines new markets, workloads, and technologies in HPC, and shows how Lenovo’s own products and technologies address the needs of the “new HPC.”

Recent News

Industry Perspectives

  • Video: Cray Announces First Exascale System

    In this video, Cray CEO Pete Ungaro announces Aurora – Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight into the technology, what makes exascale performance possible, and why we’re going to need it. “It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics, and modeling and simulation, all at the same time on the same system, at incredible scale.” [READ MORE…]

  • AI Critical Measures: Time to Value and Insights

    AI is a game changer for industries today, but achieving AI success hinges on two critical factors: time to value and time to insight. Time to value measures how long it takes to realize the value of a product, solution, or offering. Time to insight measures how long it takes to gain actionable insights once that product, solution, or offering is in use. [READ MORE…]
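
    Both measures reduce to elapsed time from a common starting point to a different milestone, as the hedged sketch below illustrates; the milestone names and dates are hypothetical and not taken from the article.

    ```python
    # Minimal sketch: time to value and time to insight as elapsed-time metrics.
    # The milestones and dates are hypothetical, purely to show the bookkeeping.
    from datetime import date

    kickoff        = date(2019, 1, 7)   # AI project starts
    first_insight  = date(2019, 2, 18)  # first actionable insight produced
    value_realized = date(2019, 4, 1)   # measurable business value delivered

    time_to_insight = (first_insight - kickoff).days
    time_to_value   = (value_realized - kickoff).days

    print(f"Time to insight: {time_to_insight} days")
    print(f"Time to value  : {time_to_value} days")
    ```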

Featured from insideBIGDATA

  • SAS Announces $1 Billion Investment in AI at #GTC19
    SAS, driving the future of analytics, is investing $1 billion in AI over the next three years through software innovation, education, expert services and more. The commitment builds on SAS’ already strong foundation in AI which includes advanced analytics, machine learning, deep learning, natural language processing (NLP) and computer vision. Educational programs and expert services […]

Editor’s Choice

  • PASC19 Preview: Brueckner and Dr. Curfman McInnes to Moderate Exascale Panel Discussion

    Today the PASC19 Conference announced that Dr. Lois Curfman McInnes from Argonne and Rich Brueckner from insideHPC will moderate a panel discussion with thought leaders focused on software challenges for Exascale and beyond, mixing “big picture” and technical discussions. “McInnes will bring her unique perspective on emerging Exascale software ecosystems to the table, while Brueckner will illustrate the benefits of Exascale to world-wide audiences.” [READ MORE…]

  • How Mellanox SHARP technology speeds AI workloads

    Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology improves the performance of MPI operations by offloading collective operations from the CPU to the switch network, and by eliminating the need to send data multiple times between endpoints. This innovative approach decreases the amount of data traversing the network as aggregation nodes are reached, and dramatically reduces MPI operation time. Implementing collective communication algorithms in the network also has additional benefits, such as freeing up valuable CPU resources for computation rather than using them to process communication. [READ MORE…]
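
    The collectives SHARP accelerates are the standard MPI ones, so application code does not change; a SHARP-capable fabric and MPI library handle the in-network aggregation underneath. The sketch below shows an ordinary allreduce via mpi4py, an assumed binding chosen purely for illustration.

    ```python
    # Minimal sketch: a standard MPI allreduce. On a SHARP-capable fabric with an
    # MPI library built to use it, the reduction is aggregated inside the switch
    # network instead of being staged through endpoint CPUs -- this code is the
    # same either way. mpi4py is assumed here purely for illustration.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = np.full(4, rank, dtype=np.float64)   # each rank contributes its rank id
    total = np.empty_like(local)

    comm.Allreduce(local, total, op=MPI.SUM)     # sum contributions across all ranks

    if rank == 0:
        print("Sum across ranks:", total)
    ```

    Run with, for example, `mpirun -np 4 python allreduce.py`; whether the reduction is offloaded to the switches is a property of the fabric and the MPI build, not of this code.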

  • Video: The Game Changing Post-K Supercomputer for HPC, Big Data, and AI

    Satoshi Matsuoka from RIKEN gave this talk at the Rice Oil & Gas Conference. “Rather than focusing on double-precision flops, which are of lesser utility, Post-K, and especially its A64FX processor and Tofu-D network, is designed to sustain extreme bandwidth on realistic applications, including those for oil and gas such as seismic wave propagation, CFD, and structural codes, besting its rivals by several factors in measured performance. Post-K is slated to perform 100 times faster on some key applications compared to its predecessor, the K Computer, and will also likely be the premier big data and AI/ML infrastructure.” [READ MORE…]
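
    The applications named in the quote (seismic wave propagation, CFD, structural codes) are typically stencil computations whose performance is governed by memory bandwidth rather than peak flops, which is the design point being described. The sketch below is a generic, assumed illustration of that access pattern, not code from any Post-K application.

    ```python
    # Minimal sketch of a bandwidth-bound stencil update: each point is refreshed
    # from its neighbours, so very little arithmetic is done per byte moved and
    # sustained memory bandwidth dominates the run time. Purely illustrative.
    import numpy as np

    n, steps, c = 1_000_000, 100, 0.25
    u = np.random.rand(n)

    for _ in range(steps):
        # u[i] += c * (u[i-1] - 2*u[i] + u[i+1]) on the interior points
        u[1:-1] += c * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    ```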

  • NVIDIA to Purchase Mellanox for $6.9 Billion

    Today NVIDIA announced plans to acquire Mellanox for approximately $6.9 billion. The acquisition will unite two of the world’s leading companies in HPC. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker. [READ MORE…]

  • Watch Now: the @insideHPC Documentary about Dogs and the People Who Love Them

    I would like to share this very special film with all my readers who happen to be dog lovers. “Once there was a Giant” is a short documentary film about dogs and the dog owners who love them. Written and directed by Richard Brueckner, this film premiered in Portland on Halloween and is now live on YouTube. [READ MORE…]

Sign up for our newsletter and get the latest HPC news and analysis.