In this video from SC15, Bryan Catanzaro, senior researcher at Baidu Research’s Silicon Valley AI Lab, describes AI projects at Baidu and how the team uses HPC to scale deep learning. Advancements in High Performance Computing are enabling researchers worldwide to make great progress in AI.
Today Nvidia announced that Facebook will power its next-generation computing system with Tesla GPUs, enabling a broad range of new machine learning applications.
“How can quants or financial engineers write financial analytics libraries that can be systematically and efficiently deployed on an Intel Xeon Phi co-processor or an Intel Xeon multi-core processor without specialist knowledge of parallel programming? A tried and tested approach to obtaining efficient deployment on many-core architectures is to exploit the highest level of granularity of parallelism exhibited by an application. However, this approach may require exploiting domain knowledge to efficiently map the workload to all cores. Using representative examples in financial modeling, this talk will show how Our Pattern Language (OPL) can be used to formalize this knowledge and ensure that the domains of concern for modeling and for mapping the computations to the architecture are delineated. We proceed to describe work in progress on an Intel Xeon Phi implementation of QuantLib, a popular open-source quantitative finance library.”
“Modeling and simulation have been the primary usage of high performance computing (HPC). But the world is changing. We now see the need for rapid, accurate insights from large amounts of data. To accomplish this, HPC technology is repurposed. Likewise, the location where the work gets done is not entirely the same either. Many workloads are migrating to massive cloud data centers because of the speed of execution. In this panel, leaders in computing will share how they, and others, integrate tradition and innovation (HPC technologies, Big Data analytics, and Cloud Computing) to achieve more discoveries and drive business outcomes.”
The democratization of HPC got a major boost last year with the announcement of an NSF award to the Pittsburgh Supercomputing Center. The $9.65 million grant for the development of Bridges, a new supercomputer designed to serve a wide variety of scientists, will open the door to users who have not had access to HPC until now. “Bridges is designed to close three important gaps: bringing HPC to new communities, merging HPC with Big Data, and integrating national cyberinfrastructure with campus resources. To do that, we developed a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges’ anticipated workloads.”
In this video from SC15, Patrick Wolfe from the Alan Turing Institute and Karl Solchenbach from Intel describe a strategic partnership to deliver a research program focused on HPC and data analytics. Created to promote the development and use of advanced mathematics, computer science, algorithms and big data for human benefit, the Alan Turing Institute is a joint venture between the universities of Warwick, Cambridge, Edinburgh, Oxford and UCL, together with the EPSRC.
The University of Toronto is the official winner of Nvidia’s Compute the Cure initiative for 2015. Compute the Cure is a strategic philanthropic initiative of the Nvidia Foundation that aims to advance the fight against cancer. Through grants and employee fundraising efforts, Nvidia has donated more than $2,000,000 to cancer causes since 2011. Researchers from the […]
“What we’re previewing here today is a capability to have an overarching software, resource scheduler and workflow manager that takes all of these disparate sources and unifies them into a single view, making hundreds or thousands of computers look like one, and allowing you to run multiple instances of Spark. We have a very strong Spark multitenancy capability, so you can run multiple instances of Spark simultaneously, and you can run different versions of Spark, so you don’t obligate your organization to upgrade in lockstep.”
“Ngenea’s blazingly fast on-premises storage stores frequently accessed active data on the industry’s leading high performance file system, IBM Spectrum Scale (GPFS). Less frequently accessed data, including backup, archival data and data targeted to be shared globally, is directed to cloud storage based on predefined policies such as age, time of last access, frequency of access, project, subject, study or data source. Ngenea can direct data to specific cloud storage regions around the world to facilitate remote low latency data access and empower global collaboration.”
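The policy-driven placement described in the quote can be sketched in a few lines. This is a hypothetical illustration only: the field names, thresholds, and the `TieringPolicy` class below are invented for the example and are not Ngenea's actual API, which is proprietary. The sketch shows the general idea of routing hot data to the parallel file system and cold data to a project-specific cloud region.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class FileRecord:
    """Metadata a tiering engine might track per file (illustrative)."""
    path: str
    last_access: datetime
    access_count_30d: int   # accesses in the last 30 days
    project: str

@dataclass
class TieringPolicy:
    """Hypothetical policy: keep hot data on GPFS, push cold data to cloud."""
    max_idle: timedelta                       # files idle longer than this are cold
    min_hot_accesses: int                     # files accessed this often stay hot
    cloud_region_by_project: dict = field(default_factory=dict)

    def place(self, f: FileRecord, now: datetime) -> str:
        idle = now - f.last_access
        if idle <= self.max_idle or f.access_count_30d >= self.min_hot_accesses:
            return "gpfs"  # stays on the high-performance file system
        # Cold data: route to the cloud region configured for this project,
        # so collaborators near that region get low-latency access.
        return self.cloud_region_by_project.get(f.project, "cloud:us-east")

policy = TieringPolicy(
    max_idle=timedelta(days=90),
    min_hot_accesses=10,
    cloud_region_by_project={"genomics": "cloud:ap-southeast"},
)
now = datetime(2016, 1, 1)
hot = FileRecord("/data/run42.h5", datetime(2015, 12, 20), 3, "genomics")
cold = FileRecord("/data/old.tar", datetime(2015, 6, 1), 0, "genomics")
print(policy.place(hot, now))   # gpfs (accessed recently)
print(policy.place(cold, now))  # cloud:ap-southeast (idle > 90 days)
```

A real implementation would evaluate such policies inside the file system itself (GPFS exposes an ILM policy language for exactly this kind of rule), rather than in application code.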
At SC15, 1degreenorth announced plans to build an on-demand High Performance Computing Big Data Analytics (“HPC-BDA”) infrastructure at the National Supercomputing Centre (NSCC) Singapore. The prototype will be used for experimentation and proof-of-concept projects by the big data and data science community in Singapore.