“The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster, (#28 on the November 2016 Top500 list) to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters.”
The OpenSFS Lustre community has posted the agenda for its upcoming LUG 2017 conference. The event takes place May 30 – June 2 in Bloomington, Indiana. The Lustre User Group (LUG) conference is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies. LUG provides […]
“This talk will focus on challenges in designing programming models and runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness.”
“Nimbix has tremendous experience in GPU cloud computing, going all the way back to NVIDIA’s Fermi architecture,” said Steve Hebert, CEO of Nimbix. “We are looking forward to accelerating deep learning and analytics applications for customers seeking the latest generation GPU technology available in a public cloud.”
Costas Bekas from IBM Research Zurich presented this talk at the Switzerland HPC Conference. “IBM Research builds applications that enable humans to collaborate with powerful AI technologies to discover, analyze and tackle the world’s greatest challenges. Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side by side with humans, accelerating our ability to create, learn, make decisions and think.”
In this video, Dana Brunson from Oklahoma State describes the mission of the Oklahoma High Performance Computing Center. Formed in 2007, the HPCC facilitates computational and data-intensive research across a wide variety of disciplines by providing students, faculty and staff with cyberinfrastructure resources, cloud services, education and training, bioinformatics assistance, proposal support and collaboration.
The latest industrial vehicles – as with other areas of automotive design – often involve high-tech composite components and assisted-driving or vehicle automation systems, which require significantly more complex simulation. Automotive design tasks frequently deal with contradictory requirements of this kind: “make something stronger while making it lighter,” explained Sjodin. “Simulations here can be invaluable since modern tools can be set up to sweep over a large range of cases, or to automatically optimize for a certain objective.”
“Iceotope’s novel approach to liquid cooling allows us to deliver compute capability for customers with environments outside the traditional air cooled datacentre – for example a factory shop floor or an office environment where standard servers are too noisy,” said Steve Reynolds, sales director at OCF. “Our partnership with Iceotope enables us to provide an alternative and innovative solution for our customers.”
DDN is helping the University of Edinburgh accelerate its genomics and other industry research. According to Professor Mark Parsons, director of the Edinburgh Parallel Computing Centre, DDN’s high-performance storage supports fast-growing genomics research while enabling multinational companies and smaller businesses to benefit from access to advanced technologies. “We’re entering a period of huge innovation both in HPC and storage,” he said.
Greg Casey from Dell EMC presented this talk at the OpenFabrics Workshop. “This session will focus on the new Gen-Z memory-semantic fabric. The speaker will show the audience why Gen-Z is needed, how Gen-Z operates, what is expected in first products that employ Gen-Z, and encourage participation in finalizing the Gen-Z specifications. Gen-Z will be connecting components inside of servers as well as connecting servers with pools of memory, storage, and acceleration devices through a switch environment.”