In this video from SUSECON 2016, Jo Harris from SUSE sits down with Dr. Figen Ulgen, GM of HPC Software and Cloud at Intel, to discuss women in Open Source and HPC, how Intel is contributing to this initiative, and the need for more women in the field.
“Billed as an exposition into ‘The Future of Cloud HPC Simulation,’ the event brought together experts in high-performance computing and simulation, cloud computing technologists, startup founders, and VC investors across the technology landscape. In addition to product demonstrations with Rescale engineers, including the popular Deep Learning workshop led by Mark Whitney, Rescale Director of Algorithms, booths featuring ANSYS, Microsoft Azure, Data Collective, and Microsoft Ventures offered interactive sessions for attendees.”
Dr. Umit Catalyurek from the Georgia Institute of Technology presented this talk as part of the USC Big Data to Knowledge series. “This lecture will be a brief crash course on computer architecture, high performance computing and parallel computing. We will, again very briefly, discuss how to classify computer architectures and applications, and what to look for in applications to achieve the best performance on different architectures.”
Dr. Amit Sethi from IIT Guwahati presented this talk at GTCx in India. “This talk will cover how medical imaging data can be used to train computer vision systems that automate diagnostic analysis in current clinical practice. Not only that, with more creative use of data, we can go even beyond that to predict the outcome of specific treatments for individual patients. We will cover results from prostate and breast cancers to show that a future is not too far off where algorithms will become a necessary set of tools in a pathologist’s toolbox.”
“Nanomagnetic devices may allow memory and logic functions to be combined in novel ways. And newer, perhaps more promising device concepts continue to emerge. At the same time, research in new architectures has also grown. Indeed, at the leading edge, researchers are beginning to focus on co-optimization of new devices and new architectures. Despite the growing research investment, the landscape of promising research opportunities outside the ‘FET devices and circuits box’ is still largely unexplored.”
Cheyenne is a new 5.34-petaflops, high-performance computer built for NCAR by SGI. Cheyenne will be a critical tool for researchers across the country studying climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other important geoscience topics. In this video, Brian Vanderwende from UCAR describes typical workflows in the NCAR/CISL Cheyenne HPC environment as well as performance […]
In this silent video from the Blue Brain Project at SC16, 865 segments from a rodent brain are simulated with isosurfaces generated from Allen Brain Atlas image stacks. For this INCITE project, researchers from École Polytechnique Fédérale de Lausanne will use the Mira supercomputer at Argonne to advance the understanding of the fundamental mechanisms of the brain’s neocortex.
Matthias Troyer from ETH Zurich presented this talk at a recent Microsoft Research event. “Given limitations to the scaling for simulating the full Coulomb Hamiltonian on quantum computers, a hybrid approach – deriving effective models from density functional theory codes and solving these effective models by quantum computers – seems to be a promising way to proceed for calculating the electronic structure of correlated materials on a quantum computer.”
“Guided by the principles of interactive supercomputing, Lincoln Laboratory was responsible for a lot of the early work on machine learning and neural networks. We now have a world-class group investigating speech and video processing as well as machine language topics including theoretical foundations, algorithms and applications. In the process, we are changing the way we go about computing. Over the years we have tended to assign a specific system to service a discrete market, audience or project. But today those once highly specialized systems are becoming increasingly heterogeneous. Users are interacting with computational resources that exhibit a high degree of autonomy. The system, not the user, decides on the computer hardware and software that will be used for the job.”
In this video, Rich Brueckner from insideHPC moderates a panel discussion on Code Modernization. “SC15 luminary panelists reflect on collaboration with Intel and how building on hardware and software standards facilitates performance on parallel platforms with greater ease and productivity. By sharing their experiences modernizing code, we hope to shed light on what you might see from modernizing your own code.”