The National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign is helping change the way genetic medicine is researched and practiced in Africa. Members of the Blue Waters team recently made it possible to discover genomic variants in over 300 deeply sequenced human samples to help construct a genotyping chip specific for […]
The NCSA Blue Waters project is offering a Workflows Workshop virtual course in August. To share this class with as many students as possible, they are seeking universities willing to be a local site and offer the course to their students.
Today the Numerical Algorithms Group (NAG) announced the NAG Software Modernization Service. The new service addresses the porting and performance challenges faced by customers who want to use the capabilities of modern computing systems, such as multi-core CPUs, GPUs, and Xeon Phi. NAG HPC software engineering experts modernize the code to enable portability to appropriate architectures, optimize it for performance, and assure its robustness.
The Pittsburgh Supercomputing Center (PSC) celebrated its 30th anniversary last week. “The beginning of PSC’s fourth decade will see the center with two new supercomputers—the NSF-funded Bridges system, already operational and due for completion this fall, and an Anton 2 molecular dynamics simulation system, provided at no charge by D. E. Shaw Research and with operational funding from the National Institutes of Health to be hosted at PSC also beginning in the Fall.”
While all users of HPC technology want the fastest performance available, price and power consumption always seem to come into play, whether in the initial planning or at a later time. Standard performance measures exist that may or may not relate to an end user’s application mix, but it is important to understand the various benchmark results that go into determining the performance of a CPU, a server or an overall cluster.
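One of the most basic of those benchmark figures is a machine's theoretical peak floating-point rate, which follows directly from the clock speed and core count. The sketch below is purely illustrative; the specific node configuration (2 sockets, 16 cores per socket, 2.3 GHz, 16 double-precision FLOPs per cycle via AVX2 FMA) is an assumed example, not a reference to any system mentioned here.

```python
# Hypothetical back-of-the-envelope calculation of theoretical peak
# performance; all hardware numbers below are illustrative assumptions.

def peak_gflops(sockets, cores_per_socket, clock_ghz, flops_per_cycle):
    """Theoretical double-precision peak of one node, in GFLOPS."""
    return sockets * cores_per_socket * clock_ghz * flops_per_cycle

# Example: 2 sockets x 16 cores x 2.3 GHz x 16 FLOPs/cycle (AVX2 FMA)
print(peak_gflops(2, 16, 2.3, 16))  # -> 1177.6 GFLOPS
```

Real application performance is usually a fraction of this peak, which is exactly why understanding what each benchmark measures matters.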
SC16 has announced the winner of its Test of Time Award. This year the winning paper is "Automatically Tuned Linear Algebra Software" by Clint Whaley and Jack Dongarra. The paper, which has received hundreds of citations with new citations still appearing, is about ATLAS, an autotuning, optimized implementation of the Basic Linear Algebra Subprograms (BLAS).
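The idea ATLAS pioneered can be sketched in miniature: a cache-blocked matrix multiply whose tile size is an empirically tuned parameter. The toy function below is only an illustration of the concept, not ATLAS's actual kernel; ATLAS searches over many such parameters at install time, and the tile size `nb=2` here is an arbitrary assumption.

```python
# Toy sketch of what an autotuned BLAS like ATLAS optimizes: a blocked
# GEMM whose tile size nb is the tunable parameter. Illustrative only.
import numpy as np

def blocked_gemm(a, b, nb=2):
    """Compute C = A @ B tile by tile (toy cache-blocked GEMM)."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m))
    for i in range(0, n, nb):          # tiles of rows of A / C
        for j in range(0, m, nb):      # tiles of columns of B / C
            for p in range(0, k, nb):  # tiles along the shared dimension
                c[i:i+nb, j:j+nb] += a[i:i+nb, p:p+nb] @ b[p:p+nb, j:j+nb]
    return c

a = np.arange(16.0).reshape(4, 4)
b = np.eye(4)
print(np.allclose(blocked_gemm(a, b), a @ b))  # True
```

In the real library, the autotuner times many candidate tile sizes (and other code variants) on the target machine and keeps the fastest, which is how ATLAS achieves portable performance without hand-tuning for each CPU.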
OCF in the UK reports that the company continues to expand its operations. The high performance computing integrator is recruiting a number of new staff to meet the growing demand for HPC and data analytics solutions across universities, research institutes, and commercial businesses in the UK.
Lawrence Livermore National Lab is seeking an Associate Director for Computation in our Job of the Week. LLNL seeks to fill the position of Associate Director (AD) for Computation, a position key to the continued success of LLNL’s world-premier high performance computing, computer science, and data science enterprise. The AD for Computation is responsible for […]
Argonne Distinguished Fellow Paul Messina has been tapped to lead the Exascale Computing Project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project will focus its efforts on four areas: Applications, Software, Hardware, and Exascale Systems.
Intel is offering a 4-part summer series of developer training workshops at Stanford University to introduce high performance computing tools.