The two laws of parallel performance, Amdahl's law and Gustafson's law, quantify strong versus weak scalability and illustrate the balancing act that is parallel optimization.
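As a quick illustration, the two laws can be written in a few lines of Python. This is a generic sketch of the standard formulas, not code from the article; the 5% serial fraction and 64-processor count below are example values chosen for the demo.

```python
def amdahl_speedup(serial_fraction, n):
    """Amdahl's law (strong scaling): speedup for a FIXED problem size
    when the serial fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson_speedup(serial_fraction, n):
    """Gustafson's law (weak scaling): speedup when the problem size
    GROWS with the processor count n."""
    return n - serial_fraction * (n - 1)

# Example: 5% serial work on 64 processors.
print(amdahl_speedup(0.05, 64))     # ~15.4x: strong scaling hits a wall
print(gustafson_speedup(0.05, 64))  # ~60.85x: weak scaling stays near-linear
```

The gap between the two numbers is the balancing act: Amdahl's law caps speedup at 1/serial_fraction for a fixed workload, while Gustafson's law shows near-linear speedup remains achievable if the workload scales with the machine.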
Archives for October 2013
TACC Powers Surgical Simulations
Researchers are using TACC supercomputers for surgical simulations that could save lives.
ESG Looks at Spectra’s BlackPearl for Deep Storage
In this video, ESG Senior Analysts Mark Peters and Jason Buffington give their impressions of the recent Spectra Logic Summit and the company’s BlackPearl storage product. BlackPearl enables the use of tape to easily store massive volumes of data objects and is ideal for any compute environment that needs to store data for long periods of time.
Notre Dame Scales HPC with Allinea MAP
The Center for Research Computing at the University of Notre Dame is using Allinea Software’s tools as part of its mission to improve code efficiency and open its HPC resources to a wider user base.
AWS Powers Largest Genomics Analysis Cluster in the World
Working with DNAnexus and Amazon Web Services, we were able to rapidly deploy a cloud-based solution that allows us to scale up our support to researchers at the HGSC and make our Mercury pipeline analysis data accessible to the CHARGE Consortium, enabling what will be the largest genomic analysis project ever to take place in the cloud.
IU Center Scores Funds For Supercomputing
This week the Indiana University Center for Research in Extreme Scale Technologies secured a pair of NSF grants worth more than $400,000. The first, totaling nearly $200,000, funds the development of an online HPC course to extend America’s reach into the frontiers of exascale computing. There is a dearth of experts […]
A Look into the Quantum Artificial Intelligence Lab
We believe quantum computing may help solve some of the most challenging computer science problems, particularly in machine learning. Machine learning is all about building better models of the world to make more accurate predictions.
Adopting Parallelism… is Mandatory
Through the new Intel Parallel Computing Centers, the company hopes to accelerate the creation of open-standard, portable, scalable parallel applications by combining computational science, hardware, programmer tools, compilers, and libraries with domain knowledge and expertise.
George Papen Presents: Hybrid Datacenter Networks
Hybrid datacenter networks can selectively route packets over either an electrical packet-switched network or an optical circuit-switched network. This kind of network is attractive for scale-out datacenters because of its energy efficiency and scaling properties. However, the control plane for such a network must precisely synchronize the two underlying networks.
Seeking Nominations: Brill Awards for Efficient IT
The Uptime Institute has announced a new awards program, the Brill Awards for Efficient IT, continuing the late Mr. Brill’s vision of sharing best practices and new ideas to improve data center and IT efficiency.