“For more than 40 years now, we have enjoyed a proud and storied history with Chippewa Falls, and the opening of our new manufacturing facility affirms our commitment to building our supercomputers in a town that is synonymous with Cray,” said Peter Ungaro, president and CEO of Cray. “Maintaining direct control of our manufacturing process ensures our systems are built with the highest level of quality that customers expect in a Cray product.”
In this video from the 2015 OLCF User Meeting, Buddy Bland from Oak Ridge presents: Present and Future Leadership Computers at OLCF. “As the home of Titan, the fastest supercomputer in the USA, OLCF has an exciting future ahead with the 2017 deployment of the Summit supercomputer. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017.”
Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. “The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem.”
Applications that use 3D Finite Difference (3DFD) calculations are numerically intensive and can be optimized quite heavily to take advantage of the accelerators available in today’s systems. The performance of these stencil-based implementations can and should be tuned carefully. Choices made when designing and implementing the algorithm affect the Arithmetic Intensity (AI), a measure of how efficient an implementation is, computed by comparing the floating-point operations performed to the memory traffic they require.
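As a rough back-of-the-envelope sketch of the AI metric described above (the specific flop and byte counts below are illustrative assumptions for a naive 7-point 3D stencil, not figures from the article):

```python
def arithmetic_intensity(flops_per_point, bytes_per_point):
    """Arithmetic Intensity (AI) = floating-point operations / bytes moved.

    Higher AI means the kernel does more compute per byte of memory
    traffic, which generally maps better onto accelerators.
    """
    return flops_per_point / bytes_per_point

# Illustrative 7-point 3D stencil: one weighted central point plus six
# neighbors gives roughly 7 multiplies + 6 adds = 13 flops per output point.
flops = 13

# Naive traffic estimate: 7 double-precision loads + 1 store, at 8 bytes
# each, deliberately ignoring cache reuse between neighboring points.
bytes_moved = (7 + 1) * 8

print(round(arithmetic_intensity(flops, bytes_moved), 3))  # prints 0.203
```

Blocking and caching optimizations raise the effective AI by reusing loaded neighbors, which is exactly the kind of design choice the article refers to.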
Today Intel Corporation and BlueData announced a broad strategic technology and business collaboration, as well as an additional equity investment in BlueData from Intel Capital. BlueData is a Silicon Valley startup that makes it easier for companies to deploy Big Data infrastructure, such as Apache Hadoop and Spark, in their own data centers or in the cloud.
Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. “For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that go well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the ‘main’ application.”
“Supercomputing should be available for everyone who wants it. With that mission in mind, a team of engineers created Parallella, an 18-core supercomputer that’s a little bigger than a credit card. Parallella is open source hardware; the circuit diagrams are on GitHub and the machine runs Linux. Icing on the cake: Parallella is the most energy efficient computer on the planet, and you can buy one for a hundred bucks. Why does parallel computing matter? How can developers use parallel computing to deliver better results for clients? Let’s explore these questions together.”
“Within the next 12 months, China expects to be operating not one but two 100 Petaflop computers, each containing (different) Chinese-made processors, and both coming on stream about a year before the United States’ 100 Petaflop machines being developed under the Coral initiative. Ironically, the CPU for one machine appears very similar to a technology abandoned by the USA in 2007, and the US Government, through its export embargo, has encouraged China to develop its own accelerator for the other machine.”
Today Intel released Intel Parallel Studio XE 2016, the next iteration of its developer toolkit for HPC and technical computing applications. This release introduces the Intel Data Analytics Acceleration Library, a library for big data developers that turns large data clusters into meaningful information with advanced analytics algorithms.