SC16 to Showcase Latest Advances in HPC

SC16 returns to Salt Lake City on Nov. 13-18. The six-day supercomputing event features internationally known expert speakers, cutting-edge workshops and sessions, a non-stop student competition, the world’s largest supercomputing exhibition, panel discussions and much more. “No other annual event better showcases the revolutionary advances and possibilities of high performance computing than the ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis. From the impact of HPC on the future of medicine to its transformative power in developing countries and ‘smart cities,’ SC is the premier venue for presenting leading-edge HPC research.”

Seeking Innovators for the StartupHPC Workshop at SC16

In this special guest feature, Cydney Ewald Stevens writes that Salt Lake City will soon host the return of the SC conference along with the third annual StartupHPC Workshop. “People come together at StartupHPC to learn from each other,” said founder Shahin Khan. “These are all leaders in their own right. From successful CxOs and serial entrepreneurs to industry influencers, these leaders come together each year to impart their wisdom and experience, share their own ‘journeys’ and help others prosper as a result.”

ASML Taps Livermore for Extreme UV Chip Manufacturing

Tomorrow’s top supercomputers will require chips built using advanced lithography far beyond today’s capabilities. Towards this end, Lawrence Livermore National Lab today announced a collaboration with ASML, a leading builder of chip-making machinery, to advance extreme ultraviolet (EUV) light sources toward the manufacturing of next-generation semiconductors.

Fujitsu Develops New Architecture for Combinatorial Optimization

Today Fujitsu Laboratories announced a collaboration with the University of Toronto to develop a new computing architecture that tackles a range of real-world issues by solving combinatorial optimization problems, which require finding the best combination of elements out of an enormous set of candidates. “This architecture employs conventional semiconductor technology with flexible circuit configurations to allow it to handle a broader range of problems than current quantum computing can manage. In addition, multiple computation circuits can be run in parallel to perform the optimization computations, enabling scalability in terms of problem size and processing speed.”
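
Fujitsu has not disclosed circuit-level details here, but the problem class itself is easy to illustrate. Below is a minimal sketch of simulated annealing over a QUBO (quadratic unconstrained binary optimization) objective, a standard classical heuristic for such problems; the matrix Q, the cooling schedule, and all parameters are illustrative assumptions, not part of Fujitsu's architecture.

```python
# Minimal sketch: simulated annealing over a QUBO objective.
# Illustrates the problem class only; not Fujitsu's actual design.
import math
import random

def qubo_energy(Q, x):
    """Energy of binary vector x under QUBO matrix Q: E = x^T Q x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=10000, t_start=5.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(Q, x)
    best_x, best_e = list(x), energy
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n)   # propose flipping one random bit
        x[i] ^= 1
        new_e = qubo_energy(Q, x)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if new_e <= energy or rng.random() < math.exp((energy - new_e) / t):
            energy = new_e
            if energy < best_e:
                best_x, best_e = list(x), energy
        else:
            x[i] ^= 1          # reject: undo the flip
    return best_x, best_e

# Toy instance: rewards x0 and x1 individually but penalizes picking both.
Q = [[-1.0,  2.0],
     [ 0.0, -1.0]]
print(anneal(Q))   # expect [1, 0] or [0, 1] with energy -1.0
```

Running many such annealing chains in parallel, each from a different random start, is one simple way to picture the scalability claim about multiple computation circuits working simultaneously.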

Moving Beyond Moore’s Law with the New CRNCH Center at Georgia Tech

Georgia Tech is taking on the challenge of moving computing past the end of Moore’s Law by standing up a new interdisciplinary research center known as CRNCH (the Center for Research into Novel Computing Hierarchies). “We knew that at some point physics would come into play. We hit that wall around 2005,” said Tom Conte, inaugural director of CRNCH and professor in Georgia Tech’s schools of Computer Science and Electrical and Computer Engineering.

HPC: Retrospect & Looking Towards the Next 10 Years

In this video from the HPC Advisory Council Spain Conference, Addison Snell from Intersect360 Research looks back over the past 10 years of HPC and provides predictions for the next 10 years. Intersect360 Research just released their Worldwide HPC 2015 Total Market Model and 2016–2020 Forecast.

Exascale – A Race to the Future of HPC

From megaflops to gigaflops to teraflops to petaflops, and soon to exaflops, the march of HPC performance never stops. This whitepaper details some of the technical challenges that must be addressed in the coming years to reach exascale computing.
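
For a sense of what those prefixes mean in practice, here is a back-of-the-envelope sketch; the 10^21-operation workload is an arbitrary assumption chosen only to show how each tier translates into wall-clock time.

```python
# Hypothetical workload: 10**21 floating-point operations in total.
# Time to completion at each sustained FLOPS tier.
scales = {"Megaflops": 1e6, "Gigaflops": 1e9, "Teraflops": 1e12,
          "Petaflops": 1e15, "Exaflops": 1e18}
work = 1e21  # total operations (assumed for illustration)
for name, flops in scales.items():
    seconds = work / flops
    years = seconds / (86400 * 365.25)
    print(f"{name:>10}: {seconds:.3e} s  ({years:.2e} years)")
```

At a sustained exaflop, the same job that would occupy a petaflop machine for eleven and a half days finishes in under 17 minutes.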

D-Wave Systems Previews 2000-Qubit Quantum Computer

Today D-Wave Systems announced details of its most advanced quantum computing system, featuring a new 2000-qubit processor. The announcement is being made at the company’s inaugural users group conference in Santa Fe, New Mexico. The new processor doubles the number of qubits over the previous generation D-Wave 2X system, enabling larger problems to be solved and extending D-Wave’s significant lead over all quantum computing competitors. The new system also introduces control features that allow users to tune the quantum computational process to solve problems faster and find more diverse solutions when they exist. In early tests these new features have yielded performance improvements of up to 1000 times over the D-Wave 2X system.
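
For context, quantum annealers of this kind minimize an Ising-model objective over spin variables, where the per-qubit biases and coupler strengths encode the user's problem. A sketch of the objective in LaTeX form:

```latex
% Objective minimized by a quantum annealer over spins s_i \in \{-1, +1\}:
% h_i are per-qubit biases, J_{ij} are coupler strengths set by the user.
E(\mathbf{s}) = \sum_{i} h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j
```

Doubling the qubit count enlarges the set of variables and couplers available, which is why a 2000-qubit processor can embed larger problem instances than its predecessor.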

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (deep learning) technologies; indeed, machine learning has been drastically accelerated through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at exascale will require “full data reach,” the ability to analyze data wherever it resides in the system. Onload architectures lack this capability, forcing all data to move to the CPU before any analysis can begin. When data can be analyzed everywhere, every active component in the cluster contributes to the computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
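
As an illustration, the sketch below uses mpi4py to perform an allreduce, the kind of collective reduction that offload-capable interconnects can execute inside the network fabric rather than on the host CPUs; the values and launch command are assumptions for the example, and the program itself is identical under either architecture.

```python
# Minimal sketch of a collective that in-network computing can offload.
# Run with an MPI launcher, e.g.: mpirun -np 4 python allreduce.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local partial result (assumed values).
local_value = float(rank + 1)

# In an onload design, this reduction is computed by the host CPUs;
# an offload design can execute the same call in the switches.
total = comm.allreduce(local_value, op=MPI.SUM)

if rank == 0:
    print(f"sum across {comm.Get_size()} ranks = {total}")
```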