“Because the whitefly species are visually identical, the best way to distinguish them is by examining their genetic differences, so we are deploying a mix of genomics, supercomputing, and evolutionary history. This knowledge will help African farmers and scientists distinguish between the harmless and the invasive species, develop management strategies, and breed new whitefly-resistant strains of cassava. The computational challenge for our team is in processing the genomic data the sequencing machines produce.”
Manufacturing is enjoying an economic and technological resurgence with the help of high performance computing. In this insideHPC webinar, you’ll learn how the power of CAE and simulation is transforming the industry with faster time to solution, better quality, and reduced costs.
Sugon is one of the top HPC vendors in China. With plans to expand operations in the West, the company is once again sponsoring the ISC 2016 conference. “Sugon, formerly known as Dawning, has its roots in the Institute of Computing Technology of the Chinese Academy of Sciences (ICT), and was the first (and is now the largest) local HPC vendor in China. Since 1990, Sugon has been working on high performance computing, producing seven generations of HPC systems, such as Dawning I and the Dawning 1000 through 6000. We have successfully supported more than 10,000 HPC projects.”
In this podcast, the Radio Free HPC team looks at the Top Technology Stories for High Performance Computing in 2015. “From 3D XPoint memory to Co-Design Architecture and NVM Express, these new approaches are poised to have a significant impact on supercomputing in the near future.” We also take a look at the most-shared stories from 2015.
In this week’s Industry Perspective, Katie Garrison of One Stop Systems explains how GPUltima allows HPC professionals to create a highly dense compute platform that delivers a petaflop of performance at greatly reduced cost and space requirements, providing the compute power needed to quickly process the amount of data generated in intensive applications.
The consensus of the panel was that making full use of Intel SSF requires system thinking at the highest level. This entails deep collaboration with the company’s application end-user customers as well as with its OEM partners, who have to design, build and support these systems at the customer site. Mark Seager commented: “For the high-end we’re going after density and (solving) the power problem to create very dense solutions that, in many cases, are water-cooled going forward. We are also asking how can we do a less dense design where cost is more of a driver.” In the latter case, lower end solutions can relinquish some scalability features while still retaining application efficiency.
Although liquid cooling is considered by many to be the future for data centers, the fact remains that some do not yet need to make a full transition to liquid cooling, while others are restricted until the next budget cycle. Whatever the reason, new technologies like Internal Loop are more affordable than liquid cooling and can replace less efficient air coolers. This enables HPC data centers to still utilize the highest performing CPUs and GPUs.
“Scientific research is dependent on maintaining and advancing a wide variety of software. However, software development, production, and maintenance are people-intensive; software lifetimes are long compared to hardware; and the value of software is often underappreciated. Because software is not a one-time effort, it must be sustained, meaning that it must be continually updated to work in environments that are changing and to solve changing problems. Software that is not maintained will either simply stop working, or will stop being useful.”
“The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
Data accumulation is just one of the challenges facing today’s weather and climate researchers and scientists. To understand and predict Earth’s weather and climate, they rely on increasingly complex computer models and simulations based on a constantly growing body of data from around the globe. “It turns out that in today’s HPC technology, moving data in and out of the processing units is more demanding in time than the computations performed. To be effective, systems working with weather forecasting and climate modeling require high memory bandwidth and fast interconnect across the system, as well as a robust parallel file system.”
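The claim that data movement, rather than raw computation, limits these codes can be illustrated with a simple roofline-style calculation. The sketch below is not from the article; the peak-performance and bandwidth figures are hypothetical round numbers chosen for illustration, and the flops-per-byte values are typical rough estimates for the kernel types named in the comments.

```python
# Illustrative roofline sketch: attainable performance is capped either
# by peak compute or by memory bandwidth times arithmetic intensity
# (flops performed per byte moved). All numbers are hypothetical.

def attainable_gflops(peak_gflops, mem_bw_gbs, flops_per_byte):
    """Return the roofline bound for a kernel on a given machine."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# Hypothetical node: 1000 GFLOP/s peak compute, 100 GB/s memory bandwidth.
peak, bw = 1000.0, 100.0

# A stencil update, typical of atmospheric dynamics, might perform only
# ~0.2 flops per byte moved: it is memory-bound at 20 GFLOP/s, just 2%
# of the node's peak, no matter how fast the processors are.
stencil = attainable_gflops(peak, bw, 0.2)

# A dense matrix multiply at ~10 flops per byte reaches the full
# 1000 GFLOP/s compute ceiling on the same node.
gemm = attainable_gflops(peak, bw, 10.0)

print(f"stencil: {stencil} GFLOP/s, GEMM: {gemm} GFLOP/s")
```

Under these assumed numbers, the low-intensity stencil achieves only a small fraction of peak, which is why high memory bandwidth and fast interconnects matter more to weather and climate workloads than peak flops alone.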