In this special guest feature, Ferhat Hatay from Fujitsu writes that supercomputing technologies developed for data-intensive scientific computing can be a powerful tool for taking on the challenges of Big Data. We all feel it: data use and growth are explosive. Individuals and businesses are consuming — and generating — more data every day. The […]
“Moore’s Law got deflected in 2004, when it became no longer practical to ramp up the clock speed of CPUs to improve performance. So the chip industry improved CPU performance by adding more processors to a chip in concert with miniaturization. This was extra power, but you could not leverage it easily without building parallel software. Virtual machines could use multicore chips for server consolidation of light workloads, but large workloads needed parallel architectures to exploit the power. So, the software industry and the hardware industry moved toward exploiting parallelism in ways they had not previously done. This is the motive force behind Big Data.”
“The failure of one parallel language — even a high-profile, well-funded, government-backed one — does not dictate the failure of all future attempts any more than early failures in flight or space travel implied that those milestones were impossible. As I’ve written elsewhere, I believe that there are a multitude of reasons that HPF failed to be broadly adopted. In designing Chapel, we’ve forged our strategy with those factors in mind, along with lessons learned from other successful and unsuccessful languages. Past failures are not a reason to give up; rather, they provide us with a wealth of experience to learn from and improve upon.”
The SC14 communications team interviewed Trish to get her perspective on this year’s Supercomputing Conference. “There is a wealth of knowledge from experienced SC volunteers who continue to provide their expertise year after year. The volunteers of SC are the heart of creating the thriving conference each year. My job is to keep them motivated and moving in the same direction.”
In this podcast, the Radio Free HPC team discusses the concept of offloading computation to networked devices such as storage controllers. During a recent Analyst Call with Dan Olds, Mellanox described the potential of network-attached storage processors. Henry wants to know more about the interface to such an environment before he renders an opinion, while Rich notes that companies like Solarflare have been doing this at the NIC level for several years now.
“Over the years I have chaired many parts of the Technical Program, but never had a chance to chair the whole Technical Program. SC plays an important role in the high-performance computing community. It is through the SC Conference that HPC practitioners get an overview of the field, get to showcase our important work, and network with the community.”
In this Industry Perspective, insideHPC editor Rich Brueckner asks our readers an important question: What Would You Do with an Exaflop? “I went looking for such exascale use cases this morning, and I found this remarkable story in Harvard Topics Magazine about how an exascale system could predict heart attacks and artery blockage.”