Interview: Jeff Bonwick on the Secret Sauce behind DSSD

We caught up with Jeff Bonwick from DSSD to learn more about their exciting storage technology for HPC. “Our mission statement was four words: fastest storage on earth. That was our singular goal from day one, which gave the team incredible focus and clarity. Whenever we had to make a tradeoff between performance and something else, performance always won. Always. And it just so happens that when you aim for performance, density comes along for the ride because the more flash chips you have working in parallel, the faster it goes.”

Interview: Moving Beyond POSIX with the new MarFS Object Storage Project

“We wanted to get away from the complexity of POSIX for data, yet retain the parts of POSIX that people are used to (metadata manipulation). By divorcing ourselves from the complications of ensuring a completely POSIX data flow, we can massively simplify the data movement and storage mechanisms. MarFS lets us retain the parts of POSIX that users appreciate for data management (chown, chmod, rename, mv, etc) without inheriting the complexity of managing POSIX semantics for data manipulation. By treating the data as essentially immutable, we can leverage the very simple PUT/GET/DELETE semantics of “cloudy” data storage systems to scale out storage with ease.”
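
To make that split concrete, here is a minimal, hypothetical sketch (not MarFS code; the ObjectStore class and the object-naming scheme are illustrative stand-ins) of a namespace that keeps POSIX-style metadata operations while all data flows through immutable PUT/GET/DELETE calls:

```python
# Illustrative sketch only -- not MarFS source code. It mimics the idea of
# POSIX-flavored metadata operations layered over an immutable object store.

class ObjectStore:
    """Stand-in for a cloud-style object store: PUT/GET/DELETE only."""
    def __init__(self):
        self._objects = {}                    # object_id -> bytes

    def put(self, object_id, data):
        self._objects[object_id] = bytes(data)  # whole-object write, no update-in-place

    def get(self, object_id):
        return self._objects[object_id]

    def delete(self, object_id):
        del self._objects[object_id]


class MarFSLikeNamespace:
    """POSIX-flavored namespace (rename/chmod/chown) over immutable objects."""
    def __init__(self, store):
        self.store = store
        self.entries = {}                     # path -> {"oid", "mode", "owner"}

    def write(self, path, data, owner="user", mode=0o644):
        oid = f"obj-{abs(hash(path))}"        # hypothetical object-naming scheme
        self.store.put(oid, data)             # data lands in the object tier, once
        self.entries[path] = {"oid": oid, "mode": mode, "owner": owner}

    def read(self, path):
        return self.store.get(self.entries[path]["oid"])

    # Metadata manipulation stays cheap and POSIX-like: no data movement at all.
    def rename(self, old, new):
        self.entries[new] = self.entries.pop(old)

    def chmod(self, path, mode):
        self.entries[path]["mode"] = mode

    def chown(self, path, owner):
        self.entries[path]["owner"] = owner
```

Because rename and chmod touch only the metadata table, they cost the same however large the underlying objects are, which is the point of keeping that half of POSIX.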

Why the HPC Industry will Converge on Europe at ISC 2016

In this special guest feature from Scientific Computing World, ISC’s Nages Sieslack highlights a convergence of technologies around HPC, a focus of the ISC High Performance conference, which takes place June 19-23 in Frankfurt. “In addition to the theme of convergent HPC technologies, this year’s conference will also offer two days of sessions in the industry track, specially designed to meet the interests of commercial users. Our focus is Industrie 4.0, a German strategic initiative conceived to take a leading role in pioneering industrial IT, which is currently revolutionizing engineering in the manufacturing sector.”

Slidecast: Advantages of Offloading Architectures for HPC

In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s off-loading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
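
A rough way to see what offloading buys is overlap: the CPU keeps computing while the interconnect progresses a collective. Here is a minimal sketch using MPI-3 non-blocking collectives via mpi4py (the array sizes are arbitrary, and real overlap depends on network hardware that can progress the reduction without the host CPU):

```python
# Sketch: overlapping computation with a collective that offloading hardware
# can progress in the network. Requires mpi4py (MPI-3) and numpy.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

send = np.random.rand(1_000_000)        # local partial results
recv = np.empty_like(send)

# Start the reduction; with an offloading interconnect, the NIC/switch can
# carry this forward while the CPU does other work.
req = comm.Iallreduce(send, recv, op=MPI.SUM)

# Useful computation proceeds while the network handles the collective.
local = np.sin(np.random.rand(1_000_000)).sum()

req.Wait()                              # block only when the result is needed
if comm.rank == 0:
    print("reduced sum:", recv.sum(), "overlapped work:", local)
```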

Radio Free HPC Recaps the GPU Technology Conference

In this podcast, the Radio Free HPC team recaps the GPU Technology Conference, which wrapped up last week in San Jose. Since Rich is traveling around in some desert somewhere, Dan and Henry go it alone and discuss the new Pascal (P100) GPU, NVIDIA’s new server, and what happened at the concurrent OpenPOWER conference.

HPC Market Update and IDC’s Top Growth Areas for 2016 and Beyond

In this video from the HPC User Forum in Tucson, Earl Joseph from IDC presents: 2016 IDC HPC Market Update. “The HPC User Forum was established in 1999 to promote the health of the global HPC industry and address issues of common concern to users. The organization has since grown to 150 members.”

InsideHPC Guide to Technical Computing

Today’s High Performance Computing (HPC) systems offer the ability to model everything from proteins to galaxies. The insights and discoveries offered by these systems are nothing short of astounding. Indeed, the ability to process, move, and store data at unprecedented levels, often reducing jobs from weeks to hours, continues to move science and technology forward at an accelerating pace. This article series offers those considering HPC, both users and managers, guidance on the best way to deploy an HPC solution.

Live Report from LUG 2016 Day 3

In this special guest feature, Ken Strandberg offers this live report from Day 3 of the Lustre User Group meeting in Portland. “Rick Wagner from the San Diego Supercomputer Center presented progress on his team’s replication tool that allows copying large blocks of storage from object storage to their disaster recovery durable storage system. Because rsync is not a tool for moving massive amounts of data, SDSC created recursive worker services running in parallel to have each worker handle a directory or group of files. The tool uses available Lustre clients, a RabbitMQ server, Celery scripts, and bash scripts.”
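
As a rough illustration of that pattern (not SDSC’s actual tool), a Celery application over a RabbitMQ broker can fan a directory tree out to parallel workers; the broker URL, paths, and copy command below are placeholders:

```python
# Sketch of per-directory parallel copy workers using Celery over RabbitMQ.
# Not SDSC's tool; broker URL, paths, and the copy command are placeholders.
import os
import subprocess
from celery import Celery

app = Celery("replicate", broker="amqp://guest@localhost//")  # RabbitMQ broker

@app.task
def copy_directory(src_dir, dst_dir):
    """One worker handles one directory: copy its files, enqueue its subdirs."""
    os.makedirs(dst_dir, exist_ok=True)
    for entry in os.scandir(src_dir):
        if entry.is_dir():
            # Recurse by enqueueing, so idle workers pick up whole subtrees.
            copy_directory.delay(entry.path, os.path.join(dst_dir, entry.name))
        else:
            subprocess.run(["cp", entry.path, dst_dir], check=True)

# Seed the walk from the root; workers started with
#   celery -A replicate worker --concurrency=16
# then drain the queue in parallel.
if __name__ == "__main__":
    copy_directory.delay("/lustre/project", "/archive/project")
```

Each enqueued directory becomes an independent task, so many workers pull subtrees off the queue instead of one rsync process walking the whole tree serially.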

Video: HPC and Hyperscale Trends for 2016

Addison Snell from Intersect360 Research presented this talk at the Switzerland HPC Conference. “Based on updated research studies, Addison Snell of Intersect360 Research will present on forward-looking topics for HPC and Hyperscale markets. With an expanding look at hyperscale, Intersect360 Research will describe the size and influence of the market, including evolving standards like Open Compute Project, OpenStack, and Beiji/Scorpio. Intersect360 Research has also investigated users’ plans for evaluating competing processing and interconnect options, including Xeon, Xeon Phi, GPU, FPGA, POWER, ARM, InfiniBand, and Omni-Path.”

Reducing the Time to Science with Efficient Clouds

In this special guest feature from Scientific Computing World, Dr Bruno Silva from The Francis Crick Institute in London writes that new cloud technologies will make the cloud even more important to scientific computing. “The emergence of public cloud and the ability to cloud-burst is actually the real game-changer. Because of its ‘infinite’ amount of resources (effectively always under-utilized), it allows for a clear decoupling of time-to-science from efficiency. One can be somewhat less efficient in a controlled fashion (higher cost, slightly more waste) to minimize time-to-science when required (in burst, so to speak) by effectively growing the computing estate available beyond the fixed footprint of local infrastructure – this is often referred to as the hybrid cloud model. You get both the benefit of efficient infrastructure use, and the ability to go beyond that when strictly required.”
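
One way to picture the tradeoff Silva describes is as a burst policy: stay on the efficient fixed footprint until the local backlog threatens time-to-science, then pay for elasticity. The thresholds, costs, and function below are hypothetical:

```python
# Hypothetical hybrid-cloud burst policy: trade a little efficiency (cloud
# cost) for time-to-science when the local backlog grows too long.

def choose_placement(queued_core_hours, local_free_cores, deadline_hours,
                     cloud_cost_per_core_hour, budget):
    """Return 'local' or 'burst' for the next batch of work."""
    # Estimated wait if we stay local: backlog divided by free capacity.
    est_wait = queued_core_hours / max(local_free_cores, 1)
    if est_wait <= deadline_hours:
        return "local"              # the efficient fixed footprint suffices
    burst_cost = queued_core_hours * cloud_cost_per_core_hour
    if burst_cost <= budget:
        return "burst"              # pay for elasticity to protect deadlines
    return "local"                  # over budget: accept the wait

# Example: 50,000 core-hour backlog, 2,000 free local cores, 12-hour deadline.
print(choose_placement(50_000, 2_000, 12, 0.05, 5_000))   # -> 'burst'
```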