“One of the most recurrent themes is that of open-source vs. proprietary code. This debate is often painted with the idealistic open-source evangelists on one side, and the business-focused proprietary software advocates on the other. This is, of course, an unfair depiction of the topic. In reality, when debating open-source vs. proprietary, several issues tend to get conflated into one argument – open-source vs. closed-source, free vs. paid-for, restrictive vs. flexible licensing, supported vs. unsupported, code quality, and so on.”
In this video, ITIF hosts a hearing on The Vital Importance of High-Performance Computing to U.S. Competitiveness and National Security. Their recently published report urges U.S. policymakers to take decisive steps to ensure the United States continues to be a world leader in high-performance computing.
Getting started with HPC can be a challenge for SMEs, but managing a cluster doesn’t have to be a struggle. IBM’s Platform Computing group has been helping users stand up and run clusters efficiently for years. Now, with the recently announced IBM Platform LSF Suites for Workgroups and HPC, the company has made it easier than ever to kick the tires on High Performance Computing. “So basically, we would give you all the tools that would allow you to easily migrate from a loose collection of workstations to a small cluster environment. And we would handle the bare metal provisioning and then installing the software that you need really to manage your workload.”
In this special guest feature, Robert Roe from Scientific Computing World describes why Nvidia is in the driver’s seat for Deep Learning. “Nvidia CEO Jen-Hsun Huang’s theme for the opening keynote was based on ‘a new computing model.’ Huang explained that Nvidia builds computing technologies for the most demanding computer users in the world and that the most demanding applications require GPU acceleration. ‘The computers you need aren’t run of the mill. You need supercharged computing, GPU-accelerated computing,’ said Huang.”
Dr. Marc Snir discusses why Argonne is participating in the OpenHPC Community. “OpenHPC can be a good mechanism to make sure all the pieces of open source software in HPC fit well together. It’s an important initiative that can bring together the HPC open source software community. It can make sure that a full stack of HPC software is available in a useful manner to the user community.”
In this special guest feature from Scientific Computing World, Shailesh M Shenoy from the Albert Einstein College of Medicine in New York discusses the challenges faced by large medical research organizations in the face of ever-growing volumes of data. “In short, our challenge was that we needed the ability to collaborate within the institution and with colleagues at other institutes – we needed to maintain that fluid conversation that involves data, not just the hypotheses and methods.”
In this podcast, the Radio Free HPC team recaps the ASC16 Student Cluster Competition in China and the 2016 MSST Conference in Santa Clara. Dan spent a week in Wuxi interviewing ASC16 student teams, and he came back impressed with the LINPACK benchmark tricks from the team at Zhejiang University, which set a new student LINPACK record of 12.03 TFlop/s. Meanwhile, Rich was in Santa Clara for the MSST conference, where he captured two days of talks on Mass Storage Technologies.
IBM has introduced a new way for organizations of all sizes to acquire a full HPC management suite, including a community edition for those just starting out. The new IBM Platform LSF Suites are packages that include more than IBM Platform LSF; they provide additional functionality designed to simplify HPC for users, administrators, and the IT organization.
Altair is making a big investment toward uniting the whole HPC community to accelerate the state of the art (and the state of actual production operations) for HPC scheduling. Altair is joining the OpenHPC project with PBS Pro. They are focused on longevity – creating a viable, sustainable community to focus on job scheduling software that can truly bridge the gap in the HPC world.
In this special guest feature from Scientific Computing World, Darren Watkins from Virtus Data Centres explains the importance of building a data centre from the ground up to support the requirements of HPC users – while maximizing productivity, efficiency, and energy savings. “The reality for many IT users is they want to run analytics that, with the growth of data, have become too complex and time-critical for normal enterprise servers to handle efficiently.”