New Bright Cluster Manager for HPC, OpenStack, and Hadoop Clusters

Today Bright Computing announced plans to roll out a new line of products designed to manage HPC clusters, Apache Hadoop clusters, and OpenStack private clouds.

10th Annual OpenFabrics Workshop Brings Together HPC Interconnect Community

“Over the years the OpenFabrics Alliance has developed a record of technology leadership, and you only get that kind of leadership thinking through the sort of face-to-face collaboration that occurs at the annual workshops. These workshops are where I/O technologists from all branches of the computing world can gather in an environment that fosters creative solutions to problems such as those posed by emerging Exascale computing. The outcome of these workshops ends up being a key driver in focusing the OFA’s efforts in continuing to push I/O technology forward.”

Managing Hyperscale Environments with HP Insight CMU

Bill Cellmaster from HP presented this talk in the Adaptive Computing booth at SC13. “HP Insight Cluster Management Utility (HP Insight CMU) is an efficient and robust hyperscale cluster lifecycle management framework and suite of tools for large Linux clusters such as those found in High Performance Computing (HPC) environments.”

Interview: Terascala and High Performance Data Movement

“Terascala’s intelligent operating system, TeraOS, simplifies managing Lustre®-based storage and optimizes workflows, providing the high throughput storage HPC users need to solve bigger problems faster. For the HPC folks, this means that Terascala-powered storage appliances can reduce run times to hours instead of days or weeks.”

How Big Workflow Optimizes Analysis, Throughput, and Productivity

In this video, Adaptive Computing CEO Rob Clyde discusses the converging worlds of HPC, Big Data, and Cloud. “Big Workflow is an industry term coined by Adaptive Computing that accelerates insights by more efficiently processing intense simulations and big data analysis. Adaptive Computing’s Big Workflow solution derives its name from its ability to solve big data challenges by streamlining the workflow to deliver valuable insights from massive quantities of data across multiple platforms, environments and locations.”

Interview: NSU to Boost Research with “Megalodon” IBM Supercomputer

“The team at the Graduate School of Computer and Information Sciences wanted to give the computer a nickname – a name that would not only convey the supercomputer’s enormity in terms of size, but would also definitively link the supercomputer to NSU. Our mascot is the ‘shark,’ and the Megalodon is the largest prehistoric shark known to man.”

Interview: Steve Conway on the Upcoming HPC User Forum, April 7-9

“The main topics for our April 7-9 meeting in Santa Fe are industrial partnerships with large HPC centers and how they’re working, with perspectives from the U.S., France and the UK. We’ll also take another hard look at what’s happening with processors, coprocessors and accelerators and at potential disruptive technologies, as well as zeroing in on the HPC storage market and trends and the CORAL procurement that involves Oak Ridge, Argonne and Livermore.”

Interview: Argonne Announces Training Program on Extreme-Scale Computing

Paul Messina, Director of Science at Argonne

“Systems like Argonne’s Mira, an IBM Blue Gene/Q system with nearly a million cores, can enable breakthroughs in science, but to use them productively requires expertise in computer architectures, parallel programming, mathematical software, data management and analysis, performance analysis tools, software engineering, and so on. Our training program exposes the participants to all those topics and provides hands-on exercises for experimenting with most of them.”

Slidecast: How Big Workflow Delivers Business Intelligence

In this slidecast, Rob Clyde from Adaptive Computing describes Big Workflow — the convergence of Cloud, Big Data, and HPC in enterprise computing. “The explosion of big data, coupled with the collisions of HPC and cloud, is driving the evolution of big data analytics,” said Rob Clyde, CEO of Adaptive Computing. “A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately and cost effectively, but also provides a distinct competitive advantage.”

Supporting Multiple Researchers Using Moab

In this video from the Adaptive Computing booth at SC13, Brian Andrus from the Naval Postgraduate School presents: Supporting Multiple Researchers Using Moab. To support professors, students, and collaborators, the school’s HPC environment includes a diverse set of x86 processors, accelerators, compilers, and more.
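
As a rough sketch of how a Moab-managed cluster can steer jobs to the right hardware in a mixed environment like this, a minimal job script might tag its resource request with a site-defined node feature. The job name, feature name, and application below are hypothetical:

    #!/bin/bash
    #MSUB -N sample_job            # job name (hypothetical)
    #MSUB -l nodes=2:ppn=8:gpu     # 2 nodes, 8 cores each; "gpu" is a site-defined node feature
    #MSUB -l walltime=01:00:00     # one-hour wall-clock limit
    mpirun ./my_solver             # hypothetical MPI application

The script would be submitted with Moab’s msub command (msub job.sh); the feature tag tells the scheduler to place the job only on nodes an administrator has labeled with that feature, which is one way a single environment can serve researchers with very different hardware needs.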