Archives for April 2015

Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices

In this video from LUG 2015 in Denver, J. Mario Gallegos from Dell presents: Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices. “Merging the strengths of both technologies to solve big data problems permits harvesting the power of HPC clusters on very fast storage.”

PSC Brings Big Bandwidth to Eight Pennsylvania Schools

The Pittsburgh Supercomputing Center is using Comcast Business Ethernet for secure, private network connections to eight associated colleges.

Job of the Week: HPC User Services Analyst at LSU

Louisiana State University is seeking an HPC User Services Analyst in our Job of the Week.

SGI Powers Earthquake Research in Japan

Today SGI announced that the Earthquake and Volcano Information Center of the Earthquake Research Institute (ERI) at the University of Tokyo has deployed a large-scale parallel computing solution from SGI for leading-edge seismological and volcanological research.

Accelerating the Piz Daint Supercomputer with Allinea

Today Allinea Software released details on a partnership that is helping scientists in research and industry to exploit Piz Daint – Europe’s most powerful supercomputer.

PRACE Celebrates Fifth Anniversary

Today the PRACE Partnership for Advanced Computing in Europe celebrated its fifth anniversary. In the past five years, PRACE has come a long way, growing from a project-based consortium into a fully fledged international association of 25 countries.

Univa Rolls Out Universal Resource Broker

Today Univa announced the Universal Resource Broker, an enterprise-class workload optimization solution for high performance, containerized and shared data centers.

Dell’s GDAP Delivers an Integrated Genomic Processing Infrastructure

Dell has teamed with Intel to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge CPUs from Intel and the systems and storage expertise from Dell create a state-of-the-art solution that is easy to install, manage and expand as required.

Interview: AutoTune – Automated Optimization and Tuning

The main goal of AutoTune is the automatic optimization of applications in the area of HPC, targeting both performance optimization and energy efficiency. In this interview, Michael Gerndt from the Technische Universitaet Muenchen tells us more about the project.

Video: Current Status of ZFS as Backend File System for Lustre

“Intel supports users, system integrators, and OEMs using ZFS with Intel Lustre. In this presentation, we summarize the results of proof-of-concept (PoC) testing on a variety of ZFS configurations. We cover sequential and metadata performance, data integrity, manageability, availability, and reliability. The work identifies the areas where development should be focused in order to fill gaps in performance or functionality, and encourages system administrators to integrate this technology with existing high-availability frameworks like Pacemaker/Corosync. We also cover the most important tunables for ZFS in combination with Lustre and the most notable metrics for Lustre and ZFS.”
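The talk does not enumerate its tunables here, but as a rough illustration, ZFS tuning for Lustre OSTs typically means adjusting dataset properties and kernel module parameters. The sketch below shows a few commonly discussed knobs; the pool/dataset names are hypothetical placeholders and the values are illustrative assumptions, not recommendations from the presentation.

```shell
# Hedged sketch of ZFS tuning for a Lustre OST backend.
# "ostpool/ost0" is a placeholder dataset; values are illustrative only.

# Match the record size to Lustre's large sequential RPCs
zfs set recordsize=1M ostpool/ost0

# Disable atime updates; Lustre maintains its own timestamps
zfs set atime=off ostpool/ost0

# ZFS-on-Linux module parameters:
# disable file-level prefetch, which can hurt OST workloads
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable

# lengthen the transaction-group commit interval for streaming writes
echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
```

For high availability, these datasets would then be placed under Pacemaker/Corosync control, as the abstract suggests, so that a failed OSS node's pools can be imported and its OSTs restarted on a surviving peer.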