Archives for February 2016

Interview: Victor Eijkhout on an Integrative Model for Parallelism

The Integrative Model for Parallelism (IMP) at TACC is a new development in parallel programming. It allows for high-level expression of parallel algorithms while enabling efficient execution in multiple parallelism modes. We caught up with its creator, Victor Eijkhout, to learn more. “If you realize that both task dependencies and messages are really the dependency arcs in a dataflow formulation, you now have an intermediate representation, automatically derived, that can be interpreted in multiple parallelism modes.”
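The idea in the quote can be illustrated with a toy sketch. Note this is a hypothetical illustration, not IMP's actual API or representation: it shows how a single dependency graph (here the dicts `deps` and `owner`, both invented for this example) can be lowered either to a task-dependency list or to a message schedule, depending on the target parallelism mode.

```python
# Toy sketch of the "one dataflow graph, multiple parallelism modes" idea.
# The names and data layout here are assumptions for illustration only.

# Hypothetical dataflow IR: each task maps to the tasks it depends on,
# and each task is placed on an owning "process".
deps = {"c": ["a", "b"], "d": ["c"]}      # arcs of the dataflow graph
owner = {"a": 0, "b": 1, "c": 0, "d": 1}  # task -> process placement

def as_task_graph(deps):
    """Task-parallel interpretation: each arc becomes a task dependency."""
    return [(task, tuple(sorted(ds))) for task, ds in sorted(deps.items())]

def as_messages(deps, owner):
    """Message-passing interpretation: an arc that crosses process
    boundaries becomes a (datum, sender, receiver) message."""
    msgs = []
    for task, ds in sorted(deps.items()):
        for d in sorted(ds):
            if owner[d] != owner[task]:
                msgs.append((d, owner[d], owner[task]))
    return msgs

print(as_task_graph(deps))   # dependency list for a task scheduler
print(as_messages(deps, owner))  # send/recv pairs for a message-passing runtime
```

The same graph drives both interpretations, which is the essence of the "automatically derived intermediate representation" Eijkhout describes.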

BeeGFS Parallel File System Goes Open Source

Today ThinkParQ announced that the complete BeeGFS parallel file system is now available as open source. Designed specifically for performance-critical environments, BeeGFS was developed with a strong focus on easy installation and high flexibility, including converged setups where storage servers are also used for compute jobs. By increasing the number of servers and disks in the system, the performance and capacity of the file system can simply be scaled out to the desired level, seamlessly from small clusters up to enterprise-class systems with thousands of nodes.

Obama Proposes to Increase NIST Budget

The US Department of Commerce has released details of the President’s budget request for the National Institute of Standards and Technology (NIST) in 2017, proposing to increase spending on HPC and future computing technologies by more than 50 percent. The total discretionary request for NIST is $1 billion, a $50.5 million increase over the amount enacted for 2016. The funding supports NIST’s research in areas such as computing, advanced communications, and manufacturing.

Video: Introduction to the Euroserver for Green Computing

“EUROSERVER is an ambitious and holistic project aimed at arming Europe with leading technology for conquering the new markets of cloud computing. Data centers form the central brains and stores of the Information Society and are a key resource for innovation and leadership. The key challenge has recently moved from just delivering the required performance to also reducing energy consumption and lowering the cost of ownership. Together, these create an inflection point that provides a big opportunity for Europe, which holds a leading position in energy-efficient computing and prominent market positions in embedded systems.”

Florida Atlantic University Selects Bright Cluster Manager for HPC

Today Florida Atlantic University (FAU) announced that it is using Bright Cluster Manager software for its HPC cluster. The 56-node cluster is used for teaching Hadoop MapReduce, for bioinformatics research, and for other modeling and visualization work. Administrators say Bright Cluster Manager has significantly increased automation and scales easily to meet expected future growth.

Radio Free HPC Looks at the Case of the Encrypted iPhone

In this podcast, the Radio Free HPC team looks at Apple’s fight against a court order to decrypt an iPhone used by one of the San Bernardino shooters. “The Radio Free HPC team is split as to what should happen next, but it seems likely that some kind of compromise will result. Is the government entitled to a back door into all devices? It would seem that no one wants such an important policy to be decided by a single case in California.”

FORTISSIMO 2 Call for Proposals: HPC-Cloud-based Application Experiments

Today the Fortissimo 2 consortium of 38 partners announced the launch of its first Open Call for Proposals. The project is funding a set of experiments (sub-projects) to extend and demonstrate the business potential of an ecosystem for HPC-Cloud services, specifically for applications involving the simulation of coupled physical processes or high-performance data analytics. Additional application experiments are sought to complement and extend current project activities; these new experiments must be driven by the business needs of engineering and manufacturing SMEs and mid-caps.

Intersect360 Research to Discuss Hyperscale Trends at Stanford HPC Conference

There is still significant influence between HPC and hyperscale, flowing in both directions, most notably in cognitive computing and artificial intelligence, where research at some of the top hyperscale companies leads the field. Standards like the Open Compute Project, OpenStack, and Beiji/Scorpio can also drive acquisition decisions at traditional HPC-using organizations. Big data and analytics likewise transcend both HPC and hyperscale, driving I/O scalability in both markets. These trends are all covered in the new hyperscale advisory service from Intersect360 Research.

Supercomputing the Mystery of Old Faithful

Over at Science Node, Lance Farrell writes that researchers are using XSEDE supercomputer resources to solve the mysteries of the Old Faithful geyser. “If you look at the distribution of supervolcanoes globally, you’ll find something very interesting. You will see that most if not all of them are sitting close to a subduction zone,” Liu observes. “This close proximity made me wonder if there were any internal relations between them, and I thought it was necessary and intriguing to investigate this further.”

Job of the Week: Principal HPC Enterprise Architect at Leidos

The Surveillance and Reconnaissance Group at Leidos is seeking a talented Principal HPC Enterprise Architect in our Job of the Week. “The successful candidate will serve as the lead HPC technologist and subject matter expert (SME) for the HPCMP at the Army Research Lab (ARL) DoD Supercomputing Resource Center (DSRC) in Aberdeen, MD.”