In this video from the Lustre User Group 2013, Giuseppe Bruno from the Bank of Italy presents: Performance & Functionality Testbed for Clustered Filesystems.
We’ve noticed a rise in the use of Energy-Efficient Computing, especially in HPC and the datacenter. A key factor in the future of large-scale HPC systems, energy efficiency is emerging as perhaps the second big obstacle to reaching exascale. The reason is that the cost of powering an exascale system would be dramatically higher than that of current petascale systems, and power isn’t getting any cheaper. As an industry, HPC will have to weigh the benefits of, or need for, exascale against the cost of housing and powering such systems. The good news is that many systems in Europe are already thinking green because of higher energy costs there. And we’re seeing a stronger presence for Green Computing in the United States, with systems like NICS’ Beacon reaching the top of the Green500, a list that has picked up significant steam since its initial release in 2007.
The other buzzwords include Big Data, Exascale, Petascale Race, and HPC Cloud. Read the Full Story.
In this video from the Lustre User Group 2013, Makia Minich from Xyratex presents: Managing and Monitoring a Scalable Lustre Infrastructure. Download the slides (PDF) or check out our LUG 2013 Video Gallery.
During his talk, Makia mentions an excellent presentation from John West entitled What’s Missing from HPC.
D-Wave Systems, a commercial quantum computing company, has announced the formal launch of its US business.
Industry expert and supercomputing veteran Robert “Bo” Ewald will lead the new business as president and head up global customer operations as the company’s chief revenue officer. New offices and R&D facilities have opened in Palo Alto, California, with more expected in the near future.
“Bo Ewald joining us is huge validation of our business,” said Vern Brownell, CEO of D-Wave Systems. “Bo is a legendary figure in the supercomputing industry. His knowledge and influence reach a wide array of sectors, where he has delivered state-of-the-art high performance solutions for research, defence and intelligence, energy, manufacturing, financial services and genomics. Throughout Bo’s career he has been dedicated to helping organisations solve their most difficult challenges, which perfectly matches the mission of D-Wave. Today we launch our formal presence in the US and will start to expand our business globally. It is gratifying to have Bo at the helm.”
Ewald added: “I’ve been in pioneering technology organisations for a long time with companies that did things that had never been done before and that allowed their customers to do the same. The quantum computers being developed by D-Wave and the applications that will be used by our customers will be an even more revolutionary step than I’ve seen in the industry. People will be able to solve problems that they can only dream about today, on systems that are turning science fiction into science fact.”
Over at HPC Admin, Dell’s Jeff Layton writes that with today’s explosive data growth, at some point you will have to migrate data from one set of storage devices to another. To help move things along, he provides an overview of data migration tools.
At some point during this growth spurt, you will have to think about migrating your data from an old storage solution to a new one, but copying the data over isn’t as easy as it sounds. You would like to preserve the attributes of the data during the migration, including xattrs (extended attributes); losing information such as file ownership or timestamps can cause havoc with projects. Plus, you have to pay attention to the same things for directories; they are just as important as the files themselves (remember that everything is a file in Linux). In this article, I wanted to present some possible tools for helping with data migration, and I covered just a few of them. However, I also wanted to take a few paragraphs to emphasize that you need to plan your data migration if you want to succeed.
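To make the attribute-preservation point concrete, here is a minimal Python sketch (our illustration, not from Layton’s article) of copying a single file while keeping its permissions, timestamps, ownership, and extended attributes. It is Linux-specific, the paths are hypothetical, and the chown step requires root:

import os
import shutil

def copy_with_metadata(src: str, dst: str) -> None:
    """Copy a file, preserving mode, timestamps, ownership, and xattrs."""
    shutil.copy2(src, dst)               # copies data, permission bits, timestamps
    st = os.stat(src)
    os.chown(dst, st.st_uid, st.st_gid)  # keep owner/group (requires root)
    for name in os.listxattr(src):       # replicate xattrs explicitly
        # (copy2 also attempts this on Linux; being explicit avoids silent loss)
        os.setxattr(dst, name, os.getxattr(src, name))

# Hypothetical paths for illustration only:
copy_with_metadata("/mnt/old_storage/results.dat", "/mnt/new_storage/results.dat")

In practice, tools like rsync with its -aAX flags handle all of this, plus directories, ACLs, and hard links, at scale; that is the class of tools the article surveys.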
Read the Full Story.
In a special session at ISC’13, scientists working on the Human Brain Project will discuss their vision and roadmap for computing. Featuring Dr. Henry Markram of EPFL, the June 18 keynote will be entitled Supercomputing & the Human Brain Project – Following Brain Research & ICT on their 10-Year Quest.
The Human Brain Project, recently awarded a 10-year grant by the EU Commission, will pull together all our existing knowledge about the human brain and reconstruct it, piece by piece, in supercomputer-based models and simulations. Federating more than 80 European and international research institutions, the Human Brain Project is estimated to cost 1.19 billion euros. It will be coordinated at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland by neuroscientist Henry Markram, with co-directors Karlheinz Meier of Heidelberg University, Germany, and Richard Frackowiak of Centre Hospitalier Universitaire Vaudois and the University of Lausanne. The project will also include important North American and Japanese partners.
Read the Full Story.
The ISC’13 conference takes place June 16-20 in Leipzig, Germany, and discounted Early Registration ends May 15.
Over at the Intel Datastack Blog, Winston Saunders writes that, considering the rapidly expanding efficiency and performance capability of supercomputing systems, it may be time to upgrade just for the electricity savings alone.
You can see system-level annualized energy costs in the figure. From this point it is pretty straightforward to calculate a payback time for replacing inefficient servers. It’s interesting that the times for return on investment show up as vertical lines. It’s astounding that they are so short. In several cases, less than a year!
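To make that arithmetic concrete, here is a hypothetical back-of-the-envelope version of the payback calculation in Python. The electricity price, PUE, power draws, server price, and consolidation ratio below are our assumptions, not figures from Saunders’ post:

KWH_PRICE = 0.10   # assumed electricity price, $/kWh
PUE = 1.8          # assumed facility power usage effectiveness
HOURS_PER_YEAR = 8760

def annual_energy_cost(avg_watts: float) -> float:
    """Annualized electricity cost of one server, including facility overhead."""
    return (avg_watts / 1000.0) * HOURS_PER_YEAR * PUE * KWH_PRICE

# Assumed scenario: one new server replaces five old ones on performance alone.
old_servers, old_watts = 5, 450
new_watts, new_price = 250, 5000.0

savings = old_servers * annual_energy_cost(old_watts) - annual_energy_cost(new_watts)
print(f"Annual energy savings: ${savings:,.0f}")        # ~ $3,154
print(f"Payback time: {new_price / savings:.1f} years") # ~ 1.6 years

Bump the consolidation ratio or the electricity price and the payback drops below a year, consistent with the strikingly short return-on-investment times Saunders highlights.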
Read the Full Story.
“The Hydra60 is a combination Lustre OSS (object storage server) and OST (object storage target) with two active/active failover nodes and shared storage in a single system chassis with an ultra-dense, 60-drive 6Gb SAS storage infrastructure. With a unified and zonable 6Gb SAS dual-ported backplane and drives, the Hydra60 can sustain remarkable performance while providing high availability to volumes or object storage. With external interface options including FDR InfiniBand, 40/10GbE and 1Gb Ethernet, and support for Linux and Lustre 2.x releases, the Hydra60 makes an excellent storage platform for Lustre performance with HA operation. The design of the Hydra60 provides an affordable, redundant and resilient storage platform by leveraging RAIDZ, thereby eliminating the cost of hardware RAID controller technology.”
For more on Lustre, check out our LUG 2013 Video Gallery.
Japan News reports that the country’s science ministry is considering development of an exascale supercomputer that would be 100 times faster than the K computer, currently the nation’s fastest machine. With a goal of completing the machine by about 2020, the Education, Culture, Sports, Science and Technology Ministry is preparing to request funding for conceptual designs and other areas in next fiscal year’s budget, the sources said.
Exascale computer projects are already under way in the United States, Europe and China, all aiming for completion around 2020. The working group decided to enter the fierce international race to develop an exascale supercomputer because “it would aid scientific and technological development, and help improve industrial competitiveness,” the sources said.
Read the Full Story.
Today ScaleMP announced that, together with its technology and channel partners, it will offer a competitive solution for any customer holding an SGI shared-memory UV quote. According to the company, these competitive solutions will provide 20 percent more memory and 20 percent greater performance at a 20 percent lower price than an eligible SGI quote.
This limited-time offer provides customers with a single source for solutions based on vSMP Foundation software and the latest x86 hardware. With support for up to 256 TB of RAM and 32,768 CPUs, ScaleMP solutions power extreme shared-memory systems. They support a scalable system backplane with over 500 Gbps of bidirectional bandwidth and allow for active-backplane redundancy, preventing an interconnect failure from hurting system stability.
“ScaleMP is very excited to be teaming with our technology partners to provide the industry with more affordable and better performing shared-memory systems. With the growing demand for shared-memory and large-memory applications, we are looking to increase the penetration of software-defined systems into the broader IT ecosystem,” said Shai Fultheim, CEO and founder of ScaleMP.
Solutions based on vSMP Foundation allow a choice of hardware platforms and are available with the most recent Intel and AMD processors. vSMP Foundation will also support Intel’s upcoming Ivy Bridge processors at launch.
Read the Full Story.
In this slidecast, Ken Claffey from Xyratex describes the company’s new ClusterStor 1500 storage system. Designed for scale-out HPC storage solutions, the ClusterStor 1500 delivers HPC performance and efficiency with help from the Lustre file system.
“Departments within larger organizations or medium-sized enterprises today, especially in the commercial, academic and government sectors, represent an underserved market. They need high-performance and scalable storage solutions that are cost-efficient, easy to deploy and manage, and reliable even under heavy workloads,” said Ken Claffey, senior vice president of the ClusterStor business at Xyratex. “Growth in this market segment is being driven by the increasing adoption of simulation applications in a wide range of industries, from car and aircraft design to chemical interactions and financial modeling. Traditional enterprise storage systems are simply not designed to meet the performance needs of these applications, so we engineered and built the affordable and modular ClusterStor 1500 to bring the performance power of Lustre to this underserved and growing market in the way that only ClusterStor can.”
With the ability to scale performance from 1.25GB/s to 110GB/s and raw capacity from 42TB to 7.3PB, the ClusterStor 1500 is purpose-built to satisfy the needs of data-intensive, department-level compute clusters and to provide best-in-class scale-out storage for middle-tier high performance computing environments. The ClusterStor 1500 solution features scale-out storage building blocks, the Lustre parallel filesystem, and a comprehensive management platform that eliminates the guesswork usually associated with building and optimizing your own HPC storage solution.
A high-performance server cluster is enabling researchers at the Institute for Computational Cosmology (ICC), based at Durham University and throughout the wider UK astrophysics community, to better understand the universe by allowing them to model phenomena ranging from solar flares to the formation of galaxies.
The cluster is part of the DiRAC (Distributed Research using Advanced Computing) national facility. As such, members of the UKMHD consortium, ICC members and their national and international collaborators also use the cluster. In total, the cluster is used by researchers at universities in the UK including Leeds, Liverpool, Manchester, St Andrews, Sussex and Warwick, and from abroad by people in Australia, China, Germany and the Netherlands.
The cluster is known as The Cosmology Machine (Cosma) and is a combination of Cosma5, a new IBM and DDN technology infrastructure integrated with Durham University’s existing cluster, Cosma4 (originally installed in January 2011).
Boosted by the new infrastructure, Cosma now has 9,856 CPU cores and 4,096 GPU cores. It includes 71,000 gigabytes (GB) of RAM, and the peak performance of the system is 182 TFlops. Cosma has 3.5 petabytes of storage for the data produced by cosmology applications.
The server cluster and storage were designed, built, and installed by Durham University’s data processing, data management and storage partner, OCF, which will also support the system.
The University of Minnesota is seeking an Assistant Director for Research Cyberinfrastructure in our Job of the Week.
We seek a senior-level candidate to lead our Research Cyberinfrastructure efforts, with particular experience and expertise in data-intensive research. The ideal candidate would bring sound knowledge and demonstrable experience in High Performance Computing (HPC) hardware platforms, scientific software environments, and disciplinary research in a data-intensive field. His/her central role will be to oversee the development of research infrastructure in support of data-intensive activities within the University, including the deployment and operation of platforms and tools to support data-intensive scientific research in an academic setting.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, but our insideHPC Job Board is powered by SimplyHired, the world’s largest job search engine.
As a reminder, we are offering FREE job listings for .EDU and .GOV domains, so email us at: info @ insideHPC.com for a special discount code.
While what we think of as traditional HPC may differ greatly from Big Data analytics, that seems to be changing. With a long history in high performance computing and customers in both worlds, Univa’s Fritz Ferstl shares his unique perspective on where the two worlds overlap and where the potential is greatest for synergy in the future.
This has to be our best show yet, so be sure to check it out.