The OFA User Workshop, April 18-19, provides opportunities to share experiences and learn from a community of OFS users.
The International Developer’s Workshop, April 21-24, will focus on the development and improvement of OFS as well as major developments in RDMA and related technologies. The agenda and more information are available on OpenFabrics.org.
Registration for the two events is now open. More details are available in this month’s OFA Newsletter, which features an interview with Susan Coulter, HPC Network Administrator at Los Alamos National Laboratory.
One of the missions of the National Renewable Energy Laboratory (NREL) is to advance renewable energy research. So when it came time to build their new HPC datacenter, they decided to “walk the talk” and push the limits of energy-efficient supercomputing.
Well, so far, so good. With the first petascale system to use warm-water liquid cooling and reach an annualized average power usage effectiveness (PUE) rating of 1.06 or better, the new HPC data center ranks among the most efficient in the world.
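To put that 1.06 figure in context, PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment itself; a value of 1.0 would mean zero overhead. The numbers below are illustrative, not NREL’s actual measurements:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. 1.0 is the theoretical ideal (no overhead)."""
    return total_facility_kwh / it_equipment_kwh

# At a PUE of 1.06, overhead (cooling, power distribution, lighting)
# adds only 6% on top of the IT load -- e.g. 600 kWh of overhead
# for every 10,000 kWh consumed by the computing hardware:
print(pue(10_600, 10_000))  # 1.06
```

For comparison, a typical data center of that era ran at a PUE of 1.8 or higher, so warm-water liquid cooling buys a dramatic reduction in wasted energy.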
“We took an integrated approach to the HPC system, the data center, and the building as part of the ESIF project,” said Steve Hammond, NREL’s Computational Science Center Director. “First, we wanted an energy-efficient HPC system appropriate for our workload. This is being supplied by HP and Intel. A new component-level liquid cooling system, developed by HP, will be used to keep computer components within safe operating range, reducing the number of fans in the backs of the racks.”
The first phase of the HPC installation began in November 2012, and the system will reach full capacity in the summer of 2013. Read the Full Story.
In the supercomputing world, we often think of things in tens. Over at TACC, Aaron Dubrow has posted a series of interviews with ten of the top HPC minds in Texas. With a focus on what terascale, petascale, and exascale mean to them and their field, the interviews cover a broad base of application user space.
“Recently, a mouse with diminished interferon (proteins made and released by host cells in response to the presence of pathogens such as viruses, bacteria, parasites or tumor cells) was identified in our laboratory,” said Bruce Beutler, Regental Professor and Director, Center for Genetics of Host Defense, UT Southwestern Medical Center. “Because we had sequenced the genomic DNA of its grandfather, we knew that the mouse likely had a mutation in a gene coding for the protein kinase TBK1. Without any mapping, we determined the exact cause of the phenotype. In former times, before it was possible to routinely sequence the genome of these mice, we would have spent months and thousands of dollars arriving at this same conclusion. This is a hint of the speed and dexterity that advanced computing can provide. It permits us to exclude obvious causes of phenotype and concentrate on what is new. Our research would be impossible without enormous computational resources.”
Over at the HPC Notes, Andrew Jones has posted a rather tough quiz on supercomputing topics.
Can you name these supercomputers? I’m looking for actual machine names (e.g. ‘Sequoia’) and the host site (e.g. LLNL). Bonus points for the funding agency (e.g. DOE NNSA) and the machine type (e.g. IBM BlueGene/Q).
This is going to take some detective work to win, but Jones offers some hints in his Twitter stream. Read the Full Story and Enjoy!
Our own version of March Madness begins this week with news coverage of three back-to-back high performance computing events. We’ll bring you on-site interviews, presentations, and more from the following conferences:
HPC Advisory Council Switzerland Workshop. First up this week, we’re headed to beautiful Lugano for the annual three-day workshop. They have a great agenda lined up with talks on HPC essentials, new and emerging technologies, best practices, and hands-on training. Of course, we’ll bring you videos of as many of the presentations as we can right here on insideHPC!
GPU Technology Conference. GTC is the place to learn about and share how advances in GPU technology help scientists, developers, graphic artists, designers, researchers, engineers, and IT managers tackle their day-to-day computational and graphics challenges. At insideHPC, we’ll be featuring exclusive live-stream sessions from the conference, so tune in right here starting Tuesday, March 19 at 9:00 am Pacific Time.
National HPCC Conference. One of the oldest conferences in HPC continues in Newport March 26-28 with its unique blend of education, training, networking, and partnership building. We’ll be taping key sessions on high performance computing topics with a focus on Big Data and Digital Manufacturing. Register now at: www.hpcc-usa.org
Our travel schedule is filling up for April as well, so check out what we have in store at our Featured Events page. Viva HPC!
Since 2012, PRACE has been offering European companies an industrial R&D service, based on a set of complementary high-level services including information and networking, training, access to leading HPC resources and expertise, and code-enabling of open-source applications. Now this documented project offer has been recognised as an effort in catalysing European industrial competitiveness.
The FP7 Success Story Competition highlights the three best success stories from the FP7 Capacities funding programme in e-Infrastructures. Project success stories in the competitive industry category show how the project made Europe a more attractive location to invest in research and innovation, by promoting activities where businesses set the agenda. The project is aimed at helping innovative SMEs to grow into world-leading companies.
Since the establishment of the PRACE Open R&D industrial offer in January 2012, PRACE has attracted more than 10 European companies, both large companies and SMEs (Small and Medium Enterprises), to use its HPC facilities as well as its other high-value services.
“This award in ‘competitive industries’ will foster our motivation to work on engaging industrial users on the PRACE research infrastructure in order to boost European competitiveness,” said Stephane Requena, author of the project paper and member of the board of directors of PRACE. “PRACE is working on increasing the use of a leading European infrastructure by all academic and industrial communities and is catalysing technological transfer between academia and industry through open innovation projects. In that sense we are working in the field of the FP7 funded PRACE-3IP implementation project on a tailored evangelisation programme called SHAPE (SME HPC Adoption Programme in Europe) which aims to help SMEs to co-design and demonstrate a concrete industrial project on PRACE facilities.”
Today OpenSFS announced that Tommy Minyard from TACC has been elected the Community Representative Director for the 2013 term. The term runs from March to March each year.
“From the early days, TACC has been a major supporter of the work OpenSFS has done leading Lustre and other open source file systems development. I thank the OpenSFS board for this vote of confidence. I really look forward to contributing in this role, squarely focused on the community,” said Tommy Minyard, Director of Advanced Computing Systems at the Texas Advanced Computing Center (TACC). “Lustre has come a long way in the past two years, but we need to continue to keep the community in the forefront. The more involvement, the stronger the community gets.”
Minyard replaces Stephen Simms from Indiana University who has served as Community Representative Director for an extremely successful 2012 term. Read the Full Story.
Over at Crain’s Blogs, Joe Cahill writes that the current Federal budget impasse imperils cutting-edge work at the Chicago area’s biggest scientific centers including Fermilab, Argonne National Laboratory and NCSA in Champaign-Urbana.
“Funding cuts to DOE’s basic science mission would be severe,” Energy Secretary Steven Chu warned earlier this month in a letter to Sen. Barbara Mikulski, D.-Md., who leads the Senate Appropriations Committee. Mr. Chu said sequestration would squeeze research funding, delay construction projects and generally curtail operations at DOE facilities around the country, including the national laboratories. That would come on top of cuts Fermilab already has made as a result of President Barack Obama’s proposed fiscal 2013 budget, which would reduce the lab’s funding by 8 percent. In response, the lab eliminated 49 jobs, or about 3 percent of its staff.
The HPC Midlands supercomputing facility will host their launch event in the U.K. on March 20. As a provider of state-of-the-art e-infrastructure for research and industry, HPC Midlands features a 3,000 core supercomputer combined with HPC expertise from Loughborough University and the University of Leicester.
“Since establishing HPC Midlands with the financial backing of the Engineering and Physical Sciences Research Council, we have worked closely with academic colleagues and a range of industrial partners to refine the service to ensure that it meets business as well as academic needs,” said Dr Steven Kenny, Director of HPC Midlands. “Now we are ready to invite small and large businesses with specialist computing requirements to come along and see how they can benefit from this world-class facility.”
The launch event will give delegates the chance to meet the team behind HPC Midlands and explore opportunities for collaboration. Case study presentations will showcase how companies like Tata Steel, E.ON, and Rolls Royce already benefit from working closely with HPC Midlands. Read the Full Story.
The National Renewable Energy Laboratory, located in the foothills of the Rocky Mountains in Golden, Colorado, is the nation’s primary laboratory for research and development of renewable energy and energy efficiency technologies. The NREL Computational Science Center (CSC) has an immediate opening for a High Performance Computing Systems Engineer. This senior position is responsible for implementing and operating HPC systems and related infrastructure that support scientific and technical computing for NREL’s mission.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, our insideHPC Job Board is powered by SimplyHired, the world’s largest job search engine.
As a reminder, we are offering FREE job listings for .EDU and .GOV domains, so email us at info@insideHPC.com for a special discount code.
In this video, Sean Wilkinson from the University of Alabama at Birmingham demonstrates QMachine, a web service that allows ordinary web browsers to execute distributed workloads, all without installing anything.
QMachine (QM) is a web service that uses Quanah to create a distributed computer that can use ordinary web browsers as ephemeral nodes. It contains three main components: an API server, a web server, and a website. The API server and the web server are both implemented in Node.js and available for use in server environments via NPM. The API server supports CORS and configurable persistent storage for a variety of popular databases, including Apache CouchDB, MongoDB, PostgreSQL, Redis, and SQLite.
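The idea behind QM can be illustrated with a tiny volunteer-computing sketch: a coordinator keeps a queue of tasks, and ephemeral workers (in QM’s case, ordinary web browsers polling the API server) check out work, execute it, and report results back. The class and method names below are purely illustrative and are not QMachine’s actual API:

```python
from queue import Queue

class Coordinator:
    """Holds pending tasks and collects results, like QM's API server."""
    def __init__(self):
        self.tasks = Queue()
        self.results = {}

    def submit(self, task_id, func, arg):
        # Enqueue one unit of work.
        self.tasks.put((task_id, func, arg))

    def checkout(self):
        # A worker polls for the next unit of work (a browser would do
        # this over HTTP, which is why QM's API server supports CORS).
        return None if self.tasks.empty() else self.tasks.get()

    def report(self, task_id, value):
        self.results[task_id] = value

def worker(coord):
    # An ephemeral node: do work while any exists, then disappear.
    while (unit := coord.checkout()) is not None:
        task_id, func, arg = unit
        coord.report(task_id, func(arg))

coord = Coordinator()
for i in range(5):
    coord.submit(i, lambda x: x * x, i)
worker(coord)
print(coord.results)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```

In the real system the coordinator and workers live on different machines, the queue is backed by one of the databases listed above, and the “functions” are JavaScript shipped to the browser, but the checkout/compute/report loop is the same.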
The SC13 conference is seeking proposals for the Emerging Technologies Track, which is a new element of their Technical Program. Aimed at providing an exhibit showcase for novel projects at a national or international scale, the Emerging Technologies Track differs from other aspects of the technical program in that it will provide a forum for discussing large-scale, long-term efforts in HPC, networking, storage, and analysis.
Emerging Technologies welcomes exhibitions of real hardware prototypes and demonstrations of software as well as project presentations in poster form, animated displays, and scheduled presentations or discussions. Successful projects will display future technologies with the potential to influence computing and society as a whole.
Submissions are due July 31, 2013. Read the Full Story.
In this podcast from the Leonard Lopate Show, Author Viktor Mayer-Schönberger explores how Big Data will affect the economy, science, and society at large.
“Big data” refers to our burgeoning ability to crunch vast collections of information, analyze it instantly, and draw sometimes profoundly surprising conclusions from it. Big Data: A Revolution that Will Transform How We Live, Work, and Think shows how this emerging science can translate myriad phenomena—from the price of airline tickets to the text of millions of books—into searchable form, and uses our increasing computing power to reach epiphanies that we never could have seen before.
In this video, Phil Webster, Director of Computational Information & Sciences at NASA describes how supercomputer resources power climate science.
“The computer is the climate scientist’s tool — the better the tool, the better the scientific results, and the greater the understanding of what’s happening in the complete Earth system,” says Phil Webster, head of Goddard’s Computational and Information Sciences and Technology Office. “A key challenge for us is to build better machines because what we need doesn’t exist.”