The Notur project pursues a clear, sustainable vision for a Norwegian infrastructure for High Performance Computing (HPC) and computational science: to provide a modern, internationally competitive national HPC infrastructure and to stimulate computational science as the third scientific path. The project serves the Norwegian computational science community by providing infrastructure to individuals and groups involved in education and research at Norwegian universities and colleges, as well as research and engineering at research institutes and in industry that contribute to the funding of Notur.
Over at the Student Cluster Competition Blog, Dan Olds writes that the first Battle of Leipzig in 1813 can’t really compare to what’s coming in June: The Second Battle of Leipzig, aka the ISC’13 Klusterkampf. Now in its second year, the ISC Student Cluster Competition will feature nine teams of university undergrad students in a quest to build the fastest supercomputer on the show floor.
Chemnitz University of Technology is only 51 miles from Leipzig and thus the hometown favorite. The school, founded in 1836, was a fast starter in science and technology. During one pre-1900 period, Chemnitz generated more patent registrations than any other institution in the world. Today the university is ranked near the top of German technical schools, a reputation it will be putting on the line at the student cluster challenge. Judging by their entry application, Chemnitz (or TUC, which stands for Technische Universität Chemnitz) has a deep HPC curriculum, including courses and research on FPGA and GPU application acceleration. Team TUC has partnered with German hardware vendor MEGWARE GmbH to build “TurboTUC,” the Schlachtkreuzer (battlecruiser) they hope to ride to victory.
Read the Full Story.
IDC will once again sponsor its annual HPC Breakfast Briefing at ISC’13 on June 18 in Hall 4 of the Leipzig Congress Center. The event will feature the latest HPC revenue numbers, market forecasts and trends; international competition and initiatives; and exascale plans, as well as awards for ROI in high performance computing.
Attendance and full breakfast are complimentary, so be sure to Register Now.
In this video from the 2013 HPC User Forum, Burak Yenier presents: The HPC Experiment – Paving the way to HPC as a Service.
For the second round of the HPC Experiment, we will apply the cloud computing service model to workloads on remote cluster computing resources in the areas of HPC, Computer Aided Engineering, and the Life Sciences.
In related news, the HPC Experiment site has just added an online exhibit area as a one-stop interactive service directory for Cloud users and service providers, with a focus on High Performance Computing, Big Data, Digital Manufacturing, and Computational Life Sciences.
While we may not get to Exascale by 2020, ground-breaking compute technologies for the SKA telescope are already under development (without involvement of the U.S. Government, by the way). In this video from the 2013 HPC User Forum, Ronald P. Luijten from IBM Research presents: The IBM-DOME Microserver Demonstrator.
The computational and storage demands of the future Square Kilometer Array (SKA) radio telescope are significant. Building on the experience gained in the collaboration between ASTRON and IBM on the Blue Gene-based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy-efficient streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing and analyzing large amounts of data at minimal energy cost.
SC13, the international conference for high-performance computing, networking, storage and analysis, is accepting nominations for three distinguished awards that will be presented at the conference in November.
The IEEE Seymour Cray Computer Science and Engineering Award, the IEEE Sidney Fernbach Memorial Award and the ACM-IEEE Ken Kennedy Award will be announced at SC13, to be held from 17 to 22 November at the Colorado Convention Center, US. Nominations should be made via the SC13 website.
Established in 1997, the IEEE Computer Society Seymour Cray Computer Engineering Award recognises innovative contributions to high-performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. Previous winners have been recognised for design, engineering and intellectual leadership in creating innovative and successful HPC systems.
The IEEE Computer Society Sidney Fernbach Award was established in 1992 in honour of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for solving large computational problems. Nominations that recognise creation of widely-used and innovative software packages, application software and tools are especially solicited. The Fernbach award winner receives a certificate and $2,000.
The ACM/IEEE Ken Kennedy Award, established in 2009, is presented for outstanding contributions to programmability or productivity in computing, together with significant community service or mentoring contributions. The award was established in memory of Ken Kennedy, the founder of Rice University’s nationally ranked computer science program and one of the world’s foremost experts on high-performance computing. Awardees receive a certificate and a $5,000 honorarium.
In this video from the 2013 HPC User Forum, Scott Schultz from Mellanox presents an overview of Mellanox and HPC.
Argonne National Laboratory just wrapped up a two-day event celebrating 30 years of parallel computing. The event hosted many of the visionaries, at the lab and at other institutions, who initiated and contributed to Argonne’s history of advancing parallel computing and computational science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future.
The tradition continues as Argonne explores new paths and paves the way toward exascale computing. Read the Full Story.
In this video from the 2013 HPC User Forum, Stephen Wheat from Intel presents: Future Directions for IA … and more.
You can check out more presentations at the HPC User Forum Video Gallery.
In this video from the 2013 HPC User Forum, John Hengeveld from Intel presents: Big Data Use Cases – The Size of the Data does not define Big Data.
In this video from the 2013 HPC User Forum, Don Lamb from the University of Chicago presents: HPC in Astrophysics.