Today DataDirect Networks announced that University College London has selected DDN technology to provide up to 3,000 researchers with a safe and resilient storage solution for sharing, reusing and preserving project-based research data.
In an effort to better support researchers, UCL sought to remove the burden of storing and preserving research data from individual users. They selected the combination of DDN’s distributed WOS and GRIDScaler technology to provide the desired scalability, performance, reliability, portability and management simplicity.
“DDN is empowering us to deliver performance and cost savings through a dramatically simplified approach. Add in the fact that DDN’s resilient, extensible storage technology provided evidence of seamless expansion from half a petabyte to 100PB, and we found exactly the foundation we were looking for.”
Today Enthought announced that the company has been awarded a $1M Small Business Innovation Research (SBIR) grant by the United States Department of Energy to expand the capabilities of Python and NumPy for high-performance distributed computing.
The open-source Python HPC framework being developed under this Phase II SBIR will help address the growing need to easily access parallel computing resources by bringing the strengths and ease of use of the popular Python programming language and NumPy multidimensional arrays to high-performance and parallel computing. Comprised of three packages, the framework will address current issues hindering scientific computing on HPC systems: accessibility and ease of use for non-computer scientists to leverage existing codes and resources for developing solutions, distributed array computing, and coding for node-level speed-up of computations. The first component will improve accessibility to The Trilinos Project, a set of sophisticated algorithms and technologies used for solving large-scale, complex physics, engineering and scientific problems, such as those encountered in ocean modeling, Formula 1 race car design, nuclear engineering, digital dentistry and medical imaging. By wrapping key Trilinos packages in Python, a barrier to entry to Trilinos is removed for Python developers.
“These Trilinos packages, developed primarily at Sandia National Laboratories, allow scientists to solve partial differential equations and large linear, nonlinear, and optimization problems in parallel, from desktops to distributed clusters to supercomputers, with active research on modern architectures such as GPUs,” states Bill Spotz, senior research scientist at Sandia. “This next phase of the project will improve and continue to expand these PyTrilinos interfaces, making Trilinos easier to use.” Spotz will lead the PyTrilinos effort for the Python HPC framework.
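The distributed-array piece of the framework is not yet public, but the general idea can be sketched with off-the-shelf tools. The snippet below uses mpi4py and NumPy to compute a dot product over a vector that is block-distributed across MPI ranks; the array size and the dot-product operation are hypothetical illustrations, not the Enthought framework or PyTrilinos itself.

```python
# Minimal sketch of distributed NumPy-style array computing using mpi4py.
# Illustrative only: not the Enthought framework or PyTrilinos. The array
# size and the dot-product example are hypothetical.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank owns a local block of a (hypothetical) global 1-D array.
global_n = 1_000_000
local_n = global_n // size
x_local = np.full(local_n, rank + 1.0)   # this rank's slice of the global vector
y_local = np.ones(local_n)

# Local NumPy computation, followed by a global reduction across all ranks.
local_dot = float(np.dot(x_local, y_local))
global_dot = comm.allreduce(local_dot, op=MPI.SUM)

if rank == 0:
    print("global dot product:", global_dot)
```

Run with something like `mpiexec -n 4 python dist_dot.py`; the point is simply that per-rank NumPy work plus MPI reductions is the pattern such a framework aims to hide behind a single array abstraction.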
The system will have a hybrid configuration, combining a Fujitsu PRIMEHPC FX10 supercomputer with an HPC cluster of Fujitsu PRIMERGY CX400 servers. At deployment it will have a theoretical peak performance of 561.4 teraflops, and it will later be scaled up to 3,662.5 teraflops, making it one of the largest systems in Japan and the largest in the Tokai region, where Nagoya is situated.
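For readers curious where headline figures such as 561.4 teraflops come from, theoretical peak is just cores × clock × floating-point operations per cycle, summed over the machine. The sketch below illustrates the arithmetic using the published per-node specs of the PRIMEHPC FX10 (SPARC64 IXfx: 16 cores at 1.848 GHz, 8 flops per cycle); the node count is a hypothetical value, since the announcement does not break its figure down by partition.

```python
# Back-of-the-envelope theoretical peak calculation (illustrative only).
# The per-node FX10 figures are published specs; the node count is
# hypothetical and is not Nagoya's actual configuration.
def peak_tflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    # GFLOPS per node = cores * clock (GHz) * flops/cycle; divide by 1000 for TFLOPS
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

fx10_peak = peak_tflops(nodes=768, cores_per_node=16, clock_ghz=1.848, flops_per_cycle=8)
print(f"Hypothetical FX10 partition peak: {fx10_peak:.1f} TFLOPS")
```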
The new system is due to enter operation in October 2013 and will be used for advanced research and academic purposes at Nagoya University’s Information Technology Center.
Nagoya University, the largest national university in the Tokai region and a center of academics and research there, is home to the Information Technology Center, a shared resource for universities and researchers conducting academic research throughout Japan. Since December 1981, numerous researchers have used the mainframe computers and supercomputers deployed there, mostly for work on science and technology.
The new system consolidates the Information Technology Center’s three existing systems: the supercomputer system, application server, and information-academics platform. It was designed to meet demands for more computing capacity, to make computing resources available to other academic areas, to create new computational services, and to help educate people who will open up new areas of inquiry.
In this video, the House Committee on Science, Space, and Technology’s Subcommittee on Energy holds a May 22 hearing to examine HPC research and development challenges and opportunities, specifically as they relate to exascale computing.
Testifying before the Subcommittee were Dr. Roscoe Giles, Chairman of the Advanced Scientific Computing Advisory Committee; Dr. Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne National Laboratory; Ms. Dona Crawford, Associate Director for Computation at Lawrence Livermore National Laboratory; and Dr. Daniel Reed, Vice President for Research and Economic Development at the University of Iowa.
Exascale computing will be an important part of a larger effort to improve the U.S.’s overall high-end computing capability to address a broad range of academic, industrial, and national security needs. While research in next generation computing architecture and software continues to require strategic government investments, Members also explored the significant economic benefits that can arise from full utilization of existing high performance computing capabilities in ongoing scientific research.
In this NPR podcast, Geoff Brumfiel takes a look at the D-Wave quantum computer. Some skeptics say the machine does not really exploit the quantum effects that D-Wave claims it does.
“It’s not exactly science, what they’re doing,” says Christopher Monroe, a physicist with the Joint Quantum Institute at the University of Maryland. “It’s high-level engineering, and I think it’s high-level salesmanship, too.” Monroe remains skeptical because, he says, the D-Wave team has never demonstrated that entanglement is happening on the chips in its machine. He suspects that D-Wave’s supposedly quantum bits are actually working as tiny electromagnets, and that those magnets could be interacting in ways that solve a certain problem very quickly without any quantum mechanics. “There’s no evidence that what they’re doing has anything to do with quantum mechanics,” he says. If he’s right, then D-Wave’s machine may be far narrower in its abilities than the company believes.
Today the OpenFabrics Alliance (OFA) announced it will host two training classes at the University of New Hampshire Interoperability Lab. The two-day courses, instructed by Dr. Robert D. Russell and Rupert Dance, will provide attendees with hands-on training in managing an InfiniBand fabric as well as coding applications directly to the verbs API using the OpenFabrics Enterprise Distribution (OFED) and related software tools.
Who Should Attend: System Administrators, System Architects, Planners and Technologists. A background in IP- or Ethernet-based management is helpful, and familiarity with the target operating system is assumed. The course does not assume RDMA-specific knowledge, and prior RDMA programming experience is not a prerequisite.
This week IBM announced that the Philippine government has chosen an IBM Blue Gene/Q supercomputer to support R&D projects focused on reducing poverty, improving government processes, and enabling smarter weather management.
“The IBM Blue Gene supercomputer will be most applicable to DOST’s major programs such as NOAH and Smart Agriculture,” said DOST Secretary Mario G. Montejo. “First we will work toward Blue Gene’s integration into Project NOAH to provide more advanced seven-day local weather forecasts. We can also use it to run various weather models and validate the accuracy of results in near real-time.”
A new whitepaper from Intel looks at True Scale InfiniBand performance for HPC applications.
There are two types of InfiniBand architecture available in the marketplace today. The first is the traditional InfiniBand design, created as a channel interconnect for the data center. The second, newer architecture was built with HPC in mind: it is optimized for the key interconnect performance factors of MPI message rate, end-to-end latency and collective performance, resulting in increased HPC application performance. According to the whitepaper, the enhanced Intel True Scale Fabric architecture offers 3x to 17x the MPI (Message Passing Interface) message throughput of the other InfiniBand architecture. For many MPI applications, small-message throughput is an important factor that contributes to overall performance and scalability.
Intel tested a number of MPI applications and found that they performed up to 11 percent better on a cluster based on Intel True Scale Fabric QDR-40 (dual-channel) than on the traditional InfiniBand-based architecture running at FDR (56 Gbps). Download the whitepaper (PDF).
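As a rough illustration of what a message-rate measurement looks like, the sketch below runs a small-message ping-pong between two MPI ranks using mpi4py. It is not the benchmark used in Intel’s whitepaper; the 8-byte payload and iteration count are arbitrary choices.

```python
# Rough small-message ping-pong between rank 0 and rank 1 (illustrative only;
# not Intel's benchmark). Payload size and iteration count are arbitrary.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

iters = 10000
msg = bytearray(8)          # tiny payload: message-rate tests use small messages

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send([msg, MPI.BYTE], dest=1, tag=0)
        comm.Recv([msg, MPI.BYTE], source=1, tag=0)
    elif rank == 1:
        comm.Recv([msg, MPI.BYTE], source=0, tag=0)
        comm.Send([msg, MPI.BYTE], dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    print(f"round-trip time: {elapsed / iters * 1e6:.2f} us")
    print(f"approx. message rate: {2 * iters / elapsed:.0f} msgs/sec")
```

Run with two ranks (e.g. `mpiexec -n 2 python pingpong.py`); real fabric benchmarks use many sending processes per node, which is where the message-rate differences cited above show up.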
The Notur project pursues a clear and sustainable vision for a Norwegian infrastructure for High Performance Computing (HPC) and computational science: to provide a modern, national HPC infrastructure in an international and competitive setting, and to stimulate computational science as the third scientific path. The project serves the Norwegian computational science community by providing infrastructure to individuals and groups involved in education and research at Norwegian universities and colleges, and in research and engineering at research institutes and companies that contribute to the funding of Notur.
Over at the Student Cluster Competition Blog, Dan Olds writes that the first Battle of Leipzig in 1813 can’t really compare to what’s coming in June: The Second Battle of Leipzig, aka the ISC’13 Klusterkampf. Now in its second year, the ISC Student Cluster Competition will feature nine teams of university undergrad students in a quest to build the fastest supercomputer on the show floor.
Chemnitz University of Technology is only 51 miles from Leipzig and thus the hometown favorite. The school, founded in 1836, was a fast starter in science and technology: during one pre-1900 period, Chemnitz generated more patent registrations than any other institution in the world. Currently, the university is ranked near the top of German technical schools, a reputation they’ll be putting on the line at the student cluster challenge. Judging by their entry application, Chemnitz (or TUC, which stands for Technische Universität Chemnitz) has a deep HPC curriculum, including courses and research on FPGA and GPU application acceleration. Team TUC has partnered with German hardware vendor MEGWARE GmbH to build “TurboTUC,” the schlachtkreuzer (battlecruiser) they hope to ride to victory.
IDC will once again sponsor its annual HPC Breakfast Briefing at ISC’13 on June 18 in Hall 4 of the Leipzig Congress Center. The event will feature the latest HPC revenue numbers, market forecasts and trends; international competition and initiatives; and exascale plans, as well as awards for ROI in high performance computing.
Attendance and full breakfast are complimentary, so be sure to Register Now.
In this video from the 2013 HPC User Forum, Burak Yenier presents: The HPC Experiment – Paving the way to HPC as a Service.
For the second round of the HPC Experiment, we will apply the cloud computing service model to workloads on remote cluster computing resources in the areas of HPC, computer-aided engineering, and the life sciences.
In related news, the HPC Experiment site has just added an online exhibit area as a one-stop interactive service directory for cloud users and service providers, with a focus on High Performance Computing, Big Data, Digital Manufacturing, and Computational Life Sciences.