The Raspberry Pi is a credit-card-sized single-board computer developed in the UK by the Raspberry Pi Foundation with the intention of promoting the teaching of basic computer science in schools. But could this ARM-based device be used to teach supercomputing as well? Joshua Kiepert, a doctoral student at Boise State’s Electrical and Computer Engineering department, published a white paper entitled Creating a Raspberry Pi-Based Beowulf Cluster.
Although an inexpensive bill of materials looks great on paper, cheaper parts come with their own set of downsides. Perhaps the biggest downside is that an RPi is nowhere near as powerful as a current x86 PC. The RPi has a single-core ARM1176 (ARMv6) processor running at 700MHz (though overclocking is supported). Additionally, since the RPi uses an ARM processor, it has a different architecture than PCs, i.e. ARM vs. x86. Thus, any MPI program originally created on x86 must be recompiled when deployed to the RPiCluster. Fortunately, this issue does not arise for Java, Python, or Perl programs. Finally, because of the limited processing capability, the RPiCluster will not handle multiple simultaneous users very well. As such, it would be necessary to create some kind of timesharing system for access if it were ever needed in such a capacity.
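To make the portability point concrete, here is a minimal MPI hello-world in Python using mpi4py (the binding is chosen here purely for illustration; the white paper does not prescribe one). Because the interpreter absorbs the architecture difference, the same script runs unmodified on x86 nodes and on the RPiCluster, whereas the equivalent C program would have to be recompiled for ARM:

```python
# hello_mpi.py -- a minimal MPI example; runs unchanged on x86 and ARM
# because the Python interpreter handles the architecture difference.
from mpi4py import MPI

comm = MPI.COMM_WORLD    # communicator spanning all launched ranks
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of MPI processes

print("Hello from rank %d of %d on %s"
      % (rank, size, MPI.Get_processor_name()))
```

Launched with, e.g., `mpirun -np 32 python hello_mpi.py`, each rank prints its ID and host name regardless of the node’s underlying instruction set.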
“Our customers operate technical computing environments where infrastructure software like Univa Grid Engine is a key component. This partnership allows us to support our customers on all levels, giving them more options to use their compute clusters in the most efficient manner,” says Gerd-Lothar Leonhart, CEO of s+c. “Additionally, the possibility to integrate Univa Grid Engine with Hadoop systems opens up new opportunities to optimize the usage of Big Data installations.”
In this podcast from WBEZ in Chicago, Pete Beckman from Argonne explains how math and supercomputers are accelerating scientific discovery and helping us predict the future. From discovering the secret inner workings of the universe to developing cars that can drive themselves, technology and science are fueling a new breed of massive, smart supercomputers that will improve our world.
Over at the ISC Blog, Thomas Lippert from the Jülich Supercomputing Centre writes that the DEEP project is about to demonstrate that the pitfalls of Amdahl’s law can be avoided in specific situations.
The applications adapted to DEEP were selected to investigate and demonstrate how the combination of hardware, system software, and the programming model can leave the ground and leap beyond the limits of Amdahl’s law of parallel computing. We are eager to show our first results at ISC’13 in Leipzig.
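For reference, Amdahl’s law (quoted here in its standard form, not from the post itself) bounds the achievable speedup of a program with serial fraction $s$ on $N$ processors:

```latex
% Amdahl's law: speedup is bounded no matter how many processors are added.
S(N) = \frac{1}{\, s + \frac{1-s}{N} \,},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{s}
```

Even a 5% serial fraction ($s = 0.05$) caps the speedup at 20x regardless of processor count, which is precisely the limit DEEP’s heterogeneous cluster-booster design aims to sidestep by matching each part of an application to the hardware that suits it.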
Over at Tom’s Hardware, Niels Broekhuijsen writes that new information has surfaced regarding Intel’s upcoming Xeon Phi coprocessors.
Intel’s product database has been updated, and it now shows five new Xeon Phi coprocessors. These five are follow-ups to the original Xeon Phi 5110P, SE10P, and SE10X models. Two lighter Xeon Phi 3100 parts have shown up, along with a mid-range part, the 5120D, and two premium 7100 series parts. The main differences between the new Xeon Phi coprocessors and the previous ones are the silicon aboard, as well as the cooling solutions. Any model with a “P” suffix has a passive cooler, while others have the active drum cooler. Models with a “D” suffix will not ship with a cooler at all.
If rumors hold, the new Xeon Phi coprocessors may hit the market this month. Read the Full Story.
In this slidecast, Rainer Enders from NCP Engineering presents: Debunking the Myths of SSL VPN Security.
“The NCP Secure Enterprise Solution provides a set of software products that enable complete policy freedom, unlimited scaling, multiple VPN-system setup and control, and total end-to-end security. Practically speaking, one administrator is able to handle 10,000+ secure remote users through all phases.”
Today DataDirect Networks announced that University College London has selected DDN technology to provide up to 3,000 researchers with a safe and resilient storage solution for sharing, reusing, and preserving project-based research data.
In an effort to better support researchers, UCL sought to remove the burden of storing and preserving research data from individual users. They selected the combination of DDN’s distributed WOS and GRIDScaler technology to provide the desired scalability, performance, reliability, portability and management simplicity.
“DDN is empowering us to deliver performance and cost savings through a dramatically simplified approach. Add in the fact that DDN’s resilient, extensible storage technology provided evidence for seamless expansion from a half-petabyte to 100PBs, and we found exactly the foundation we were looking for.”
Today Enthought announced that the company has been awarded a $1M Small Business Innovation Research (SBIR) grant by the United States Department of Energy to expand the capabilities of Python and NumPy for high-performance distributed computing.
The open-source Python HPC framework being developed under this Phase II SBIR will help address the growing need for easy access to parallel computing resources by bringing the strengths and ease of use of the popular Python programming language and NumPy multidimensional arrays to high-performance and parallel computing. Comprising three packages, the framework will address current issues hindering scientific computing on HPC systems: accessibility and ease of use for non-computer scientists leveraging existing codes and resources to develop solutions, distributed array computing, and coding for node-level speed-up of computations. The first component will improve accessibility to the Trilinos Project, a set of sophisticated algorithms and technologies used for solving large-scale, complex physics, engineering, and scientific problems, such as those encountered in ocean modeling, Formula 1 race car design, nuclear engineering, digital dentistry, and medical imaging. By wrapping key Trilinos packages in Python, a barrier to entry is removed for Python developers.
“These Trilinos packages, developed primarily at Sandia National Laboratories, allow scientists to solve partial differential equations and large linear, nonlinear, and optimization problems in parallel, from desktops to distributed clusters to supercomputers, with active research on modern architectures such as GPUs,” states Bill Spotz, senior research scientist at Sandia. “This next phase of the project will improve and continue to expand these PyTrilinos interfaces, making Trilinos easier to use.” Spotz will lead the PyTrilinos effort for the Python HPC framework.
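While the framework’s own APIs had not yet been published, the idea behind the distributed array component can be sketched with plain NumPy and mpi4py (both appear here only as illustration; the announcement does not specify the implementation): each rank holds one block of a logically global array, and collective operations combine the per-rank partial results.

```python
# distributed_sum.py -- an illustrative sketch of distributed array
# computing with NumPy + mpi4py; NOT the Enthought framework's API.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_global = 1000000  # logical length of the global array

# Block-partition the index space: ranks with id < remainder get one extra.
counts = [n_global // size + (1 if r < n_global % size else 0)
          for r in range(size)]
start = sum(counts[:rank])  # this rank's offset into the global index space

# Each rank materializes only its own block of the global array.
local = np.arange(start, start + counts[rank], dtype='d')

# Combine per-rank partial sums into one global result on every rank.
global_sum = comm.allreduce(local.sum(), op=MPI.SUM)

if rank == 0:
    print("global sum =", global_sum)  # equals n_global*(n_global-1)/2
```

Run under `mpirun -np 8 python distributed_sum.py`, each rank touches only an eighth of the data, which is the property that lets array programs scale past a single node’s memory.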
The system will have a hybrid configuration, composed of a Fujitsu PRIMEHPC FX10 supercomputer and an HPC cluster built from Fujitsu PRIMERGY CX400 servers. At deployment, it will have a theoretical peak performance of 561.4 teraflops, and it will be scaled up in the future to 3,662.5 teraflops, making it one of the biggest systems in Japan and the largest in the Tokai region, where Nagoya is situated.
The new system is due to start running in October 2013 and will be used for advanced research and academic purposes at Nagoya University’s Information Technology Center.
Nagoya University, the largest national university in the Tokai region and a center of academics and research there, is home to the Information Technology Center, a shared resource for universities and researchers conducting academic research throughout Japan. Since December 1981, numerous researchers have used the mainframe computers and supercomputers deployed there, mostly for work on science and technology.
The new system consolidates the Information Technology Center’s three existing systems: the supercomputer system, application server, and information-academics platform. It was designed to meet demands for more computing capacity, to make computing resources available to other academic areas, to create new computational services, and to help educate people who will push into new areas of inquiry.
In this video, the House Committee on Science, Space, and Technology’s Subcommittee on Energy holds a May 22 hearing to examine HPC research and development challenges and opportunities, specifically as they relate to exascale computing.
Testifying before the Subcommittee were Dr. Roscoe Giles, Chairman of the Advanced Scientific Computing Advisory Committee; Dr. Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne National Laboratory; Ms. Dona Crawford, Associate Director for Computation at Lawrence Livermore National Laboratory; and Dr. Daniel Reed, Vice President for Research and Economic Development at the University of Iowa.
Exascale computing will be an important part of a larger effort to improve the U.S.’s overall high-end computing capability to address a broad range of academic, industrial, and national security needs. While research in next generation computing architecture and software continues to require strategic government investments, Members also explored the significant economic benefits that can arise from full utilization of existing high performance computing capabilities in ongoing scientific research.
In this NPR Podcast, Geoff Brumfiel takes a look at the D-Wave quantum computer. Some skeptics say that the machine does not really exploit the quantum effects that D-Wave claims it does.
“It’s not exactly science, what they’re doing,” says Christopher Monroe, a physicist with the Joint Quantum Institute at the University of Maryland. “It’s high-level engineering, and I think it’s high-level salesmanship, too.” Monroe remains skeptical because, in his view, the D-Wave team has never demonstrated that entanglement is happening on the chips in its machine. He believes that D-Wave’s supposedly quantum bits are actually working instead as tiny electromagnets. Those magnets, Monroe suggests, could be interacting in ways that solve a certain problem very quickly without quantum mechanics. “There’s no evidence that what they’re doing has anything to do with quantum mechanics,” he says. If he is right, then D-Wave’s machine may be far narrower in its abilities than the company believes.
Today the OpenFabrics Alliance (OFA) announced it will host two training classes at the University of New Hampshire Interoperability Lab. The two-day courses, instructed by Dr. Robert D. Russell and Rupert Dance, will provide attendees with hands-on training in managing an InfiniBand fabric, as well as in coding applications directly to the VERBS API using the OpenFabrics Enterprise Distribution (OFED) and related software tools.
Who Should Attend: System Administrators, System Architects, Planners, and Technologists. A background in IP-based or Ethernet-based management is helpful, and familiarity with the target operating system is assumed. The course does not assume RDMA-specific knowledge, and prior RDMA programming is not a prerequisite.
This week IBM announced that the Philippine government has chosen an IBM Blue Gene/Q supercomputer to support R&D projects focused on reducing poverty, improving government processes, and enabling smarter weather management.
“The IBM Blue Gene supercomputer will be most applicable to DOST’s major programs such as NOAH and Smart Agriculture,” said DOST Secretary Mario G. Montejo. “First we will work toward Blue Gene’s integration with Project NOAH to provide more advanced seven-day local weather forecasts. We can also use it to run various weather models and validate the accuracy of results in near real-time.”