“The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game-changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web 2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security.”
Today the Canada Foundation for Innovation announced an award of $69,455,000 through its Major Science Initiative Fund for the Compute Canada project. This award will be used to continue the operation of the national advanced research computing platform that serves more than 10,000 researchers at universities, post-secondary institutions and research institutions across Canada.
The Pacific Northwest National Laboratory (PNNL) is seeking a Research Scientist for High Performance Computing in our Job of the Week. “The HPC group is seeking a Scientist to actively participate in challenging software and hardware research projects that will impact future High Performance Computing systems as well as constituent technologies. In particular, the researcher will be involved in research into data analytics, large-scale computation, programming models, and introspective run-time systems. The successful researcher will join a vibrant research group whose core capabilities are in Modeling and Simulation, System Software and Applications, and Advanced Architectures.”
In this special guest feature from Scientific Computing World, Cray’s Barry Bolding gives some predictions for the supercomputing industry in 2017. “2016 saw the introduction or announcement of a number of new and innovative processor technologies from leaders in the field such as Intel, Nvidia, ARM, AMD, and even from China. In 2017 we will continue to see capabilities evolve, but as the demand for performance improvements continues unabated and CMOS struggles to drive performance improvements, we’ll see processors becoming more and more power hungry.”
A new study led by a research scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) highlights a literally shady practice in plant science that has in some cases underestimated plants’ rate of growth and photosynthesis, among other traits. “More standardized fieldwork, in parallel with new computational tools and theoretical work, will contribute to better global plant models,” Keenan said.
In this video, a new NASA supercomputer simulation of the planet and debris disk around the nearby star Beta Pictoris reveals that the planet’s motion drives spiral waves throughout the disk, a phenomenon that greatly increases collisions among the orbiting debris. Patterns in the collisions and the resulting dust appear to account for many observed features that previous research has been unable to fully explain.
A new site developed by Tin H compares the HPC virtualization capabilities of Docker, Singularity, Shifter, and Univa Grid Engine Container Edition. “They bring the benefits of containers to the HPC world, and some provide very similar features. The subtleties are in their implementation approach. MPI may be where the biggest differences lie.”
Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on choice of architecture and implementation. This raises the question of what our baseline is, against which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”
Are you planning for ISC 2017? The deadlines for submissions are fast approaching. The conference takes place June 18 – 22, 2017 in Frankfurt, Germany. “Participation in these sessions and programs will help enrich your experience at the conference, not to mention provide exposure for your work to some of the most discerning HPC practitioners and business people in the industry. We also want to remind you that it’s the active participation of the community that helps make ISC High Performance such a worthwhile event for all involved.”
“Managing the work on each node can be referred to as domain parallelism. During the run of the application, the work assigned to each node can generally be isolated from other nodes. The node can work on its own and needs little communication with other nodes to perform the work. The main tool needed for this is MPI, though developers can also take advantage of frameworks such as Hadoop and Spark (for big data analytics). Managing the work for each core or thread requires control one level down. This type of work will typically invoke a large number of independent tasks that must then share data between the tasks.”
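The two levels described above can be sketched in miniature. The snippet below is a hedged illustration (not from the article) of the inner, core-level layer: many independent tasks are fanned out across a thread pool and their results combined afterward. The function names (`process_chunk`, `local_domain_sum`) are hypothetical; in a real HPC application, the outer, domain-parallel layer would be MPI ranks, each running a loop like this on its own subdomain.

```python
# Illustrative sketch of core/thread-level task parallelism within one node.
# The outer, node-level (domain-parallel) layer -- MPI ranks, each owning a
# subdomain -- is assumed and not shown here.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Independent task: sum the squares of one chunk of the local subdomain."""
    return sum(x * x for x in chunk)

def local_domain_sum(domain, n_workers=4):
    """Split this node's subdomain into chunks and process them in parallel."""
    size = max(1, len(domain) // n_workers)
    chunks = [domain[i:i + size] for i in range(0, len(domain), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(process_chunk, chunks)  # many independent tasks
    # Tasks only share data at the end, when partial results are combined.
    return sum(partials)

if __name__ == "__main__":
    subdomain = list(range(1000))       # this node's slice of the global data
    print(local_domain_sum(subdomain))  # -> 332833500
```

The key property matching the description above: each task runs independently on its chunk and data sharing happens only when the partial results are reduced, which is what lets the work scale across cores.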