Agenda Posted for HPC User Forum in Tucson, April 11-13

IDC has published the agenda for its next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. “Don’t miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you’ll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics.”

Chalk Talk: What is a Data Lake?

“If you think of a data mart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.” These “data lake” systems will hold massive amounts of data and be accessible through file and web interfaces. Data protection for data lakes will consist of replicas and will not require backup, since the data is not updated. Erasure coding will be used to protect large data sets and enable fast recovery. Open source will be used to reduce licensing costs, and compute systems will be optimized for MapReduce analytics. Automated tiering will be employed to meet performance and long-term retention requirements. Cold storage, which does not require power for long-term retention, will be introduced in the form of tape or optical media.
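To make the MapReduce-style analytics mentioned above concrete, here is a minimal single-process sketch in Python. The directory layout and record format are hypothetical; a real data lake would sit on HDFS or object storage and use a framework such as Hadoop or Spark rather than this toy illustration.

```python
# Minimal map/reduce sketch over raw text files in a (hypothetical) data lake.
from collections import Counter
from pathlib import Path

def map_phase(path: Path) -> Counter:
    """Map step: emit per-word counts for one raw file."""
    counts = Counter()
    with path.open() as f:
        for line in f:
            counts.update(line.split())
    return counts

def reduce_phase(partials: list[Counter]) -> Counter:
    """Reduce step: merge the per-file counts into a single result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    lake_dir = Path("data_lake/raw")  # hypothetical lake location
    partials = [map_phase(p) for p in lake_dir.glob("*.txt")]
    print(reduce_phase(partials).most_common(10))
```

In a production setting the map and reduce steps would run in parallel across the cluster, with the framework handling scheduling and data locality.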

SGI to Deliver Advanced Data Processing for Nagaoka University of Technology

Today SGI Japan announced that the Nagaoka University of Technology has selected the SGI UV 300, SGI UV 30EX, and SGI Rackable servers, along with SGI InfiniteStorage 5600, for its next integrated education and research high-performance computing system. With a tenfold performance increase over the previous system, the new supercomputer will start operation on March 1, 2016.

Compute Canada Sponsors Human Dimensions Open Data Challenge

Compute Canada is partnering with the Social Sciences and Humanities Research Council (SSHRC) to launch the first ever Human Dimensions Open Data Challenge. This challenge, led by social sciences and humanities researchers, will see research teams compete against one another using open-data sets to develop systems, processes, or fully functional technology applications that address the human dimensions of key challenges in the natural resources and energy sectors. The Ontario Centres of Excellence and ThinkData Works have also partnered on this project to provide additional resources and support.

Video: Accelerating Cognitive Workloads with Machine Learning

In this video, Ruchir Puri, an IBM Fellow at the IBM Thomas J. Watson Research Center, talks about building large-scale big data systems and delivering real-time solutions, such as using machine learning to predict drug reactions. “There is a need for systems that provide greater speed to insight — for data and analytics workloads to help businesses and organizations make sense of the data, to outthink competitors as we usher in a new era of Cognitive Computing.”

Accelerating Machine Learning with Open Source Warp-CTC

Today Baidu’s Silicon Valley AI Lab (SVAIL) released Warp-CTC open source software for the machine learning community. Warp-CTC is an implementation of the CTC algorithm for CPUs and NVIDIA GPUs. “According to SVAIL, Warp-CTC is 10-400x faster than current implementations. It makes end-to-end deep learning easier and faster so researchers can make progress more rapidly.”
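As a rough illustration of the CTC (Connectionist Temporal Classification) objective that Warp-CTC accelerates, here is a minimal sketch using PyTorch's built-in torch.nn.CTCLoss rather than Baidu's own bindings; the tensor shapes, vocabulary size, and random inputs are illustrative only.

```python
# Sketch of computing a CTC loss over random network outputs (not Warp-CTC's API).
import torch
import torch.nn as nn

T, N, C = 50, 4, 20  # time steps, batch size, classes (including the blank)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.randint(1, C, (N, 30), dtype=torch.long)      # labels, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)          # frames per utterance
target_lengths = torch.randint(10, 30, (N,), dtype=torch.long) # labels per utterance

ctc = nn.CTCLoss(blank=0)  # blank symbol at index 0
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()            # gradients flow back to the network outputs
```

The appeal of a tuned implementation such as Warp-CTC is that this loss and its gradient must be evaluated over long sequences at every training step, so its speed directly limits end-to-end speech model training.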

Video: Bridges Supercomputer to be a Flexible Resource for Data Analytics

In this video, Nick Nystrom from PSC describes the new Bridges Supercomputer. Bridges sports a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges’ anticipated workloads.

Deep Learning, Ocean Modeling, and HPCG Come to ASC16 Student Supercomputer Challenge

Today the ASC Student Supercomputer Challenge (ASC16) announced details of its Preliminary Contest on January 6. College students from around the world were asked to design a high performance computer system that optimizes the HPCG and MASNUM_WAM applications within a 3000W power budget, as well as to conduct a DNN performance optimization on a standalone hybrid CPU+MIC platform. All system designs, along with the results and the code of the optimized applications, are to be submitted by March 2.

Call for Papers: ACM International Conference on Computing Frontiers

The 2016 ACM International Conference on Computing Frontiers has issued its Call for Papers. The event takes place May 16-18 in Como, Italy. “We seek contributions that push the envelope in a wide range of computing topics, from more traditional research in architecture and systems to new technologies and devices. We seek contributions on novel computing paradigms, computational models, algorithms, application paradigms, development environments, compilers, operating environments, computer architecture, hardware substrates, memory technologies, and smarter life applications.”

Brookhaven Lab Expands Computational Science Initiative

Today the Brookhaven National Laboratory announced that it has expanded its Computational Science Initiative (CSI). The programs within this initiative leverage computational science, computer science, and mathematics expertise and investments across multiple research areas at the Laboratory, including the flagship facilities that attract thousands of scientific users each year, further establishing Brookhaven as a leader in tackling the “big data” challenges at experimental facilities and expanding the frontiers of scientific discovery.