Today Deloitte Advisory and Cray introduced the first commercially available high-speed, supercomputing threat analytics service. Called Cyber Reconnaissance and Analytics, the subscription-based offering is designed to help organizations effectively discover, understand and take action to defend against cyber adversaries.
The first Joint International Workshop on Parallel Data Storage and Data Intensive Scalable Computing Systems (PDSW-DISCS’16) has issued its Call for Papers. A one-day event held in conjunction with SC16, the workshop will bring together two overlapping communities to address some of the most critical challenges in scientific data storage, management, devices, and processing infrastructure. To learn more, we caught up with workshop co-chairs Dean Hildebrand (IBM) and Shane Canon (LBNL).
“Data Science and Information Systems researchers at UQ are tackling the challenges of big data, real-time analytics, data modeling and smart information use. The cutting-edge solutions developed at UQ will lead to user empowerment at an individual, corporate and societal level. Our researchers are making a sustained and influential contribution to the management, modeling, governance, integration, analysis and use of very large quantities of diverse and complex data in an interconnected world.”
In this video from ISC 2016, Gabriel Broner from SGI describes the company’s innovative solutions for high performance computing. “As the trusted leader in high performance computing, SGI helps companies find answers to the world’s biggest challenges. Our commitment to innovation is unwavering and focused on delivering market leading solutions in Technical Computing, Big Data Analytics, and Petascale Storage. Our solutions provide unmatched performance, scalability and efficiency for a broad range of customers.”
In this video from the PASC16 conference, Andrew Lumsdaine from Indiana University presents: Context Matters: Distributed Graph Algorithms and Runtime Systems. “The increasing complexity of the software/hardware stack of modern supercomputers makes understanding the performance of the modern massive-scale codes difficult. Distributed graph algorithms (DGAs) are at the forefront of that complexity, pushing the envelope with their massive irregularity and data dependency. We analyze the existing body of research on DGAs to assess how technical contributions are linked to experimental performance results in the field. We distinguish algorithm-level contributions related to graph problems from “runtime-level” concerns related to communication, scheduling, and other low-level features necessary to make distributed algorithms work. We show that the runtime is an integral part of DGAs’ experimental results, but it is often ignored by the authors in favor of algorithm-level contributions.”
In this video from ISC 2016, Dr. Eng Lim Goh from SGI discusses the latest trends in high performance data analytics and machine learning. “Dr. Eng Lim Goh joined SGI in 1989, becoming a chief engineer in 1998 and then chief technology officer in 2000. He oversees technical computing programs with the goal to develop the next generation computer architecture for the new many-core era. His current research interest is in the progression from data intensive computing to analytics, machine learning, artificial specific to general intelligence and autonomous systems. Since joining SGI, he has continued his studies in human perception for user interfaces and virtual and augmented reality.”
OCF in the UK reports that the company continues to expand its operations. The high performance computing integrator is recruiting a number of new staff to meet the growing appetite and demand for HPC and data analytics solutions across universities, research institutes and commercial businesses in the UK.
“We live in an era in which the creation of new data is growing exponentially such that every two days we create as much new data as we did from the beginning of mankind until the year 2003. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools to understand such large and often complex data. In this talk, I will present state-of-the-art visualization techniques, applied to important Big Data problems in science, engineering, and medicine.”
Today the Transaction Processing Performance Council (TPC) announced the immediate availability of the TPCx-BB benchmark. The benchmark is designed to measure the performance of Hadoop-based systems including MapReduce, Apache Hive, and the Apache Spark Machine Learning Library (MLlib).
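The MapReduce workloads such benchmarks exercise follow a simple two-phase pattern: a map step emits key/value pairs, and a reduce step aggregates them by key. The single-process sketch below is purely illustrative (it is not TPCx-BB code, and the function names are assumptions for demonstration); real Hadoop or Spark engines distribute these same phases across a cluster.

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit (word, 1) pairs, one per word occurrence.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Reducer: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    docs = ["big data big analytics", "data at scale"]
    # Word counts aggregated across all input "documents".
    print(reduce_phase(map_phase(docs)))
```

In a distributed engine, the shuffle between the two phases (routing all pairs with the same key to the same reducer) is where most of the I/O cost lies, which is exactly what big-data benchmarks stress.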
Today Mellanox announced the BlueField family of programmable processors for networking and storage applications. “As a networking offload co-processor, BlueField will complement the host processor by performing wire-speed packet processing in-line with the network I/O, freeing the host processor to deliver more virtual networking functions (VNFs),” said Linley Gwennap, principal analyst at the Linley Group. “Network offload results in better rack density, lower overall power consumption, and deterministic networking performance.”