Today D-Wave Systems announced the launch of Quantum for Quants, an online community designed specifically for quantitative analysts and other experts focused on complex problems in finance. Launched at the Global Derivatives Trading & Risk Management conference in Budapest, the online community will allow quantitative finance and quantum computing professionals to share ideas and insights regarding quantum technology and to explore its application to the finance industry. Through this community, financial industry experts will also be granted access to quantum computing software tools, simulators, and other resources and expertise to explore the best ways to tackle the most difficult computational problems in finance using entirely new techniques.
In this video from the 2016 OpenPOWER Summit, Stephen Bates of Microsemi presents: Enabling high-performance storage on OpenPOWER Systems. “Non-Volatile Memory (NVM), and the low latency access to storage it provides, is changing the compute stack. NVM Express is the de-facto protocol for communicating with local NVM attached over the PCIe interface. In this talk we will demonstrate performance data for extremely low-latency NVMe devices operating inside OpenPOWER systems. We will discuss the implications of this for applications like in-memory databases, analytics and cognitive computing. In addition we will present data on the emerging NVMe over Fabrics (NVMf) protocol running on OpenPOWER systems.”
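To make the latency discussion concrete, here is a minimal Python sketch of a random-read latency microbenchmark against a Linux NVMe block device. The device path is a placeholder assumption, and because these reads go through the page cache (no O_DIRECT), the numbers are a best-case approximation; serious measurements typically use a dedicated tool such as fio with direct I/O.

```python
import os
import random
import time

DEV = "/dev/nvme0n1"   # placeholder path; reading a raw device usually needs root
BLOCK = 4096           # read one 4 KiB block per sample
SAMPLES = 1000

fd = os.open(DEV, os.O_RDONLY)
capacity = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
num_blocks = capacity // BLOCK

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(num_blocks) * BLOCK
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)           # single random 4 KiB read
    latencies.append(time.perf_counter() - t0)
os.close(fd)

# Note: without O_DIRECT some reads may be served from the page cache,
# so treat these figures as a lower bound on true device latency.
latencies.sort()
print(f"median latency: {latencies[len(latencies) // 2] * 1e6:.1f} us")
print(f"p99 latency:    {latencies[int(len(latencies) * 0.99)] * 1e6:.1f} us")
```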
Intel has been working on a new design philosophy for HPC systems called Intel® Scalable System Framework (Intel® SSF), an approach designed to enable sustained, balanced performance in HPC as the community pushes towards the Exascale computing era. Central to Intel SSF performance is the Lustre* scalable, parallel file system (PFS). Intel® Enterprise Edition for Lustre software (Intel® EE for Lustre software) is the Intel distribution of the well-known PFS, which is used by the majority of the fastest supercomputers around the world.
Successful HPC depends on choosing the architecture that addresses both application and institutional needs. In particular, finding a simple path to leading-edge HPC and Data Analytics is not difficult if you consider the capabilities and limitations of various approaches to HPC performance, scaling, ease of use, and time to solution. Careful analysis and consideration of the following questions will help lead to a successful and cost-effective HPC solution. Here are three questions to ask to ensure HPC success.
In this video from the 2016 MSST Conference, Harriet Coverston from Versity presents: Versity – Archiving to Objects. “Introducing Versity Storage Manager – an enterprise-class storage virtualization and archiving system that runs on Linux. Offering comprehensive data management for tiered storage environments and the ability to preserve and protect your data forever. Maximum protection at a minimum cost. Versity supports nearly unlimited volumes of storage and offers the most robust archive policy engine on the market.”
There is still time to take advantage of Early Bird registration rates for ISC 2016. You can save over 45 percent compared to the on-site registration rate if you sign up by May 11. “ISC 2016 takes place June 19-23 in Frankfurt, Germany. With an expected attendance of 3,000 participants from around the world, ISC will also host 146 exhibitors from industry and academia.”
In this special guest feature from Scientific Computing World, Darren Watkins from Virtus Data Centres explains the importance of building a data centre from the ground up to support the requirements of HPC users – while maximizing productivity and efficiency and minimizing energy usage. “The reality for many IT users is they want to run analytics that – with the growth of data – have become too complex and time critical for normal enterprise servers to handle efficiently.”
Mark Seamans from SGI presented this talk at the HPC User Forum in Tucson. “As the trusted leader in high performance computing, SGI helps companies find answers to the world’s biggest challenges. Our commitment to innovation is unwavering and focused on delivering market leading solutions in Technical Computing, Big Data Analytics, and Petascale Storage. Our solutions provide unmatched performance, scalability and efficiency for a broad range of customers.”
Leo Reiter from Nimbix presented this deck at the HPC User Forum. “Nimbix is a pure high performance computing cloud built for volume, speed and simplicity. We give people the tools and the processing power to solve their biggest, toughest problems. We give you the freedom to imagine new possibilities, to test the limits of reality, and to model the future. For most workloads, Nimbix is far less expensive than building, running and maintaining your own supercomputer. It’s also more efficient at spinning up, executing, completing the job and delivering your results — which saves you time and money. And our user-friendly platform means you invest less in development and infrastructure.”
In this RCE Podcast, Marcel Kornacker from Cloudera describes the Impala project. Impala brings scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation. Impala is integrated with Hadoop to use the same file and data formats, metadata, security and resource management frameworks used by MapReduce, Apache Hive, Apache Pig and other Hadoop software.
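As a rough illustration of what low-latency SQL on Hadoop looks like from the client side, here is a minimal sketch using the impyla client library (pip install impyla). The host, port, and table names are illustrative placeholders, not details from the podcast.

```python
from impala.dbapi import connect

# Connect to an Impala daemon; 21050 is the default HiveServer2-protocol port.
conn = connect(host="impala-host.example.com", port=21050)
cur = conn.cursor()

# Impala executes the SQL in parallel across the cluster, reading the data
# in place in HDFS/HBase -- no ETL into a separate database first.
cur.execute("""
    SELECT product, SUM(amount) AS revenue
    FROM sales
    GROUP BY product
    ORDER BY revenue DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```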