Today Intel announced plans to acquire startup Nervana Systems as part of an effort to bolster the company’s artificial intelligence capabilities. “Nervana has a fully optimized software and hardware stack for deep learning,” said Intel’s Diane Bryant in a blog post. “Their IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI. We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry-standard frameworks.”
“The ExaFlash Platform is a historic achievement that will reshape the storage and data center industries,” said Thomas Isakovich, CEO and Founder of Nimbus Data. “It offers unprecedented scale (from terabytes to exabytes), record-smashing efficiency (95% lower power and 50x greater density than existing all-flash arrays), and a breakthrough price point (a fraction of the cost of existing all-flash arrays). ExaFlash brings the all-flash data center dream to reality and will help empower humankind’s innovation for decades to come.”
Today Bright Computing announced that the Electronics Research Institute (ERI) and Brightskies Technologies have chosen the full suite of Bright technology to manage their HPC, big data, and cloud infrastructure. “Using Bright Computing’s technologies, we were able to showcase how to provision a virtual HPC cluster or big data cluster over cloud as extensions to the existing cluster or on demand as per users’ requests,” said Dr. Khaled Elamrawi, President of Brightskies Technologies. “This was very powerful and clearly addressed the challenges that ERI was facing.”
Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR Infiniband driving to 200Gb/s or 25GB/s per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100GB/s.”
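Multi-rail LNet (available in Lustre 2.10 and later) lets a single node drive several network interfaces toward the filesystem at once, which is what makes aggregate rates beyond a single link’s 25GB/s possible. A minimal configuration sketch using the standard `lnetctl` tool, assuming an InfiniBand network named `o2ib0` and two host adapters `ib0`/`ib1` (the network and interface names are illustrative, not from the talk):

```shell
# Load and initialize LNet (requires root on a Lustre client/server node).
modprobe lnet
lnetctl lnet configure

# Attach two interfaces to the same LNet network; listing both after
# --if is what enables multi-rail traffic across them.
lnetctl net add --net o2ib0 --if ib0,ib1

# Verify the configured networks and interfaces.
lnetctl net show
```

Peers can likewise be declared with multiple NIDs (`lnetctl peer add`), allowing LNet to spread bulk data movement across all available rails.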
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they move beyond fixed, explicitly programmed rules to systems that are extensible and trainable. Recent advances in hardware and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in processing vast amounts of information.
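The “multi-layer” idea above can be made concrete with a minimal sketch: each layer is a linear transform followed by a nonlinearity, and stacking such layers is what makes a network “deep.” This toy two-layer forward pass in NumPy uses made-up sizes (4 inputs, 8 hidden units, 3 output classes) purely for illustration; real deep learning systems have many more layers and learn their weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity: without it, stacked layers collapse into one linear map.
    return np.maximum(x, 0.0)

def forward(x, W1, b1, W2, b2):
    hidden = relu(x @ W1 + b1)   # layer 1: learned feature detectors
    logits = hidden @ W2 + b2    # layer 2: class scores
    # Softmax turns scores into a probability distribution over classes.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Randomly initialized (untrained) weights with hypothetical sizes.
W1 = rng.normal(scale=0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3)); b2 = np.zeros(3)

probs = forward(rng.normal(size=(2, 4)), W1, b1, W2, b2)
print(probs.shape)        # (2, 3): one probability row per input example
print(probs.sum(axis=1))  # each row sums to 1
```

Training consists of nudging `W1, b1, W2, b2` by gradient descent on a large labeled data set, which is where the “intensive training techniques” in the paragraph above come in.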
In this CCTV video, Alberto Alonso describes his new supercomputer, Breogan, which he says could modernize Mexico’s antiquated stock market.
Since it was launched in February, the Breogan computer has generated $475,000 for his company, GACS. Its trading algorithm runs much faster than the systems used by Mexico’s stock exchange. “What makes the Breogan computer so unique is that it finds attractive opportunities in the market and it buys and sells automatically when it sees a trading opportunity.”
“So one of the things that we’ve really been very proud of, in terms of our progress, particularly in EMEA over the last 12 months, is that we’ve deployed a number of really significant systems. If you remember when we were back together at SC15 in Austin, one of the big pieces of news that we were very proud of was our presence in the top 10: four of those systems are powered by Seagate. Even more impressive is that 100% of the newest systems are powered by Seagate. And when you peel that layer back just a little bit further, three of those four systems actually come from Europe and the Middle East.”
This whitepaper is an excellent summary of how a next-generation platform can be built to bring a wide range of data to life, giving users the ability to act on it when needed. Organizations that must handle massive amounts of data, but struggle to make sense of it, should read this whitepaper.
Today’s supercomputers have significant power requirements that must be considered as part of their Total Cost of Ownership (TCO). In addition, efficient power management capabilities are critical to a sustained return on investment.
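A back-of-envelope calculation shows why power belongs in the TCO conversation. All of the numbers below are illustrative assumptions (not figures from any particular system): a 5 MW average draw, a PUE of 1.4 for cooling and facility overhead, $0.10/kWh electricity, and a five-year service life.

```python
# Illustrative power cost in a supercomputer's TCO; all inputs are assumptions.
power_mw = 5.0        # assumed average system draw, megawatts
pue = 1.4             # assumed power usage effectiveness (facility overhead)
price_per_kwh = 0.10  # assumed electricity price, USD per kWh
years = 5             # assumed service life

hours = years * 365 * 24                        # 43,800 hours
energy_kwh = power_mw * 1000 * pue * hours      # total facility energy
cost_usd = energy_kwh * price_per_kwh

print(f"{cost_usd / 1e6:.1f} million USD over {years} years")
# -> 30.7 million USD over 5 years
```

Under these assumptions, electricity alone costs on the order of $30M over the machine’s lifetime, which is why even modest gains from power management translate into meaningful return on investment.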
“Organizations that are currently employing high performance computing to advance their competitiveness and innovation in the global marketplace can highlight their compelling, novel real-world applications at SC16’s HPC Impact Showcase. The Showcase is designed to introduce attendees to the many ways that HPC matters in our world, through testimonials from companies large and small. Rather than a technical deep dive into how they are using or managing their HPC environments, their stories are meant to tell how their companies are adopting and embracing HPC and how it is improving their businesses; the Showcase is not meant for marketing presentations. Last year’s line-up included presentations on topics ranging from battling Ebola to design at Rolls-Royce. Whether you are new to HPC or a long-time professional, you are sure to learn something new and exciting in the HPC Impact Showcase.”