Today One Stop Systems announced the 4U Flash Storage Array with Mangstor MX6300 NVMe SSDs. OSS’ FSAe-4 can accommodate 32 MX6300 drives, providing up to 172TB of shared flash storage. The FSAe-4 is a fully redundant, hot-serviceable configuration with four independent 1U servers attached to the PCIe expansion chassis. The expansion system supports Ethernet (RoCE) or InfiniBand fabrics and network speeds up to 100Gb/s.
Today Intel announced plans to acquire startup Nervana Systems as part of an effort to bolster the company’s artificial intelligence capabilities. “Nervana has a fully optimized software and hardware stack for deep learning,” said Intel’s Diane Bryant in a blog post. “Their IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI. We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks.”
The recent introduction of new high-end processors from Intel, combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi, provides the raw ‘industry standard’ materials to cobble together a test platform suitable for small research projects and development. When combined with open source toolkits, some meaningful results can be achieved, but wide-scale enterprise deployment in production environments raises the infrastructure, software, and support requirements to a completely different level.
“The ExaFlash Platform is an historic achievement that will reshape the storage and data center industries,” said Thomas Isakovich, CEO and Founder of Nimbus Data. “It offers unprecedented scale (from terabytes to exabytes), record-smashing efficiency (95% lower power and 50x greater density than existing all-flash arrays), and a breakthrough price point (a fraction of the cost of existing all-flash arrays). ExaFlash brings the all-flash data center dream to reality and will help empower humankind’s innovation for decades to come.”
“Fujitsu Laboratories has newly developed parallelization technology to efficiently share data between machines, and applied it to Caffe, an open source deep learning framework widely used around the world. Fujitsu Laboratories evaluated the technology on AlexNet, where it was confirmed to have achieved learning speeds with 16 and 64 GPUs that are 14.7 and 27 times faster, respectively, than a single GPU. These are the world’s fastest processing speeds(2), representing an improvement in learning speeds of 46% for 16 GPUs and 71% for 64 GPUs.”
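The quoted speedups imply very different scaling behavior at 16 and 64 GPUs. As a rough sanity check, parallel efficiency (achieved speedup divided by the ideal linear speedup) can be computed directly from the announced figures; the helper below is purely illustrative and is not Fujitsu's code.

```python
# Parallel efficiency of the quoted Caffe/AlexNet results:
# 14.7x on 16 GPUs and 27x on 64 GPUs (figures from the announcement).
def parallel_efficiency(speedup: float, n_gpus: int) -> float:
    """Fraction of ideal linear scaling actually achieved."""
    return speedup / n_gpus

for speedup, n in [(14.7, 16), (27.0, 64)]:
    print(f"{n:>2} GPUs: {speedup:>4}x speedup -> "
          f"{parallel_efficiency(speedup, n):.1%} efficiency")
# 16 GPUs: ~91.9% of ideal; 64 GPUs: ~42.2% of ideal
```

The drop from roughly 92% to roughly 42% efficiency illustrates why inter-machine data sharing, the focus of Fujitsu's parallelization work, dominates at larger GPU counts.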
Today HPC cloud provider Nimbix announced a significant increase in its presence in the machine learning market as more customers use its JARVICE platform to address the need for an easier, more cost-efficient way of working with machine learning. “The Nimbix Cloud was a great choice for our research tasks in conversational AI. They are one of the first cloud services to provide NVIDIA Tesla K80 GPUs that were essential for computing neural networks that are implemented as part of Luka’s AI,” said Phil Dudchuck, Co-Founder at Luka.ai.
Today Netlist announced the first public demonstration of its HybriDIMM Storage Class Memory (SCM) product at the upcoming Flash Memory Summit. Using an industry standard DDR4 LRDIMM interface, HybriDIMM is the first SCM product to operate in current Intel x86 servers without BIOS and hardware changes, and the first unified DRAM-NAND solution that scales memory to terabyte storage capacities and accelerates storage to nanosecond memory speeds.
Is Machine Learning more of a Data Movement problem than a Processing problem? In this podcast, the Radio Free HPC team looks at use cases for Machine Learning where data locality is critical for performance. “Most of the Machine Learning stories we hear involve a central data repository. Henry says he is not hearing enough about how Machine Learning is going to deal with the problem of massive data streams from things like sensors. Such data, he contends, will have to be processed at the source.”
Today E8 Storage launched the storage industry’s first centralized, highly available rack-scale flash appliance based on Non-Volatile Memory Express (NVMe) drives. The E8-D24 is the first array that combines the high performance of NVMe drives, the high availability and reliability of centralized storage, and the high scalability of scale-out solutions.
“A quantum computer cannot be created just by trapping ions; it is necessary to move the information (the ions) between different locations in a trap, for example between calculation and storage regions. Our group has developed a method which allows us to confidently control the motion of individual ions and shuttle an ion to any position in an ion trap microchip. By developing traps that generate complex electrical fields, it is possible to push and pull the ions by varying the strength of these fields, making it possible to manipulate single ions around corners! Right now, we are in the process of developing full-scale architectures that contain all the necessary features for a full-scale quantum computer.”