“Micron is delivering on the promise of persistent memory with a solution that gives system architects a new approach for designing systems with better performance, reduced energy usage and improved total cost of ownership,” said Tom Eby, vice president for Micron’s compute and networking business unit. “With NVDIMM, we have a powerful solution that is available today. We’re also leading the way on future persistent memory development by spearheading R&D efforts on promising new technologies such as 3D XPoint memory, which will be available in 2016 and beyond.”
“The artificial intelligence race is on,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the internet and cloud computing. Industries ranging from consumer cloud services to automotive and health care are being revolutionized as we speak. Machine learning is the grand computational challenge of our generation. We created the Tesla Hyperscale Accelerator line to give machine learning a 10X boost. The time and cost savings to data centers will be significant.”
Researchers at TACC have developed a new tool to analyze bigger datasets using supercomputers and machine learning algorithms. Called NOMAD, the tool can pull insights from data much faster than other current state-of-the-art tools. It can also handle datasets that break other leading software, including some of the largest datasets available.
Today Bright Computing announced plans to release several updates and enhancements to its most popular management software solutions at SC15, which takes place November 15-20, 2015, in Austin, Texas. The updates will include more than a dozen features that simplify OpenStack deployment, add built-in integration functionality, and greatly improve Big Data integration, including support for the latest releases from Apache, Cloudera, Hortonworks, and Pivotal.
“Deep neural networks are increasingly important for powering AI-based applications like speech recognition. Baidu’s research shows that adding GPUs to the data center makes deploying big deep neural networks practical at scale. Deep learning-based technologies benefit from batching user requests in the data center, which requires a different software architecture than traditional web applications.”
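The batching point in that quote is architectural rather than tied to any particular Baidu software. The following is a minimal Python sketch of what server-side request batching for GPU inference can look like: incoming requests are queued, gathered for a few milliseconds, and sent to the accelerator as one batch. All names here (InferenceServer, predict_batch, the batch-size and wait-time constants) are illustrative assumptions, not Baidu's or NVIDIA's actual APIs.

```python
# Sketch of request batching for GPU-backed DNN inference.
# Hypothetical names throughout; not any vendor's real API.
import queue
import threading
import time

MAX_BATCH = 32      # largest batch sent to the accelerator at once
MAX_WAIT_MS = 5     # how long to wait for more requests to accumulate

class InferenceServer:
    def __init__(self, model):
        self.model = model                 # assumed to expose predict_batch(list) -> list
        self.requests = queue.Queue()      # pending request slots
        threading.Thread(target=self._batch_loop, daemon=True).start()

    def submit(self, x):
        """Called once per user request; blocks until the batched result is ready."""
        slot = {"input": x, "event": threading.Event(), "output": None}
        self.requests.put(slot)
        slot["event"].wait()
        return slot["output"]

    def _batch_loop(self):
        while True:
            batch = [self.requests.get()]              # block for the first request
            deadline = time.time() + MAX_WAIT_MS / 1000
            # Gather more requests until the batch fills or the deadline passes.
            while len(batch) < MAX_BATCH:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            # One forward pass over the whole batch instead of per-request calls.
            outputs = self.model.predict_batch([s["input"] for s in batch])
            for slot, out in zip(batch, outputs):
                slot["output"] = out
                slot["event"].set()

if __name__ == "__main__":
    class DummyModel:
        def predict_batch(self, xs):
            return [x * 2 for x in xs]     # stand-in for a GPU forward pass

    server = InferenceServer(DummyModel())
    print(server.submit(21))               # -> 42
```

The design choice that distinguishes this from a traditional one-request-per-thread web stack is the MAX_WAIT_MS window: deliberately holding requests for a few milliseconds lets them accumulate into a single GPU batch, trading a small amount of latency for much higher throughput on the accelerator.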
In this special guest feature, Tom Wilkie from Scientific Computing World reports that the European Commission is funding research projects and centers of excellence as part of its strategy to coordinate European HPC efforts. In October, the EC made a series of announcements on how it is going to invest some of the €700 million allocated to its Public-Private Partnership on high performance computing.
Today SGI introduced the SGI UV 300RL for big data in-memory analytics. As a new model in the SGI UV server line certified and supported with Oracle Linux, the SGI UV 300RL provides up to 32 sockets and 24 terabytes of shared memory. The solution enables enterprises that have standardized on Intel-based servers to run Oracle Database In-Memory on a single system to help achieve real-time operations and accelerate data analytics at unprecedented scale.