In this Chip Chat podcast, Diane Bryant, EVP/GM for the Data Center Group at Intel, discusses how the company is driving the future of artificial intelligence by delivering breakthrough performance from best-in-class silicon, democratizing access to technology, and fostering beneficial uses of AI. Bryant also outlines her vision for AI’s ability to fundamentally transform the way businesses operate and people engage with the world. In a blog post, Intel CEO Brian Krzanich said: “Intel is uniquely capable of enabling and accelerating the promise of AI. Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society.”
Applications such as machine learning and deep learning require incredible compute power, and they are becoming more crucial to daily life. These applications help provide artificial intelligence for self-driving cars, climate prediction, and drugs that treat today’s worst diseases, along with solutions to more of our world’s most important challenges. There are many ways to increase compute power, but one of the most straightforward is to use the most powerful GPUs.
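To give a rough sense of why deep learning demands so much compute, a back-of-the-envelope sketch follows. The formula and the layer sizes are illustrative assumptions, not figures from the article: a forward pass through one dense layer, multiplying an (n × k) input batch by a (k × m) weight matrix, costs about 2·n·k·m floating-point operations.

```python
def dense_layer_flops(batch: int, in_features: int, out_features: int) -> int:
    """Approximate FLOPs for one dense-layer forward pass:
    each of batch*out_features outputs needs in_features
    multiplies and in_features adds, i.e. 2*n*k*m total."""
    return 2 * batch * in_features * out_features

# Hypothetical example: a batch of 64 through a 4096 -> 4096 layer
flops = dense_layer_flops(64, 4096, 4096)
print(flops)  # about 2.1 billion FLOPs for a single layer
```

Multiply that by hundreds of layers, millions of training steps, and many passes over the data, and the total quickly reaches exaFLOP scale, which is why highly parallel hardware such as GPUs is the easy lever to pull.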
In this Intel Chip Chat, Dr. Figen Ulgen from Intel discusses artificial intelligence workloads that are emerging as a result of greater access to high performance computing. Noting that “wherever there is computational complexity, HPC can help,” Dr. Ulgen talks about the ways that technologies like voice recognition and natural language processing are growing more sophisticated as compute power increases. Dr. Ulgen also highlights Intel’s work on the OpenHPC-based Intel HPC Orchestrator, which promises to be an important step forward in making HPC accessible to a broader array of customers.
A survey conducted by insideHPC and Gabriel Consulting in Q4 of 2015 indicated that nearly 45% of HPC and large enterprise customers would spend more on system interconnects and I/O in 2016, with 40% maintaining spending at the same level as the prior year. In manufacturing, the largest subset at roughly one third of respondents, over 60% planned to spend more and almost 30% planned to maintain the same level of spending going into 2016, underscoring the critical value of high-performance interconnects.
The HPC Advisory Council Stanford Conference has issued its Call for Papers and Presentations. The event takes place Feb. 7-8 in Palo Alto, CA. “We invite submissions introducing a wide range of topics, levels and considerations in HPC architectures, applications and usage – from fundamentals to the latest advances and hot topic areas. Submissions can be proposed as papers or presentation only (without papers).”
Today Mellanox announced that iFLYTEK, one of China’s leading intelligent speech and language technology companies, has chosen Mellanox’s end-to-end 25G and 100G Ethernet solutions based on ConnectX adapters and Spectrum switches for its next-generation machine learning center. The partnership between Mellanox and iFLYTEK will enable iFLYTEK to achieve a speech recognition rate of 97 percent.
The International Conference on Massive Storage Systems and Technology (MSST 2017) has issued its Call for Participation. The event takes place May 15-19, 2017 in Santa Clara, California. “MSST 2017 will dedicate five days to computer-storage technology, including a day of tutorials, two days of invited papers, two days of peer-reviewed research papers, and a vendor exposition.”
In this video, Bill Mannel, VP & GM, High-Performance Computing and Big Data at HPE, and Dr. Eng Lim Goh, SVP & CTO of SGI, join Dave Vellante and Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”
“New Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training. Along with the new hardware offerings, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, and new, optimized deep learning frameworks on AMD’s ROCm software to build the foundation of the next evolution of machine intelligence workloads.”
In this podcast, the Radio Free HPC team looks at the future of Operating Systems in the new world of computing. In a world that seems to be moving to the cloud and microservices, what will happen to the monolithic OS we have come to know and love?