Registration opened today for the ISC 2016 conference, which takes place June 19-23 in Frankfurt. This year, the ISC 2016 conference program features an increased focus on Cloud, Machine Learning, and Robotics. In fact, insideHPC has learned that the bulk of the topics normally covered at the annual ISC Cloud conference have been absorbed into the ISC High Performance industry track. To learn more, we caught up with Wolfgang Gentzsch, a member of the ISC Steering Committee who has chaired the ISC Cloud event since its beginning.
Scientists have developed a process to deposit nano-lasers directly onto silicon chips, paving the way for fast and efficient data processing using silicon photonics. Physicists at the Technical University of Munich (TUM) have developed a nano-laser one thousand times thinner than a human hair. This process deposits the nano-wire lasers directly onto the chip, making it possible to produce high-performance, cost-effective photonic components.
The US Department of Commerce has released details of the President’s budget request for the National Institute of Standards and Technology (NIST) in 2017 – proposing to increase spending on HPC and future computing technologies by more than 50 per cent. The total discretionary request for NIST is $1 billion, a $50.5 million increase over the amount enacted for 2016. The funding supports NIST’s research in areas such as computing, advanced communications and manufacturing.
In this special guest feature from Scientific Computing World, Andrew Jones from NAG looks ahead at what 2016 has in store for HPC and finds people, not technology, to be the most important issue. “A disconcertingly large proportion of the software used in computational science and engineering today was written for friendlier and less complex technology. An explosion of attention is needed to drag software into a state where it can effectively deliver science using future HPC platforms.”
The fastest supercomputers are built with the fastest microprocessor chips, which in turn are built upon the fastest switching technology. But even the best semiconductors are reaching their limits as more is demanded of them. In the closing months of this year came news of several developments that could break through silicon’s performance barrier and herald an age of smaller, faster, lower-power chips. It is possible that they could be commercially viable in the next few years.
Researchers from Zhejiang University and Hangzhou Dianzi University in China have developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated by standard CMOS technology. “Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and building brain-computer interface systems by interfacing with animal or human brains.”
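To make the spike-based processing idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic building block of many spiking neural network models. The parameters and function name are illustrative assumptions for exposition only, not details of the Darwin NPU itself, which the paper does not specify here.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative parameters).

    input_current: sequence of input values, one per time step.
    Returns the list of time steps at which the neuron emitted a spike.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:
            # Threshold crossed: emit a spike and reset the potential.
            spikes.append(t)
            v = v_rest
    return spikes

# A constant drive yields a regular spike train whose rate encodes the
# input strength -- the kind of event-driven signal a neuromorphic
# co-processor operates on, rather than dense floating-point activations.
spikes = simulate_lif([0.08] * 100)
```

The key contrast with conventional neural network hardware is that information travels as sparse, timed events (the `spikes` list) rather than as continuous values, which is why such designs can interface naturally with biological spiking signals.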
In this video, Dr. Michael Karasick from IBM moderates a panel discussion on Machine Learning. “The success of cognitive computing will not be measured by Turing tests or a computer’s ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved.”
The EMiT 2016 Emerging Technologies Conference has issued its Call for Papers. Hosted by the Mont-Blanc project and the Barcelona Supercomputing Centre, the event takes place June 2-3, 2016 in Barcelona.
“To be successful in high-performance computing (HPC) today, it is no longer enough to sell good hardware: vendors need to develop an ‘ecosystem’ in which other hardware companies use their products and components; in which system administrators are familiar with their processors and architectures; and in which developers are trained and eager to write code both for the efficient use of the system and for end-user applications. No one company, not even Intel or IBM, can achieve all of this by itself anymore.”
In this video from SC15, Rich Brueckner from insideHPC moderates a panel discussion on the NSCI initiative. “As a coordinated research, development, and deployment strategy, NSCI will draw on the strengths of departments and agencies to move the Federal government into a position that sharpens, develops, and streamlines a wide range of new 21st century applications. It is designed to advance core technologies to solve difficult computational problems and foster increased use of the new capabilities in the public and private sectors.”