If the current set of Presidential candidates has you down, the Watson for President Foundation may just have an answer for you. As an independent organization not affiliated with Watson’s creator, IBM, the foundation contends that the artificial intelligence technology that won Jeopardy! would be well-suited to be the leader of the free world.
IDC has published the agenda for its next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. “Don’t miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you’ll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics.”
Today Atos announced that the French CEA and its industrial partners at the Centre for Computing Research and Technology (CCRT) have invested in a new 1.4 petaflop Bull supercomputer. “Three times more powerful than the current computer at CCRT, the new system will be installed in the CEA’s Very Large Computing Centre in Bruyères-le-Châtel, France, in mid-2016 to cover expanding industrial needs. Named COBALT, the new Intel Xeon-based supercomputer will be powered by over 32,000 compute cores, with a storage capacity of 2.5 Petabytes and a throughput of 60 GB/s.”
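As a quick sanity check on the published COBALT figures, the per-core performance and the time needed to stream the full storage capacity can be derived directly from the announcement. The sketch below assumes decimal units (1 PFLOPS = 10^15 FLOPS, 1 PB = 10^15 bytes); the variable names are illustrative, not from the announcement.

```python
# Published COBALT specs: 1.4 PFLOPS peak, >32,000 cores,
# 2.5 PB storage, 60 GB/s throughput (decimal units assumed).
peak_flops = 1.4e15
cores = 32_000
storage_bytes = 2.5e15
throughput_bytes_per_s = 60e9

# Peak floating-point rate per core, in GFLOPS.
gflops_per_core = peak_flops / cores / 1e9

# Hours needed to read (or write) the entire storage at full throughput.
hours_full_storage = storage_bytes / throughput_bytes_per_s / 3600

print(f"{gflops_per_core:.2f} GFLOPS per core")      # ~43.75
print(f"{hours_full_storage:.1f} h to stream 2.5 PB")  # ~11.6
```

Roughly 44 GFLOPS per core is consistent with a contemporary Intel Xeon part, which suggests the headline numbers are internally plausible.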
The IBM Blue Gene/Q supercomputer Mira, housed at Argonne National Laboratory’s Argonne Leadership Computing Facility (ALCF), is delivering new insights into the physics behind nuclear fusion, helping researchers develop a new understanding of electron behavior in edge plasma – a critical step toward creating an efficient fusion reaction.
Sugon is one of the top HPC vendors in China. With plans to expand operations in the West, the company is once again sponsoring the ISC 2016 conference. “Sugon, formerly known as Dawning, has its roots in the Institute of Computing Technology of the Chinese Academy of Sciences (ICT), and was the first (and is now the largest) local HPC vendor in China. Since 1990, Sugon has been working on High Performance Computing, producing seven generations of HPC systems, from Dawning I through Dawning 1000 to 6000. We have successfully supported more than 10,000 HPC projects.”
The NSF has awarded $300K to NCSA to examine effective practices in industrial HPC. Led by Principal Investigator Merle Giles, the project will identify, document, and analyze effective practices in establishing public-private partnerships between High Performance Computing (HPC) centers and industry. Working with the market analysis firm IDC, the project will conduct a worldwide in-depth survey of 70-80 example partnerships involving HPC centers of various sizes, in the US and elsewhere, that have partnered with the private sector.
Today a European consortium announced a step toward Exascale computing with the ExaNeSt project. Funded by the Horizon 2020 initiative, ExaNeSt plans to build its first straw man prototype in 2016. The consortium consists of twelve partners, each of which has expertise in a core technology needed for the innovations required to reach Exascale. ExaNeSt takes the sensible, integrated approach of co-designing the hardware and software, enabling the prototype to run real-life evaluations so that it can scale and mature through this decade and beyond.
Today Bright Computing announced it has been awarded a grant of more than 1.5 million Euros by the European Commission under its Horizon 2020 program. The grant will be used for the Bright Beyond HPC program, which focuses on enhancing and scaling Bright’s industry-leading management platform for advanced IT infrastructure, including high performance computing clusters, big data clusters, and OpenStack-based private clouds.
The U.S. Department of Energy has awarded a total of 80 million processor hours on the Titan supercomputer to an astrophysical project based at the DOE’s Princeton Plasma Physics Laboratory (PPPL). The grants will enable researchers to study the dynamics of magnetic fields in the high-energy-density plasmas that lasers create. Such plasmas can closely approximate those that occur in some astrophysical objects.
“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technology and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, and networking – all of which must fit into the available space. However, in most cases it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available technology, effectively locking an HPC center into a single technology, or restricting the adoption of new architectures because of the added complexity associated with code modernization, or with porting existing codes to new technology platforms.”