Today SGI announced the deployment of its largest SGI UV 300 supercomputer to date at The Genome Analysis Centre (TGAC) in the UK. As one of the largest Intel SSD for PCIe deployments worldwide, TGAC's new supercomputing platform gives the research institute access to the next generation of SGI UV technology for genomics. This will enable TGAC researchers to store, categorize, and analyze more genomic data in less time as they decode living systems and answer crucial biological questions. "The combination of processor performance, memory capacity and one of the largest deployments of Intel SSD storage worldwide makes this a truly powerful computing platform for the life sciences."
The TERATEC Forum has posted the agenda for its 11th annual meeting. The event takes place June 28-29 in Palaiseau, France. "TERATEC brings together top international experts in high performance numerical design, simulation and Big Data, making it the major event in France and in Europe in this domain."
Today the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory (SIGACT) and the European Association for Theoretical Computer Science (EATCS) announced that Stephen Brookes and Peter W. O’Hearn are the recipients of the 2016 Gödel Prize for their invention of Concurrent Separation Logic.
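For context, the centerpiece of Concurrent Separation Logic is its parallel composition rule, which lets two threads be verified independently whenever their preconditions describe disjoint pieces of the heap. The sketch below is the standard textbook form of the rule, not quoted from the award announcement:

\[
\frac{\{P_1\}\; C_1 \;\{Q_1\} \qquad \{P_2\}\; C_2 \;\{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2 \;\{Q_1 * Q_2\}}
\]

The separating conjunction $*$ asserts that the heap splits into disjoint parts satisfying $P_1$ and $P_2$ respectively, so neither thread can interfere with the state the other's proof depends on (with the usual side condition that neither command modifies variables appearing free in the other's assertions).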
There is still time to take advantage of Early Bird registration rates for ISC 2016. You can save over 45 percent on the on-site registration rate if you sign up by May 11. "ISC 2016 takes place June 19-23 in Frankfurt, Germany. With an expected attendance of 3,000 participants from around the world, ISC will also host 146 exhibitors from industry and academia."
In this video from the 2016 MSST Conference, Ian Corner from CSIRO in Australia presents: A Journey to a Holistic Framework for Data-intensive Workflows. “At CSIRO, we are Australia’s national science organization and one of the largest and most diverse scientific research organizations in the world. Our research focuses on the biggest challenges facing the nation. We also manage national research infrastructure and collections.”
Parallel file systems have become the norm for HPC environments. While typically used in high-end simulation, these parallel file systems can also greatly affect performance, and thus the customer experience, when running analytics from leading vendors such as SAS. This whitepaper is an excellent summary of how parallel file systems can enhance the workflow and the insight that SAS Analytics delivers; the sketch below illustrates the access pattern involved.
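To make that access pattern concrete, here is a minimal sketch, in Python with mpi4py, of the workload a parallel file system serves well: every rank of a job reads its own slice of one shared file concurrently, instead of funneling all I/O through a single process. The file name and equal-slice layout are illustrative assumptions, not taken from the whitepaper.

```python
# Sketch: concurrent reads of disjoint slices of a shared file via
# MPI-IO (mpi4py). On a parallel file system, each rank's read can
# proceed in parallel against striped storage. "data.bin" and the
# equal-slice layout are illustrative assumptions.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nranks = comm.Get_size()

fh = MPI.File.Open(comm, "data.bin", MPI.MODE_RDONLY)

total = fh.Get_size()            # file size in bytes
slice_len = total // nranks      # equal share per rank (remainder ignored here)
offset = rank * slice_len
buf = bytearray(slice_len)

fh.Read_at(offset, buf)          # independent read at this rank's own offset
fh.Close()

print(f"rank {rank}: read {len(buf)} bytes at offset {offset}")
```

Run under, e.g., `mpiexec -n 4 python read_slices.py`; a parallel file system can service all of these reads concurrently rather than serializing them, which is where the analytics speedup comes from.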
Mark Seamans from SGI presented this talk at the HPC User Forum in Tucson. “As the trusted leader in high performance computing, SGI helps companies find answers to the world’s biggest challenges. Our commitment to innovation is unwavering and focused on delivering market leading solutions in Technical Computing, Big Data Analytics, and Petascale Storage. Our solutions provide unmatched performance, scalability and efficiency for a broad range of customers.”
“HPE Persistent Memory products deliver the performance of memory with the persistence of traditional storage. The HPE 8GB NVDIMM Module is the first offering in the HPE Persistent Memory product category. Customers are looking for offerings that enable faster business decisions and the HPE Persistent Memory portfolio delivers outstanding performance to put data to work more quickly in your business. The HPE 8GB NVDIMM Module has the resiliency you have come to expect from storage technology by utilizing higher endurance DRAM and components that help verify data is moved to non-volatile technology in the event of a power loss.”
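The programming pattern this enables is worth spelling out: data is updated with ordinary loads and stores into a mapped region, and a flush marks the point after which it may be treated as durable. The sketch below is a generic Python mmap analogy of that pattern, assuming a filesystem backed by the NVDIMM at a hypothetical mount point; it is not an HPE API.

```python
# Generic sketch of the store-then-flush pattern persistent memory
# enables. The path /mnt/pmem/log.bin is a hypothetical mount point
# for an NVDIMM-backed filesystem, not taken from the announcement.
import mmap
import os

PATH = "/mnt/pmem/log.bin"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)              # size the backing file

mm = mmap.mmap(fd, SIZE)            # byte-addressable view of the region
mm[0:5] = b"hello"                  # update in place with plain stores,
                                    # no write() system call per record

mm.flush()                          # push the update toward the persistence
                                    # domain; treat it as durable only after this

mm.close()
os.close(fd)
```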
In this video from the HPC User Forum in Tucson, Gary Paek from Intel presents: Intel’s Machine Learning Strategy. “Earlier this week, Intel announced the inception of the Intel Data Analytics Acceleration Library (Intel DAAL) open source project. Intel DAAL helps to speed up big data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modeling, validation, and decision making) in batch, online, and distributed processing modes of computation.”
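The batch, online, and distributed modes the library exposes are easy to illustrate. The sketch below computes the same column means two ways, batch (one pass over the full dataset) and online (folding one chunk at a time into running partial results); it is a generic NumPy illustration of the decomposition, not the Intel DAAL API.

```python
# Batch vs. online processing modes, illustrated with NumPy
# (a generic sketch, not the Intel DAAL API).
import numpy as np

rng = np.random.default_rng(0)
chunks = [rng.standard_normal((1000, 4)) for _ in range(10)]

# Batch mode: one pass over the entire dataset held in memory.
batch_mean = np.concatenate(chunks).mean(axis=0)

# Online mode: fold each chunk into running sufficient statistics,
# never holding more than one chunk in memory at a time.
count = 0
total = np.zeros(4)
for chunk in chunks:
    count += len(chunk)
    total += chunk.sum(axis=0)
online_mean = total / count

assert np.allclose(batch_mean, online_mean)
```

The same partial-result trick underlies the distributed mode: each node computes its local sums, and combining them yields the global result.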
In this video from the 2016 GPU Technology Conference, Rich Friedrich from Hewlett Packard Enterprise describes how the company makes it easier for data scientists to program GPUs. "In April, HPE announced a public, open-source version of the platform called the Cognitive Computing Toolkit. Instead of relying on the traditional CPUs that power most computers, the Toolkit runs on graphics processing units (GPUs), inexpensive chips designed for video game applications."