Today the NSF announced major awards totaling more than $5 million to support four regional Data Hubs organized by some of the top universities and Big Data researchers in the country.
In this RCE podcast, Brock Palen and Jeff Squyres discuss Conduit with Cyrus Harrison from LLNL. Conduit is an open source project from Lawrence Livermore that provides an intuitive model for describing hierarchical scientific data in C++, C, Fortran, and Python, and is used for in-core data coupling between packages, serialization, and I/O tasks.
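Conduit's central idea is that hierarchical data can be described and addressed with slash-delimited paths. The toy Python class below is an illustrative stand-in for that idea, not Conduit's actual API; the `Node` class, its methods, and the example paths are all assumptions made for the sketch:

```python
import json


class Node:
    """A toy stand-in (NOT Conduit's real API) for a hierarchical data
    node addressed by slash-delimited paths, illustrating the model
    Conduit uses to describe scientific data."""

    def __init__(self):
        self._children = {}
        self._value = None

    def __setitem__(self, path, value):
        # Walk/create the tree one path segment at a time.
        head, _, rest = path.partition("/")
        child = self._children.setdefault(head, Node())
        if rest:
            child[rest] = value
        else:
            child._value = value

    def __getitem__(self, path):
        head, _, rest = path.partition("/")
        child = self._children[head]
        return child[rest] if rest else child._value

    def to_json(self):
        # Simple serialization, echoing Conduit's use for I/O tasks.
        def unpack(node):
            if node._children:
                return {k: unpack(v) for k, v in node._children.items()}
            return node._value
        return json.dumps(unpack(self))


# Hypothetical field data, addressed by path:
mesh = Node()
mesh["fields/temperature/units"] = "K"
mesh["fields/temperature/values"] = [273.15, 300.0]
print(mesh["fields/temperature/units"])  # K
```

Path-based addressing like this is what lets two packages share an in-core tree of data without agreeing on a fixed struct layout in advance.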
Today SGI introduced the SGI UV 300RL for big data in-memory analytics. As a new model in the SGI UV server line certified and supported with Oracle Linux, the SGI UV 300RL provides up to 32 sockets and 24 terabytes of shared memory. The solution enables enterprises that have standardized on Intel-based servers to run Oracle Database In-Memory on a single system to help achieve real-time operations and accelerate data analytics at unprecedented scale.
Across industries, companies are beginning to watch the convergence of High Performance Computing (HPC) and Big Data. Many organizations in the Financial Services Industry (FSI) are running their financial simulations on business analytics systems, some on HPC clusters. But they have a growing problem: integrating analytics of unstructured data from sources like social media with their internal data. Learn how Lustre can help solve these challenges.
Researchers are calling for a coordinated national network of neurotechnology centers or “brain observatories.” As proposed in an Opinion paper published in the journal Neuron, the observatories would augment the BRAIN Initiative, which involves more than 100 laboratories in the United States and has already made progress in establishing large-scale neuroscience goals and developing shared tools.
Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from various sources—satellites, aircraft, field measurements, and various other programs.”
“In business and commercial computing, momentum towards cloud and big data has already built up to the point where it is unstoppable. In technical computing, the growth of the Internet of Things is pressing towards convergence of technologies, but obstacles remain, in that HPC and big data have evolved different hardware and software systems, while OpenStack, the open source cloud computing platform, does not work well with HPC.”
XSEDE is now accepting 2016 Research Allocation Requests for the Bridges supercomputer. Available starting in January 2016 at the Pittsburgh Supercomputing Center, Bridges represents a new concept in high performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users.
Today the San Diego Supercomputer Center (SDSC) announced significant upgrades to its cloud-based storage system, adding a new range of computing services designed to support scientific researchers, especially those with large data requirements that preclude commercial cloud use, or who require collaboration with cloud engineers to build cloud-based services.