
NSF Funds Regional Data Hubs

Storm surge visualization of Hurricane Joaquin. Several of the Big Data Hubs will focus on natural hazards.

Today the NSF announced major awards totaling more than $5 million to support four regional Data Hubs organized by some of the top universities and Big Data researchers in the country.

Rogue Wave Software CodeDynamics Expands the Reach of Multithreaded Debugging


Today Rogue Wave Software announced CodeDynamics, the next generation of dynamic analysis for data-intensive commercial applications. CodeDynamics expands the reach of multithreaded debugging from the high performance computing environment into the commercial market.

RCE Podcast on the Conduit Model for Hierarchical Scientific Data

Cyrus D. Harrison, LLNL

In this RCE podcast, Brock Palen and Jeff Squyres discuss Conduit with Cyrus Harrison from LLNL. Conduit is an open source project from Lawrence Livermore that provides an intuitive model for describing hierarchical scientific data in C++, C, Fortran, and Python. It is used for data coupling between packages in-core, for serialization, and for I/O tasks.
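For readers curious what that "intuitive model" looks like in practice, here is a minimal sketch using Conduit's Python bindings. It assumes the conduit module from the LLNL project is built and importable and that numpy is available; method names follow the project's documented Node API but should be checked against the release you use.

```python
# Minimal sketch of Conduit's hierarchical Node model in Python.
# Assumes the "conduit" Python bindings from LLNL's Conduit project
# are installed; exact behavior may vary by release.
import numpy as np
import conduit

n = conduit.Node()

# Path-style keys create the nested hierarchy on assignment.
n["mesh/coords/x"] = np.array([0.0, 1.0, 2.0])
n["mesh/topology/type"] = "uniform"
n["state/cycle"] = 100

# Nodes render as human-readable, YAML-like text.
print(n)
```

The same tree description is what Conduit uses whether the data is being shared in-core between coupled packages or serialized to disk, which is the unifying idea discussed in the podcast.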

SGI UV 300RL Enables Real-time Analytics with Oracle Database In-Memory


Today SGI introduced the SGI UV 300RL for big data in-memory analytics. As a new model in the SGI UV server line certified and supported with Oracle Linux, the SGI UV 300RL provides up to 32 sockets and 24 terabytes of shared memory. The solution enables enterprises that have standardized on Intel-based servers to run Oracle Database In-Memory on a single system to help achieve real-time operations and accelerate data analytics at unprecedented scale.

Lustre Accelerates the Convergence of Big Data and HPC in Financial Services

HPC and Big Data convergence

Across industries, companies are beginning to watch the convergence of high-performance computing (HPC) and Big Data. Many organizations in the Financial Services Industry (FSI) are running their financial simulations on business analytics systems, and some on HPC clusters. But they have a growing problem: integrating analytics of unstructured data from sources like social media with their internal data. Learn how Lustre can help solve these challenges.

Researchers Propose “Brain Observatory” Neurotechnology Centers

X-ray scan of a human brain (3D image).

Researchers are calling for a coordinated national network of neurotechnology centers, or "brain observatories." As proposed in an Opinion paper published in the journal Neuron, the observatories would augment the BRAIN Initiative, which involves more than 100 laboratories in the United States and has already made progress in establishing large-scale neuroscience goals and developing shared tools.

Evolution of NASA Earth Science Data Systems in the Era of Big Data


Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from various sources—satellites, aircraft, field measurements, and various other programs.”

Scientific Cloud Computing Lags Behind the Enterprise


“In business and commercial computing, momentum towards cloud and big data has already built up to the point where it is unstoppable. In technical computing, the growth of the Internet of Things is pressing towards convergence of technologies, but obstacles remain, in that HPC and big data have evolved different hardware and software systems while OpenStack, the open source cloud computing platform, does not work well with HPC.”

Submit Your 2016 Research Allocation Requests for the Bridges Supercomputer


XSEDE is now accepting 2016 Research Allocation Requests for the Bridges supercomputer. Available starting in January 2016 at the Pittsburgh Supercomputing Center, Bridges represents a new concept in high performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users.

SDSC Steps up with Upgraded Cloud and Storage Services

The reliable and scalable architecture of the SDSC Cloud was designed for researchers and departments as a low-cost and efficient alternative to public cloud service providers. Image: Kevin Coakley, SDSC

Today the San Diego Supercomputer Center (SDSC) announced that it has made significant upgrades to its cloud-based storage system, adding a new range of computing services designed to support scientific researchers, especially those whose large data requirements preclude commercial cloud use or who require collaboration with cloud engineers to build cloud-based services.