The NSF has awarded $5 million to a team of Indiana University Bloomington computer scientists working to improve how researchers across the sciences harness big data to solve problems.
This Week in HPC: Supercomputing Future Uncertain for NSF, and Cray and SGI Unveil Big Data Appliances
“New data sources are catalyzing new applications and services, changing the way that citizens can interact with the built environment, city government, and one another. Charlie Catlett is a Senior Computer Scientist at Argonne National Laboratory and a Senior Fellow at the Computation Institute, a joint initiative of Argonne and the University of Chicago. Within the Computation Institute, he is Director of the Urban Center for Computation and Data. Charlie will talk about how he and his colleagues are using high-performance computing, data analytics, and embedded systems to better understand and design cities.”
GPUdb is a scalable, distributed database with SQL-style query capability for storing and analyzing Big Data. Developers use the GPUdb API to add data and query it with operations such as select, group by, and join, and GPUdb includes many operations not available in other “cloud database” offerings. GPUdb applies a new (patented) database design concept that emphasizes leveraging the growing trend toward many-core devices. By building GPUdb from the ground up around this concept, we are able to provide a system that merges the query needs of the traditional relational database developer with the scalability demands of the modern cloud-centric enterprise.
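GPUdb's own API is not shown in the announcement, so the sketch below is a hypothetical illustration using Python's built-in sqlite3 module to demonstrate the same SQL-style operations the description names: select, group by, and join. The table and column names are invented for the example.

```python
import sqlite3

# Hypothetical illustration: GPUdb's actual API is not shown in the text,
# so this uses Python's sqlite3 to demonstrate select, group by, and join.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two small tables standing in for distributed data sets.
cur.execute("CREATE TABLE events (user_id INTEGER, bytes INTEGER)")
cur.execute("CREATE TABLE users (user_id INTEGER, region TEXT)")
cur.executemany("INSERT INTO events VALUES (?, ?)",
                [(1, 100), (1, 250), (2, 75)])
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "us-east"), (2, "eu-west")])

# Join the tables, then group and aggregate -- the kind of query a
# GPUdb-style engine would fan out across many-core hardware.
cur.execute("""
    SELECT u.region, SUM(e.bytes) AS total_bytes
    FROM events e
    JOIN users u ON e.user_id = u.user_id
    GROUP BY u.region
    ORDER BY u.region
""")
print(cur.fetchall())  # [('eu-west', 75), ('us-east', 350)]
```

In a system like GPUdb, the same declarative query would be distributed across many-core devices rather than executed on a single node, but the developer-facing operations remain the familiar relational ones.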
“The Cray Urika-XA system provides customers with the benefits of a turnkey analytics appliance combined with a flexible, open platform that can be modified for future analytics workloads. Designed for customers with mission-critical analytics challenges, the Cray Urika-XA system reduces the analytics footprint and total cost of ownership with a single platform that consolidates a wide range of analytics workloads.”
Last year at the SC13 conference, Micron announced its Automata Processor, a programmable silicon device capable of performing high-speed, comprehensive search and analysis of complex, unstructured data streams. Today, Micron Technology announced the availability of the software development kit (SDK) for the Automata Processor.
Today IBM introduced a new series of GPU-accelerated systems capable of handling massive amounts of computational data faster and at nearly 20 percent better price-performance than comparable Intel-based systems – providing clients a superior alternative to closed, commodity-based data center servers. The vastness of Big Data—of the 2.5 quintillion bytes of data generated on […]
Both large-scale environments and scale-out workloads (such as Big Data) are becoming more important in the enterprise. In fact, with the rise of Big Data, the advent of affordable, powerful clusters, and strategies that take advantage of commodity systems for scale-out applications, these days the enterprise computing environment is looking a lot like HPC.
“High Performance Computing allows scientists and engineers to solve complex science, engineering, and business problems using applications that require high bandwidth, low latency networking, and very high compute capabilities. AWS allows you to increase the speed of research by running HPC in the cloud and to reduce costs by providing Cluster Compute or Cluster GPU servers on-demand without large capital investments.”
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Fortissimo Foundation from A3Cube, a clustered, pervasive, global direct-remote I/O access system. For more details, check out our A3Cube Slidecast over at insideBIGDATA. After that, they look at PayPal’s use of TI Keystone DSP processors for systems intelligence. By analyzing its chaotic real-time server data with these DSPs in HP Moonshot servers, PayPal is extracting organized, intelligent results in real time with extreme energy efficiency.