GPUdb is a scalable, distributed database with SQL-style query capability, built for storing and analyzing Big Data. Developers using the GPUdb API add data and query it with operations such as select, group by, and join, and GPUdb includes many operations not available in other “cloud database” offerings. GPUdb applies a new, patented database design concept that emphasizes the growing trend toward many-core devices. By building GPUdb from the ground up around this concept, we are able to provide a system that merges the query needs of the traditional relational database developer with the scalability demands of the modern cloud-centric enterprise.
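The select, group-by, and join operations mentioned above can be illustrated with generic SQL. The sketch below runs that SQL through Python's built-in sqlite3 module; the table names and data are invented for illustration, and the syntax is plain SQL rather than GPUdb's actual API.

```python
import sqlite3

# Build an in-memory database with two small tables to illustrate the
# SQL-style operations described above: select, group by, and join.
# (Generic SQL via sqlite3 -- illustrative only, not GPUdb's actual API.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (sensor_id INTEGER, value REAL)")
conn.execute("CREATE TABLE sensors (sensor_id INTEGER, region TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 10.0), (1, 14.0), (2, 7.0)])
conn.executemany("INSERT INTO sensors VALUES (?, ?)",
                 [(1, "east"), (2, "west")])

# Select with a filter.
rows = conn.execute("SELECT value FROM events WHERE value > 8").fetchall()

# Group by with an aggregate, joined against a second table.
totals = conn.execute("""
    SELECT s.region, SUM(e.value)
    FROM events e JOIN sensors s ON e.sensor_id = s.sensor_id
    GROUP BY s.region
""").fetchall()
print(rows)
print(totals)
```

In a distributed system like the one described, the same logical operations would be executed in parallel across shards rather than in a single local engine.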
“The Cray Urika-XA system provides customers with the benefits of a turnkey analytics appliance combined with a flexible, open platform that can be modified for future analytics workloads. Designed for customers with mission-critical analytics challenges, the Cray Urika-XA system reduces the analytics footprint and total cost of ownership with a single platform consolidating a wide range of analytics workloads.”
Last year at the SC13 conference, Micron announced its Automata Processor, a programmable silicon device capable of performing high-speed, comprehensive search and analysis of complex, unstructured data streams. Today, Micron Technology announced the availability of the software development kit (SDK) for the Automata Processor.
Today IBM introduced a new series of GPU-accelerated systems capable of handling massive amounts of computational data faster and at nearly 20% better price-performance than comparable Intel-based systems, providing clients a superior alternative to closed, commodity-based data center servers. The vastness of Big Data—of the 2.5 quintillion bytes of data generated on […]
Both large-scale environments and scale-out workloads (such as Big Data) are becoming more important in the enterprise. In fact, with the rise of Big Data, the advent of affordable, powerful clusters, and strategies that take advantage of commodity systems for scale-out applications, these days the enterprise computing environment is looking a lot like HPC.
“High Performance Computing allows scientists and engineers to solve complex science, engineering, and business problems using applications that require high bandwidth, low latency networking, and very high compute capabilities. AWS allows you to increase the speed of research by running HPC in the cloud and to reduce costs by providing Cluster Compute or Cluster GPU servers on-demand without large capital investments.”
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Fortissimo Foundation from A3Cube, a clustered, pervasive, global direct-remote I/O access system. For more details, check out our A3Cube Slidecast over at insideBIGDATA. After that, they look at PayPal’s use of TI Keystone DSP processors for systems intelligence. By analyzing its chaotic real-time server data, PayPal is getting real-time, organized, intelligent results with extreme energy efficiency using HP’s Moonshot servers.
David Beer writes that the NFL plans to equip players on the field with radio-frequency identification (RFID) tags that will provide a flood of data for tracking and simulation. “HPC is also well suited to handle the different use cases that will arise from the different kinds of data analysis that people will want to run. For example, some people may well want to develop a simulation complete with graphics to represent what happened and to show different wrinkles on how the play might be run.”
“PayPal’s novel approach is to convert events represented in a plain text format into a numeric format which can be analyzed in real-time using mathematical techniques with hardware specifically designed to operate on such numeric data. The first instantiation of this approach uses ProLiant m800 cartridges powered by TI’s 66AK2Hx DSP processor.”
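The text-to-numeric conversion described in the quote can be sketched in a few lines. The Python illustration below is a hypothetical toy version of that idea (the hashing scheme and event format are my own assumptions, not PayPal's actual pipeline): each whitespace-delimited token of a plain-text event is hashed into a bucket of a fixed-length count vector, producing numeric data that DSP-style hardware could operate on directly.

```python
import hashlib

def event_to_vector(event, dims=8):
    """Hash a plain-text event line into a fixed-length numeric vector.

    Hypothetical illustration of the text-to-numeric idea described
    above -- not PayPal's actual method. Each whitespace token is
    hashed into one of `dims` buckets and counted, yielding a vector
    that numeric analysis hardware could process directly.
    """
    vec = [0.0] * dims
    for token in event.split():
        # md5 gives a stable hash, so results are reproducible across runs
        # (unlike Python's built-in hash(), which is salted per process).
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    return vec

v = event_to_vector("login user=alice status=ok latency=12ms")
print(v)
```

Once events are in this numeric form, streams of them can be compared, aggregated, and analyzed with ordinary vector arithmetic in real time.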