
Mizuho Securities Speeds Derivatives 30x with Intel Xeon Phi


Mizuho Securities reports that it is the first financial institution to deploy the Intel Xeon Phi coprocessor in a production environment.

Interview: Xcelerit Partners with Intel to Accelerate Code in Supercomputing

Hicham Lahlou, Xcelerit

“Our core product is the Xcelerit SDK, a Software Development Kit that makes it easy for domain specialists (i.e. mathematicians in banks or geophysicists in energy exploration firms) to convert their existing code to take advantage of multi-core, GPU and other hardware accelerators.”
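The interview doesn't show the SDK's API, but the kind of conversion Lahlou describes can be sketched in plain C++17: a serial batch-pricing loop fanned out over all available CPU cores by switching to a parallel algorithm. The Black-Scholes pricer and the option batch below are hypothetical stand-ins, not Xcelerit code.

```cpp
// Minimal sketch of converting a serial pricing loop to multi-core.
// Hypothetical example; this is NOT the Xcelerit SDK API.
// (C++17; with GCC, link against TBB: -ltbb)
#include <algorithm>
#include <cmath>
#include <execution>
#include <iostream>
#include <vector>

struct Option { double spot, strike, rate, vol, expiry, price; };

// Standard normal CDF via the complementary error function.
static double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Black-Scholes price of a European call, written into the option record.
static void price_call(Option& o) {
    double d1 = (std::log(o.spot / o.strike) + (o.rate + 0.5 * o.vol * o.vol) * o.expiry)
                / (o.vol * std::sqrt(o.expiry));
    double d2 = d1 - o.vol * std::sqrt(o.expiry);
    o.price = o.spot * norm_cdf(d1) - o.strike * std::exp(-o.rate * o.expiry) * norm_cdf(d2);
}

int main() {
    std::vector<Option> book(1'000'000, Option{100.0, 105.0, 0.02, 0.2, 1.0, 0.0});

    // Serial version: std::for_each(book.begin(), book.end(), price_call);
    // Parallel version: changing one line fans the loop out over all cores.
    std::for_each(std::execution::par, book.begin(), book.end(), price_call);

    std::cout << "First price: " << book.front().price << "\n";
}
```

Retargeting the same loop to GPUs or other accelerators, without the domain specialist rewriting it by hand, is broadly the job an SDK like Xcelerit's takes on.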

Video: Parallelism or Paralysis for Analysis?

Robert Geva, Intel

“How can capital markets firms handle the computational challenges presented by regulatory mandates and big data? Chances are the solution will involve high-performance computing powered by parallelism, or the ability to leverage multiple hardware resources to run code simultaneously. But while hardware architectures have been moving in that direction for years, many firms’ software isn’t written to take advantage of multiple threads of execution.”

From GPU Computing Toward Full HPC In Finance with GPUs

Pierre Spatz, Murex

“At the previous GTC, Murex showed how the company had adapted its generic Monte-Carlo and PDE codes to be compatible with a payoff language. With one more year of experience with GPUs and OpenCL, Murex will show how it has broadened the use of GPUs to other areas, such as vanilla screening and model calibration, and will focus on its new challenge: using as many GPUs as possible for a single computation.”
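Murex's production code isn't public, but the pattern behind "as many GPUs as possible for a single computation" is a classic Monte-Carlo decomposition: partition the paths across devices, accumulate partial sums, then reduce. The sketch below illustrates that decomposition with CPU threads standing in for GPUs, a toy European-call payoff under geometric Brownian motion, and simple per-worker RNG seeds; none of it is Murex code.

```cpp
// Sketch: one Monte-Carlo estimate split across many workers, then reduced.
// CPU threads stand in here for the GPUs discussed in the talk.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

int main() {
    const double S0 = 100.0, K = 105.0, r = 0.02, vol = 0.2, T = 1.0;
    const long n_paths = 4'000'000;
    const unsigned n_workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> partial(n_workers, 0.0);   // one result slot per worker
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < n_workers; ++w) {
        pool.emplace_back([&, w] {
            std::mt19937_64 rng(12345 + w);        // distinct seed per worker (toy scheme)
            std::normal_distribution<double> z(0.0, 1.0);
            const long chunk = n_paths / n_workers;
            double sum = 0.0;
            for (long i = 0; i < chunk; ++i) {
                // Terminal spot under geometric Brownian motion.
                double ST = S0 * std::exp((r - 0.5 * vol * vol) * T
                                          + vol * std::sqrt(T) * z(rng));
                sum += std::max(ST - K, 0.0);      // European call payoff
            }
            partial[w] = sum;                      // no sharing: each worker owns its slot
        });
    }
    for (auto& t : pool) t.join();

    // Reduce the partial sums and discount the average payoff.
    double total = 0.0;
    for (double p : partial) total += p;
    const long simulated = (n_paths / n_workers) * n_workers;
    std::cout << "MC price: " << std::exp(-r * T) * total / simulated << "\n";
}
```

On real hardware the same structure holds, with each worker replaced by a kernel launch on its own GPU and the reduction done after the devices synchronize.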

How European HPC Shops Can Unleash True Performance


Your HPC environment is critical to helping you quantify data and correlate results. Find out how to unleash its true performance capabilities!

Interview: Transtec Solutions for HPC and Big Data Workflows


“One of the hottest topics we see is remote visualization for post-processing simulation results. One big issue in traditional workflows in technical and scientific computing is the transfer of large amounts of data from where they are created to where they are analyzed. Streamlining this workflow by processing the data where they were created in the first place goes a long way toward shortening the wall-clock time it takes end users to get final results. At the same time, hardware utilization is greatly enhanced by using innovative technology for remote 3D visualization. For this, we have long since entered into a strategic partnership with NICE.”

New Eurotech G-Station Deskside HPC system


Today Eurotech announced its new G-Station, a liquid-cooled departmental HPC system.

Interview: Terascala and High Performance Data Movement


“Terascala’s intelligent operating system, TeraOS, simplifies managing Lustre®-based storage and optimizes workflows, providing the high throughput storage HPC users need to solve bigger problems faster. For the HPC folks, this means that Terascala-powered storage appliances can reduce run times to hours instead of days or weeks.”

“The Decade of Sensing” Comes to Oil & Gas


Over at Rice University, Patrick Kurp writes that the recent Rice Oil & Gas HPC Workshop drew more than 500 attendees from industry and academia.

Featured White Paper: Submerged Servers in Your Data Center


Many initially thought that liquid and servers should never mix. But what if server cooling is done in a completely controlled and secure environment? Liquid submersion cooling has the potential to revolutionize the design, construction, and energy consumption of data centers around the world.