“At the previous GTC, Murex showed how the company had adapted its generic Monte Carlo and PDE codes to be compatible with a payoff language. With one more year of experience with GPUs and OpenCL, Murex will show how it has broadened its use of GPUs to other areas such as vanilla screening and model calibration, and will focus on its new challenge: using as many GPUs as possible for a single computation.”
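To make that multi-GPU challenge concrete, here is a minimal sketch, using the standard OpenCL host API in C, of how one Monte Carlo estimate might be split across every available GPU. The toy xorshift kernel, the pi-estimation payoff, the sample counts, and the sequential per-device loop are all illustrative assumptions, not Murex's actual payoff code.

```c
/*
 * Hypothetical sketch only: splitting one Monte Carlo estimate across
 * all available GPUs with OpenCL. The kernel, sample counts, and
 * host-side reduction are illustrative; error handling is omitted.
 */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Toy kernel: each work-item draws one (x, y) point with a xorshift
 * PRNG and reports whether it lands inside the unit quarter circle. */
static const char *kSrc =
    "__kernel void mc(__global float *out, uint seed) {\n"
    "    uint s = seed ^ ((uint)get_global_id(0) * 2654435761u);\n"
    "    s ^= s << 13; s ^= s >> 17; s ^= s << 5;\n"
    "    float x = (float)(s & 0xFFFFFFu) / 16777216.0f;\n"
    "    s ^= s << 13; s ^= s >> 17; s ^= s << 5;\n"
    "    float y = (float)(s & 0xFFFFFFu) / 16777216.0f;\n"
    "    out[get_global_id(0)] = (x * x + y * y <= 1.0f) ? 1.0f : 0.0f;\n"
    "}\n";

int main(void) {
    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);

    cl_device_id devs[8];
    cl_uint ndev = 0;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 8, devs, &ndev);
    if (ndev == 0) { fprintf(stderr, "no GPUs found\n"); return 1; }
    printf("splitting one estimate across %u GPU(s)\n", ndev);

    const size_t per_dev = 1 << 20;  /* samples assigned to each GPU */
    double hits = 0.0;

    /* For clarity each device is driven sequentially; a real multi-GPU
     * code would enqueue on all devices first and overlap the work. */
    for (cl_uint d = 0; d < ndev; d++) {
        cl_context ctx = clCreateContext(NULL, 1, &devs[d], NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, devs[d], 0, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
        clBuildProgram(prog, 1, &devs[d], "", NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "mc", NULL);

        cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                    per_dev * sizeof(float), NULL, NULL);
        cl_uint seed = 1234u + d;  /* distinct sample stream per device */
        clSetKernelArg(k, 0, sizeof(cl_mem), &out);
        clSetKernelArg(k, 1, sizeof(cl_uint), &seed);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &per_dev, NULL, 0, NULL, NULL);

        float *host = malloc(per_dev * sizeof(float));
        clEnqueueReadBuffer(q, out, CL_TRUE, 0, per_dev * sizeof(float),
                            host, 0, NULL, NULL);
        for (size_t i = 0; i < per_dev; i++) hits += host[i];

        free(host);
        clReleaseMemObject(out);
        clReleaseKernel(k);
        clReleaseProgram(prog);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
    }

    /* Host-side reduction over the partial results from every GPU. */
    printf("pi ~= %f\n", 4.0 * hits / (double)(per_dev * ndev));
    return 0;
}
```

Each device gets a distinct seed so the per-GPU sample streams differ; a production version would enqueue kernels on all devices concurrently and use events rather than blocking reads, and would replace the toy xorshift with a proper counter-based RNG.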
“One of the hottest topics we see is remote visualization for post-processing simulation results. A big issue in traditional technical and scientific computing workflows is the transfer of large amounts of data from where they were created to where they are analyzed. Streamlining this workflow by processing the data where they were created in the first place directly shortens the wall-clock time it takes end users to get final results. At the same time, hardware utilization is greatly improved by using innovative technology for remote 3D visualization. To this end, we entered into a strategic partnership with NICE some time ago.”
“Terascala’s intelligent operating system, TeraOS, simplifies the management of Lustre®-based storage and optimizes workflows, providing the high-throughput storage HPC users need to solve bigger problems faster. For HPC users, this means that Terascala-powered storage appliances can reduce run times from days or weeks to hours.”
Many initially thought that liquid and servers should never mix. But what if server cooling is done in a completely controlled and secure environment? Liquid submersion cooling has the potential to revolutionize the design, construction, and energy consumption of data centers around the world.
“The main topics for our April 7-9 meeting in Santa Fe are industrial partnerships with large HPC centers and how they’re working, with perspectives from the U.S., France and the UK. We’ll also take another hard look at what’s happening with processors, coprocessors and accelerators, and at potential disruptive technologies, as well as zero in on the HPC storage market and trends and the CORAL procurement that involves Oak Ridge, Argonne and Livermore.”
In this whitepaper from Adaptive Computing, we learn about the new concept of Big Workflow and how it addresses the needs of critical, data-intensive applications. By building more intelligence into data control, Big Workflow gives big data, HPC, and cloud environments a way to interoperate, and to do so dynamically based on which applications are running.