From Sun today: Sun Microsystems, Inc. … today announced it has acquired Q-layer, a cloud computing company that automates the deployment and management of both public and private clouds. The Q-layer organization, based in Belgium, will become part of Sun’s Cloud Computing business unit, which develops and integrates cloud computing technologies, architectures and services. Recall that in […]
Financial Engineering Summit next week
Today is becoming “event day” here at insideHPC. I had an email earlier this week about an upcoming computational finance event in New York hosted by NVIDIA and the NYU Courant School of Mathematics. Here are some quick details: The Financial Engineering Summit, covering “Derivatives, Operator Methods and GPU Computing,” will take place Jan. 12-14, […]
VTDC 2009
Just announced on the Beowulf list are the details for the third annual International Workshop on Virtualization Technologies in Distributed Computing [VTDC09]. The workshop will be held in conjunction with the International Conference on Autonomic Computing and Communications. Paper submissions are due by February 20th, 2009 on topics such as infrastructure as a service [IaaS], […]
Microsoft's Pay-As-You-Go Computing Policy
On Christmas Day, Microsoft published the details of a recent patent application geared toward pay-as-you-go computing. The abstract details a situation where the supply chain heavily subsidizes the initial cost of computing equipment and, in turn, charges for time and the performance of the machine. Microsoft notes that the end user could possibly end up paying […]
DCMA Deploys Parabon Grid Platform on 10k Computers
The Defense Contract Management Agency [DCMA] has set up a licensing deal with Parabon Computation to deploy its Frontier Grid Platform on 10,000 computers. DCMA is responsible for more than 300,000 active contracts within the Department of Defense. The first test of the deployment will be using Parabon Crush, a statistical modeling application that will allow […]
Fran Berman on Managing the Data Deluge
Dr. Fran Berman, director of the San Diego Supercomputer Center [SDSC], wrote an interesting article for the December 2008 issue of Communications of the ACM, the monthly magazine of the Association for Computing Machinery. The article sets out a simple guide for managing what has become known as the “data deluge.” The ‘free rider’ solution […]
UniCloud offers toolset to access Amazon's pay-per-cycle model
Found at HPCwire: UniCloud enables organizations to provision and scale HPC capacity on the proven computing environment of Amazon Web Services, expanding baseline computing resources through the dynamic provisioning of capacity to meet peak demand. An extension to Univa UD’s leading UniCluster and Grid MP products, UniCloud allows organizations to establish workload policies and requirements […]
Lloyds invests in HPC, Windows inside
Management Consultancy, a UK-based business management website, posted a short article last week on a recent move by Lloyds TSB to invest in HPC to improve its risk and valuation practices. The introduction of HPC to support the bank’s global derivatives infrastructure has been accelerated as it faced increasing pressure to simulate a growing number […]
HP transforms 85 datacenters into 6, chucks 4,000 legacy apps, saves $1B
So, this isn’t really an HPC article, and I’ll keep it short. It’s the sheer scale of the accomplishment that interests me. Timothy Prickett Morgan writes at The Register about the success of HP’s new CIO and his IT transformation project: When Mott took over as HP’s CIO, the company had 85 data centers, and […]
Carr on Mathematica's HPC integration
We pointed to this back in early November when it was announced. Nick Carr’s article provides a few more details on how the integration between Mathematica and Amazon’s compute resources works: The workflow is very simple to understand and it takes very few clicks to deploy your code in the cloud. A […]