NextIO is a company with a cool offering that lets you pool your network ports, disk drives, GPUs, and other I/O devices into a box connected via PCI Express to multiple servers. As the company says, this separates decisions about I/O from decisions about compute. A use case? Rather than adding GPUs to just a few servers in your small cluster, you could put them in a NextIO device that makes them available to all the servers. The point is not that all the servers would use them simultaneously (that would be bad), but rather that you wouldn’t have to decide a priori which servers get the GPUs, letting you avoid idle resources held “just in case” a GPU user showed up.
This week they announced a partnership with IBM that puts their gear into IBM clusters for users who can take advantage of it:
NextIO, a premier provider of next-generation I/O solutions, today announced it is working with IBM to offer customers integrated cluster solutions that incorporate NextIO technology, with availability in 2010. The solution will enable reconfigurable, on-demand GPU compute capabilities for IBM iDataPlex customers. The announcement was made at the Supercomputing 2009 show in Portland, Ore.
…The GPU virtualization solution will offer the ability for a single IBM iDataPlex™ server to access from one to eight double-wide GPUs or up to 16 single-wide GPUs in the appliance. Users can quickly and easily enable more or fewer GPU resources on demand, depending on application requirements. Each iDataPlex rack supports 10 GPU appliances, providing up to 160 GPUs and over 80 TFlops of compute processing per rack.
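The per-rack figures in the announcement are easy to sanity-check. A minimal back-of-the-envelope sketch, using only the numbers quoted above (16 single-wide GPUs per appliance, 10 appliances per rack, >80 TFlops per rack):

```python
# Capacity arithmetic from the NextIO/IBM announcement quoted above.
gpus_per_appliance = 16   # max single-wide GPUs per appliance (from the quote)
appliances_per_rack = 10  # appliances per iDataPlex rack (from the quote)

gpus_per_rack = gpus_per_appliance * appliances_per_rack
print(gpus_per_rack)  # 160, matching the "up to 160 GPUs" claim

# The ">80 TFlops" rack figure then implies roughly half a TFlop per GPU.
rack_tflops = 80
print(rack_tflops / gpus_per_rack)  # 0.5
```

Half a teraflop per GPU is in line with single-precision peak performance for GPUs of that (2009) generation, so the rack-level numbers hang together.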