In this Graybeards Podcast, Molly Rector from DDN describes how HPC storage technologies are mainstreaming into the enterprise space. “In HPC there are thousands of compute cores crunching on petabytes of data. For oil & gas companies, it’s seismic and wellhead analysis; with bioinformatics it’s genomic/proteomic analysis; and with financial services, it’s economic modeling and backtesting of trading strategies. For today’s enterprises such as retailers, it’s customer activity analytics; for manufacturers, it’s machine sensor/log analysis; and for banks/financial institutions, it’s credit/financial viability assessments. Enterprise IT might not have thousands of cores at their disposal just yet, but it’s not far off. Molly thinks one way to help enterprise IT is to provide a SuperComputer as a Service (ScaaS?) offering, where top-10 supercomputers can be rented out by the hour, sort of like a supercomputing compute/data cloud.”
Today, SGI and Hewlett Packard Enterprise announced an agreement under which HPE will OEM the SGI UV technology as the foundation for an 8-socket system – the HPE Integrity MC990 X Server. Extending HPE’s portfolio for mission-critical environments, which includes its flagship Superdome X, the new system leverages the scale-up architecture of the SGI UV technology and provides HPE customers with an advanced follow-on to the 8-socket HPE ProLiant DL980 G7 Server. Through this partnership with SGI, HPE will address time-to-market demands while meeting the performance, scalability, and availability requirements of enterprise customers.
IDC has published the agenda for their next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. “Don’t miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you’ll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics.”
“If you think of a data mart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.” These “data lake” systems will hold massive amounts of data and be accessible through file and web interfaces. Data protection for data lakes will consist of replicas and will not require backup, since the data is not updated. Erasure coding will be used to protect large data sets and enable fast recovery. Open source software will be used to reduce licensing costs, and compute systems will be optimized for MapReduce analytics. Automated tiering will be employed to meet performance and long-term retention requirements. Cold storage, storage that does not require power for long-term retention, will be introduced in the form of tape or optical media.
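To put the replica-versus-erasure-coding trade-off in concrete terms, here is a minimal sketch in Python; the 3-way replication and 10+6 code parameters are illustrative assumptions, not figures from the article:

```python
def replication_overhead(copies):
    """Raw capacity consumed per byte of user data with n-way replication."""
    return float(copies)

def erasure_overhead(k, m):
    """Raw capacity per byte with a k+m erasure code: data is split into
    k fragments plus m parity fragments, and any k of the k+m surviving
    fragments suffice to rebuild the original data."""
    return (k + m) / k

# 3-way replication: 3.0x raw capacity, tolerates the loss of 2 copies.
print(replication_overhead(3))   # 3.0

# A 10+6 erasure code: 1.6x raw capacity, tolerates 6 lost fragments.
print(erasure_overhead(10, 6))   # 1.6
```

The arithmetic shows why erasure coding suits large, rarely updated data sets: a 10+6 code survives more simultaneous failures than triple replication while consuming roughly half the raw capacity.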
Registration is now open for the inaugural Nimbix Developer Summit, which takes place March 15 in Dallas, Texas, with an impressive lineup of speakers and sponsors from Mellanox, migenius, Xilinx, and more. “The summit agenda will feature topics such as hardware acceleration, coprocessing, photorealistic rendering, bioinformatics, and high performance analytics. The sessions will conclude with a panel of developers discussing how to overcome the challenges of creating and optimizing cloud-based applications.”
Today DDN announced that its WOS 360 v2.0 object storage software was named a Visionary Product in the Professional Class Storage category at the fifteenth annual Storage Visions Conference. WOS enables organizations to build highly reliable, massively scalable, and cost-efficient storage repositories that meet even the most demanding unstructured data requirements. With storage technology built to outpace the performance and growth demands of enterprise Big Data, DDN continues to lead the market with products that address the end-to-end data lifecycle, from cache and SSD to high-performance file storage, cloud, and archive.
Today European datacenter specialist DATA4 Group and Qarnot Computing announced a new type of distributed computing system that offers “greener and more efficient computing services.” The system is based on Qarnot’s Q.rad, a smart and connected digital heater. “Think of the device as the fusion of an electrical heater and a server. In the Q.rad model of computing, servers are placed in rooms that need heat. They are then networked together to form a physically distributed cloud computing resource.”
“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technologies and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, and networking – all of which must fit into the available space. However, in most cases it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available technology, effectively locking an HPC center into a single technology, or can limit the adoption of new architectures because of the added complexity of code modernization – porting existing codes to new technology platforms.”
Today Centerprise International (Ci) in the UK announced a collaboration with E4 Computer Engineering to develop next-generation datacenter technologies for HPC. “This is an exciting development for both companies, as it combines the specialist knowledge of E4 in the field of high performance computing with our considerable experience in building quality, customized hardware solutions and our expansive reach in the UK IT channel,” said Jeremy Nash, Centerprise Sales Director.
In this Intel Chip Chat podcast, Dan Ferber, Open Source Server Based Storage Technologist at Intel, and Ross Turk, Director of Product Marketing for Red Hat, describe how Ceph plays a critical role in delivering the full enterprise capability of OpenStack. Ross explains how Ceph lets you build storage from open source software and standard servers and disks, providing a great deal of flexibility and making it easy to scale storage out. By lowering hardware costs, lowering the vendor lock-in threshold, and enabling customers to fix and enhance their own code, open source and software-defined storage (SDS) solutions are enabling the next generation of storage.
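As a concrete illustration of that “standard servers and disks” model, here is a minimal sketch using Ceph’s python-rados client binding to store and fetch an object; the pool name and object key are hypothetical stand-ins, and the config path would vary by deployment:

```python
import rados

# Connect using the standard client configuration; adjust the
# conffile path and credentials for a real deployment.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# 'data' is a hypothetical pool name for this example.
ioctx = cluster.open_ioctx('data')
try:
    # Objects land on ordinary disks across the cluster, placed by
    # the CRUSH algorithm rather than a central metadata lookup.
    ioctx.write_full('demo-object', b'stored on commodity hardware via Ceph')
    print(ioctx.read('demo-object'))
finally:
    ioctx.close()
    cluster.shutdown()
```

Because clients compute object placement themselves, adding nodes simply grows the pool, which is the scale-out property Ross highlights.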