Today the Texas Advanced Computing Center (TACC) announced that its new Chameleon testbed is in full production for researchers across the country. Designed to help investigate and develop the promising future of cloud-based science, the NSF-funded Chameleon is a configurable, large-scale environment for testing and demonstrating new concepts.
“Chameleon will be an important resource not only for the dynamically evolving field of cloud computing but also as an important instrument for experimental computer science,” said Chameleon principal investigator Kate Keahey, computer scientist at Argonne National Laboratory and CI Senior Fellow.
Other academic partners on the project include the International Center for Advanced Internet Research at Northwestern University, Network-Based Computing Laboratory at The Ohio State University, and the UTSA Cloud and BigData Laboratory at The University of Texas at San Antonio. The project also includes partnerships with Dell, Intel, Rackspace, and the Global Environment for Network Innovations (GENI), a virtual laboratory for networking and distributed systems research and education.
Located at the CI and TACC, the Chameleon hardware will ultimately consist of 650 cloud nodes with five petabytes of storage and a 100Gbps network between the sites. The environment allows users to test new virtualization technologies that enhance the reliability, security and performance of cloud computing.
“Chameleon allows us to reach new communities of researchers that our current systems don’t serve,” said Dan Stanzione, executive director at TACC and a co-investigator. “While other TACC production systems support science that makes use of large scale computing, we’ve never had a way for researchers to experiment on the computing systems themselves. Chameleon provides a platform for computer scientists and other researchers to explore techniques and tools to make cloud computing systems and future computing platforms more effective.”
Like its namesake, Chameleon is adaptable, designed to support a variety of cloud research methods. To support users building cloud services and platforms, Chameleon includes persistent infrastructure clouds. To support researchers investigating low-level software for clouds, Chameleon provides “bare metal” provisioning of hardware, allowing users to specify and modify the full software stack they experiment on. For researchers who want dedicated, but not fully custom, environments, Chameleon provides pre-configured software stacks that are provisioned on bare metal.
With Chameleon able to support a wide variety of computer architectures, researchers can mix and match hardware, software and networking components and test their performance. This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems, which integrates computation into physical infrastructure, and to enable exploration of cloud computing across areas ranging from machine learning and adaptive operating systems to climate simulations and flood prediction.
Another aspect that makes Chameleon unique is the ability of researchers to test the tradeoffs for cloud applications between different kinds of networks, such as InfiniBand, Ethernet, and the upcoming Omni-Path. In the future, the system will support low-power processors, graphics processing units (GPUs) and field programmable gate arrays (FPGAs), as well as a variety of network interconnects and storage devices. The research team plans to add new capabilities in response to community demand or when innovative new products are released.
Chameleon partnerships will also include production clouds in both science and industry to help researchers understand and express problems relevant to these fields. The project will partner with existing research clouds operated by CERN and the Open Science Data Cloud, which can provide a level of openness that most private cloud centers cannot.
“We are excited about Chameleon’s ability to provide resources for a very wide range of computer science experiments,” Keahey said. “Everything from Big Data to Big Compute, exploring both homogeneous and heterogeneous hardware capabilities, and accommodating a wide range of user skills from research to education.”