In this video from the 2013 National HPCC Conference, Rich Brueckner from inside-BigData moderates a panel discussion on How to Talk to Your CFO about HPC and Big Data.
John C. Morris – Pfizer
Dr. George Ball – Raytheon
Henry Tufo – University of Colorado, Boulder
Dr. Flavio Villanustre – LexisNexis
As members of the HPC community, we spend a good share of our time sharing our work and best practices with our colleagues. But how do we communicate the business value of high performance computing and Big Data analytics to CFOs who have little affinity for discussions of things like cores, Hadoop, and MPI? In this panel discussion, experts in Big Data and HPC come together to share best practices and communication strategies that have proven effective when talking to CFOs and other C-level executives.
How will enormous data sets and an endless stream of ever-more granular variables drive supercomputing in the coming years? Will it be like a dust storm that buries us, or flood waters we can redirect and manage? How will it alter the evolution of architecture and subsystems? How will it change computer science education, development tools and job descriptions? And will gargantuan data form a barrier to our evolution to Exascale and beyond by sapping the shrinking resources for funding and creativity?
Twenty years ago, General Motors reestablished manufacturing in China, and the complex world of global manufacturing has continued to accelerate ever since. What often gets lost in this race is the need for the supply chain to keep pace. The world’s supply chain comprises 2,000,000 small and medium-sized manufacturers (fewer than 500 employees), which are being tasked to keep up or shut down. At the same time, these critical suppliers are being asked for more innovation, delivered faster and at lower cost. Unfortunately, they do not have the technology required to keep pace. This presentation will describe the complexity of the challenge ahead for all manufacturers, the opportunity available to those willing to collaboratively develop a solution, and the current efforts at NCMS within its Digital Manufacturing Initiative.
HPCC Systems from LexisNexis Risk Solutions works with clients in various industries to manage different types of risk by helping them derive insight from massive data sets. To do this, we have developed our High Performance Computing Cluster (HPCC) technology, making it possible to process and analyze complex, massive data sets in a matter of seconds.
In this lively panel discussion, moderator Addison Snell asks visionary leaders from the supercomputing community to comment on forward-looking trends that will shape the industry this year and beyond.
In this video from the 2013 National HPCC Conference, Dr. Thomas Sterling from Indiana University presents: Towards the Exascale Target – the Arrow in Flight.
The preceding year has witnessed a strong impetus towards the ultimate US achievement of practical exascale computing through the initiation of research and development programs. There are two trajectories in flight toward this ambitious target in the US, both guided principally by the DOE through an important partnership between the NNSA and the Office of Science (ASCR). One, the incremental flight path through a series of successive petascale systems, builds on step-wise extensions of conventional practices for low risk and minimal disruption to ensure continued US capability growth in performing mission-critical applications throughout the remainder of this decade. The second, the advanced course of revolutionary research, has in its crosshairs a truly general-purpose and easily programmed class of exascale systems. This strategy is enabled by a set of principles that transform a once-static methodology into a dynamic adaptive paradigm to advance efficiency and scalability while exhibiting a far more programmable user interface. This presentation will review the important strides made over the last year and describe the significant accomplishments that have been achieved under the guidance of DOE leadership in establishing key programs that will maintain US competitiveness internationally and leadership at home.
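The distinction Sterling draws between a static methodology and a dynamic adaptive paradigm can be illustrated with a small, purely hypothetical Python scheduling sketch. The task sizes and worker counts below are invented, and this is not the DOE programs' or any particular runtime's code; it simply contrasts a fixed, up-front partition of irregular work with a dynamic pool in which idle workers pull the next task as they finish.

# Hypothetical sketch: static partitioning vs. dynamic (adaptive) scheduling
# of irregular tasks. Illustrative only; not any specific exascale runtime.
import time
from concurrent.futures import ThreadPoolExecutor

def task(weight):
    # Simulate one irregular unit of work.
    time.sleep(weight)
    return weight

# Deliberately imbalanced workload: the heavy tasks sit at the front.
weights = [0.02] * 20 + [0.001] * 180

def static_schedule(weights, workers=4):
    # Each worker is handed a fixed contiguous chunk, decided before running.
    chunk = len(weights) // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(lambda ws: [task(w) for w in ws],
                               weights[i * chunk:(i + 1) * chunk])
                   for i in range(workers)]
        return [f.result() for f in futures]

def dynamic_schedule(weights, workers=4):
    # Tasks are handed out one at a time, so imbalance is absorbed at runtime.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, weights))

for name, schedule in (("static", static_schedule), ("dynamic", dynamic_schedule)):
    start = time.perf_counter()
    schedule(weights)
    print(f"{name:8s} {time.perf_counter() - start:.3f} s")

Run as-is, the static schedule finishes roughly when its most heavily loaded worker does, while the dynamic schedule spreads the same work across all workers, which is the basic efficiency argument behind adaptive runtimes.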
The internet, sensors and high performance computing are some of the top Big Data producers. Recently, there has been increased focus on extracting more value from these generated data. Analysis of Big Data sets may be simplified as “looking for a needle in a haystack” on one end of a spectrum to “looking for relationships between hay in a stack” on the other. We will discuss the architectural platforms and tools suitable for different parts of this spectrum.
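A minimal sketch of the two ends of that spectrum, using invented sample records in Python (the field names and data are hypothetical): the “needle in a haystack” end is a selective filter over many records, while the “relationships between hay” end aggregates co-occurrences across the whole set.

# Hypothetical illustration of the two ends of the Big Data analysis spectrum.
from collections import Counter
from itertools import combinations

# Invented sample data: (user, event) records.
records = [
    ("alice", "sensor_fault"), ("bob", "login"), ("carol", "login"),
    ("alice", "login"), ("bob", "sensor_fault"), ("carol", "purchase"),
    ("alice", "purchase"), ("dave", "sensor_fault"),
]

# "Needle in a haystack": pick out the rare events of interest.
needles = [r for r in records if r[1] == "sensor_fault"]

# "Relationships between hay": count how often pairs of events
# co-occur for the same user across the entire data set.
by_user = {}
for user, event in records:
    by_user.setdefault(user, set()).add(event)

co_occurrence = Counter()
for events in by_user.values():
    for pair in combinations(sorted(events), 2):
        co_occurrence[pair] += 1

print("needles:", needles)
print("co-occurring event pairs:", co_occurrence.most_common(3))

The first pattern scales well on filtering and indexing platforms; the second requires shuffling or joining the whole data set, which is where cluster frameworks earn their keep.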
In this video from the 2013 National HPCC Conference, Don Lamb from the University of Chicago presents: Scientific Discovery Through HPC Simulations of High Energy Density Physics Experiments.
The availability of multi-petaflop computers and the advent of high-power laser systems have created new opportunities to explore the properties of matter at high temperatures, densities, and pressures. This research in high energy density physics is of great importance for astrophysics, material science, and inertial confinement fusion, among other fields. As an example, the generation of magnetic fields by asymmetric shocks is widely invoked as the source of cosmic magnetic fields and is of high interest in inertial confinement fusion. As another example, amplification of magnetic fields by turbulence is a “holy grail” of both computer simulations and laboratory experiments. In this talk, I describe large, multi-scale, multi-physics simulations validated by high-powered laser experiments that are providing new insights into these fundamental physical processes.
NASA’s observations from Earth-orbiting satellites and outputs from computational climate models have contributed to one of the most data-intensive scientific disciplines today. Earth system science seeks to analyze the data, turn it into information, distill that information into knowledge and wisdom, and apply that knowledge and wisdom in decision-making processes. At every step of the data life cycle workflow (i.e. curation, discovery, access, and analysis), NASA faces tremendous challenges.
I’ll be talking about how we leveraged a team of folks in your industry to make it happen, how we had to change everybody’s behavior and expectations to get it done, and how the results turned the bicycle wheel business upside down and gave our company a 2-year head start on an entirely new product feature set.
Big data science emerges as a new paradigm for scientific discovery that reflects the increasing value of observational, experimental and computer-generated data in virtually all domains, from physics to the humanities and social sciences. Addressing this new paradigm, the EUDAT project is a European data initiative that brings together a unique consortium of 25 partners — including research communities, national data and high performance computing (HPC) centers, technology providers, and funding agencies — from 13 countries. EUDAT aims to build a sustainable cross-disciplinary and cross-national data infrastructure that provides a set of shared services for accessing and preserving research data. The design and deployment of these services is being coordinated by multi-disciplinary task forces comprising representatives from research communities and data centers.
Our own version of March Madness begins this week with news coverage of three back-to-back high performance computing events. We’ll bring you on-site interviews, presentations, and more from the following conferences:
HPC Advisory Council Switzerland Workshop. First up this week, we’re headed to beautiful Lugano for the annual three-day workshop. They have a great agenda lined up with talks on HPC essentials, new and emerging technologies, best practices, and hands-on training. Of course, we’ll bring you videos of as many of the presentations as we can right here on insideHPC!
GPU Technology Conference. GTC is the place to learn about and share how advances in GPU technology help scientists, developers, graphic artists, designers, researchers, engineers, and IT managers tackle their day-to-day computational and graphics challenges. At insideHPC, we’ll be featuring exclusive live-stream sessions from the conference, so tune in right here starting Tuesday, March 19 at 9:00 am Pacific Time.
National HPCC Conference. One of the oldest conferences in HPC continues in Newport March 26-28 with its unique blend of education, training, networking, and partnership building. We’ll be taping key sessions on high performance computing topics with a focus on Big Data and Digital Manufacturing. Register now at: www.hpcc-usa.org
Our travel schedule is filling up for April as well, so check out what we have in store at our Featured Events page. Viva HPC!
In this video, Steve Lyness from Appro presents: Appro Supercomputer Solutions.
To survive in an ever-changing global environment, creating and delivering innovative products and services is what gives any business the competitive edge in today’s global markets. In this presentation, you will learn how Appro, a US-based High Performance Computing company, met the supercomputing requirements of the University of Tsukuba Center for Computational Sciences in Japan. Learn how reliability, availability, manageability and compatibility were essential for the successful 800TF hybrid supercomputing implementation. Learn best practices for improving data I/O performance and addressing memory size limitations in a configuration built on the Lustre™ File System, offering the best performance per dollar with excellent memory capacity per FLOP. Explore how the University of Tsukuba’s Appro Xtreme-X™ Supercomputer is accelerating large-scale parallel codes by combining CPU and GPU processing in its cluster configuration, and how this implementation will serve as a pioneer for competitive advantage in future exascale computing systems.
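The hybrid CPU/GPU approach described above can be sketched, at a very small scale, as code that offloads a kernel to a GPU when one is available and falls back to the CPU otherwise. This is a generic illustration using NumPy and the optional CuPy library, not Appro’s or the University of Tsukuba’s actual software stack.

# Generic CPU/GPU offload sketch; not the Xtreme-X software stack.
import numpy as np

try:
    import cupy as cp          # GPU path, if a CUDA device and CuPy are present
    xp = cp
except ImportError:
    xp = np                    # CPU fallback

def saxpy(a, x, y):
    # a*x + y on whichever device the 'xp' module targets.
    return a * x + y

n = 1_000_000
x = xp.arange(n, dtype=xp.float32)
y = xp.ones(n, dtype=xp.float32)
z = saxpy(2.0, x, y)

# Bring the result back to host memory if it was computed on the GPU.
result = cp.asnumpy(z) if xp is not np else z
print(result[:5])

The same array code runs on either device, which is the essential idea behind mixing CPU and GPU nodes in one cluster: keep the application logic uniform and let the configuration decide where the heavy arithmetic lands.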