Exploitation of parallel processing technologies for scientific and engineering research will accelerate application research across the engineering disciplines that employ HPC techniques, and will facilitate research on HPC technologies themselves, contributing to the country's capacity building.
This presentation is an overview of the most important current trends in HPC, based on the latest end-user research studies and market forecasts. Topics include accelerator adoption, the role of HPC in Big Data, and the ratio of spending between hardware, software, staffing, and facilities.
This keynote address analyzes the most prominent challenges in designing cost-effective interconnection networks for Exascale systems, such as topology scalability, power consumption, fault tolerance, and congestion control. In addition, some solutions are proposed and their implementation complexity in commercial products is estimated.
The HPC Advisory Council will hold its 2013 European Conference on June 16th, 2013, in conjunction with the ISC'13 conference in Leipzig, Germany. The workshop will focus on HPC productivity and futures, and will bring together system managers, researchers, developers, computational scientists, and industry affiliates to discuss recent developments and future advancements in supercomputing.
"DDN has developed a Hadoop solution that is all about time to value: it simplifies rollout so that enterprises can get up and running more quickly, provides typical DDN performance to accelerate data processing, and reduces the amount of time needed to maintain a Hadoop solution," said Dave Vellante, Chief Research Officer, Wikibon.org. "For enterprises with a deluge of data but a limited IT budget, the DDN hScaler appliance should be on the short list of potential solutions."
Depending on the application running on the user's system, it may be necessary to modify the default configuration of the network adapters and the system/chipset configuration. This slide deck describes common tuning parameters, settings, and procedures that can improve the performance of the network adapter. Different server and NIC vendors may recommend different values, but the general tuning approach should be similar. The hands-on demo uses Mellanox ConnectX adapters, so we will apply the settings recommended by Mellanox.
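As a rough illustration of the kind of tuning the deck covers, the commands below show common Linux knobs for socket buffers, ring buffers, interrupt coalescing, and offloads. This is a minimal sketch, not Mellanox's recommended settings: `eth0` is a placeholder interface name, and every value shown is a generic starting point that must be replaced by the figures in your NIC vendor's tuning guide and validated for your workload. All commands require root.

```shell
# Enlarge the maximum socket buffer sizes (illustrative values).
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304

# Grow the adapter's RX/TX ring buffers to reduce packet drops under load.
ethtool -G eth0 rx 4096 tx 4096

# Tighten interrupt coalescing for latency-sensitive traffic.
ethtool -C eth0 rx-usecs 8

# Enable receive/segmentation offloads for throughput.
ethtool -K eth0 gro on tso on
```

Because these settings depend on the specific adapter and driver, the usual workflow is to change one parameter at a time and re-run a benchmark (e.g. a bandwidth or latency test) after each change.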
Part of the ClusterStor family, ClusterStor 6000 is designed to support installations with linear performance scalability in less space, scaling from 6 gigabytes per second up to installations providing 1 terabyte per second of file system throughput, along with linear data storage capacity from terabytes up to tens of petabytes.
Designing a large scale, high performance storage system presents significant challenges. This paper describes a step-by-step approach to designing a storage system and presents a design methodology based on an iterative approach that applies at both the component level and the overall system level. The paper includes a detailed case study in which a Lustre storage system is designed using the approach and methodology presented.
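The component-to-system iteration the paper describes can be illustrated with a toy sizing calculation: start from per-component figures, derive the component counts each system-level target implies, and take the stricter constraint. The function and all numbers below are hypothetical assumptions for illustration, not figures from the paper or from any Lustre deployment.

```python
import math

def size_storage_system(target_throughput_gbs, target_capacity_tb,
                        oss_throughput_gbs, ost_capacity_tb, osts_per_oss):
    """Size a Lustre-style system from assumed component-level figures.

    Assumes each object storage server (OSS) sustains oss_throughput_gbs
    and hosts osts_per_oss object storage targets (OSTs) of
    ost_capacity_tb each. All inputs are illustrative, not vendor specs.
    """
    # Servers needed to meet the throughput target alone.
    oss_for_throughput = math.ceil(target_throughput_gbs / oss_throughput_gbs)
    # Targets, and hence servers, needed to meet the capacity target alone.
    osts_for_capacity = math.ceil(target_capacity_tb / ost_capacity_tb)
    oss_for_capacity = math.ceil(osts_for_capacity / osts_per_oss)
    # The design must satisfy both constraints, so take the maximum.
    num_oss = max(oss_for_throughput, oss_for_capacity)
    return {
        "num_oss": num_oss,
        "num_ost": num_oss * osts_per_oss,
        "delivered_throughput_gbs": num_oss * oss_throughput_gbs,
        "raw_capacity_tb": num_oss * osts_per_oss * ost_capacity_tb,
    }

# Example: target 100 GB/s and 2 PB raw, assuming 5 GB/s per OSS,
# 40 TB per OST, and 4 OSTs per OSS.
design = size_storage_system(100, 2000, 5, 40, 4)
print(design)  # here throughput, not capacity, drives the OSS count
```

In the paper's iterative spirit, one would then revisit the component assumptions (for example, whether the network and metadata tiers can feed that many servers) and repeat the calculation until component- and system-level figures agree.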
In this video from the HPC Advisory Council Switzerland Conference, Norbert Eicker from Jülich Supercomputing Centre presents: The DEEP Project.
The DEEP project will develop a novel, Exascale-enabling supercomputing platform, along with the optimisation of a set of grand-challenge applications highly relevant to Europe's science, industry, and society. The DEEP System will realise a Cluster Booster Architecture that will serve as a proof of concept for a next-generation 100 PFlop/s production system.