In what has to be one of the most beautiful simulations I’ve ever seen, this video from the European Space Agency shows the simulated interaction of the solar wind with 67P/Churyumov-Gerasimenko, the famous comet targeted by the Rosetta mission. “The simulated conditions represent those expected at 1.3 AU from the Sun, close to perihelion, where the comet is strongly active.”
Early Bird registration rates are now available for the ISC Cloud & Big Data Conference, which takes place Sept. 28-30 in Frankfurt, Germany. This year the event will kick off with a full day of workshops. The new program will highlight performance-demanding cloud and big data applications and technologies and will consist of three tracks: Business, Technology, and Research.
Over at NERSC, Linda Vu writes that the SciDB open source database system is a powerful tool for helping scientists wrangle Big Data. “SciDB is an open source database system designed to store and analyze extremely large array-structured data—like pictures from light sources and telescopes, time-series data collected from sensors, spectral data produced by spectrometers and spectrographs, and graph-like structures that illustrate relationships between entities.”
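To make the array-structured data model concrete, here is a minimal NumPy sketch of the kind of windowed aggregate over sensor time-series data that an array database like SciDB evaluates natively at far larger scale. This is illustrative Python, not SciDB’s own AFL/AQL query interface, and the sensor readings are made up.

```python
import numpy as np

# Hypothetical sensor data: 1,000 time steps from 8 sensors, the kind
# of array-structured data SciDB is designed to manage at much larger scale.
rng = np.random.default_rng(seed=0)
readings = rng.normal(loc=20.0, scale=2.0, size=(1000, 8))

# A windowed aggregate along the time dimension: an operation an array
# database evaluates natively, without flattening the array into rows.
window = 100
window_means = readings.reshape(-1, window, readings.shape[1]).mean(axis=1)

print(window_means.shape)  # (10, 8): one mean per 100-step window per sensor
```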
“Sea level rise is one of the most visible signatures of our changing climate, and rising seas have profound impacts on our nation, our economy and all of humanity,” said Michael Freilich, director of NASA’s Earth Science Division. “By combining space-borne direct measurements of sea level with a host of other measurements from satellites and sensors in the oceans themselves, NASA scientists are not only tracking changes in ocean heights but are also determining the reasons for those changes.”
Today Intel Corporation and BlueData announced a broad strategic technology and business collaboration, as well as an additional equity investment in BlueData from Intel Capital. BlueData is a Silicon Valley startup that makes it easier for companies to deploy Big Data infrastructure such as Apache Hadoop and Spark in their own data centers or in the cloud.
Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. “For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that goes well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the ‘main’ application.”
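As a rough illustration of the synchronous (in situ) coupling Wenes describes, here is a toy Python sketch in which the analysis step runs inside the solver loop and sees the data while it is still in memory, rather than post-processing output files afterward. Both advance and in_situ_analysis are hypothetical stand-ins, not code from any actual Cray workflow.

```python
import numpy as np

def advance(state, dt=0.01):
    """Hypothetical stand-in for one time step of a simulation solver."""
    return state + dt * (np.roll(state, 1) - state)

def in_situ_analysis(state, step):
    """Analysis coupled to the solver loop: it inspects the data in
    memory instead of waiting to post-process files after the run."""
    if step % 250 == 0:
        print(f"step {step:4d}: mean={state.mean():.4f} max={state.max():.4f}")

state = np.linspace(0.0, 1.0, 1024)  # made-up initial condition
for step in range(1000):
    state = advance(state)
    in_situ_analysis(state, step)  # synchronous, on an equal footing with the solver
```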
“Supercomputing should be available for everyone who wants it. With that mission in mind, a team of engineers created Parallella, an 18-core supercomputer that’s a little bigger than a credit card. Parallella is open source hardware; the circuit diagrams are on GitHub and the machine runs Linux. Icing on the cake: Parallella is the most energy efficient computer on the planet, and you can buy one for a hundred bucks. Why does parallel computing matter? How can developers use parallel computing to deliver better results for clients? Let’s explore these questions together.”
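As one answer to those questions, here is a small data-parallel sketch using Python’s standard-library multiprocessing module to fan CPU-bound work out across the available cores. It is a generic illustration of the parallel mindset, not code for the Parallella’s Epiphany coprocessor, which is programmed in C via the Epiphany SDK.

```python
from multiprocessing import Pool

def is_prime(n):
    """CPU-bound work with no shared state, so it parallelizes cleanly."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

if __name__ == "__main__":
    candidates = range(2, 200_000)
    # Fan the work out across every available CPU core; each worker
    # tests a chunk of candidates independently.
    with Pool() as pool:
        count = sum(pool.map(is_prime, candidates, chunksize=1_000))
    print(f"{count} primes below 200,000")
```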
“Within the next 12 months, China expects to be operating not one but two 100 Petaflop computers, each containing (different) Chinese-made processors, and both coming on stream about a year before the United States’ 100 Petaflop machines being developed under the CORAL initiative. Ironically, the CPU for one machine appears very similar to a technology abandoned by the USA in 2007, and the US Government, through its export embargo, has encouraged China to develop its own accelerator for the other machine.”