Tag: supercomputer

NSF spends a fortune on cloud-based supercomputers


The National Science Foundation (NSF) has invested $16 million to build cloud-based and data-intensive advanced computing systems for the open science community.

In a statement the NSF said that high performance computing (HPC) had become central to the work and progress of researchers in all fields, from genomics and ecology to medicine and education, and that new kinds of computing resources and more inclusive modes of interaction are now required.

It has splashed out on two new supercomputing acquisitions for the open science community that it says will complement existing resources with capabilities that allow advanced computing to be available to a broader portfolio of emerging scientific frontiers and communities. The new resources are anticipated to come online in early 2016.

The “Bridges” system will be housed at the Pittsburgh Supercomputing Center (PSC) and the “Jetstream” computer will be co-located at the Indiana University Pervasive Technology Institute (PTI) and The University of Texas at Austin’s Texas Advanced Computing Center (TACC).

Irene Qualters, division director for Advanced Cyberinfrastructure at NSF, said that Bridges and Jetstream will expand the capabilities of the NSF-supported computational infrastructure, pushing the frontiers of science forward in the life sciences, the social sciences and other emerging computational fields by exploiting interactive and cloud systems.

“Bridges and Jetstream offer a mix of new capabilities and usage modalities, from large memory nodes to virtualization technologies that allow a PC-like experience via the cloud. Together, these technologies will let a broader swath of researchers use advancing computing while making new kinds of scientific inquiry possible.”

A $9.6-million NSF grant will fund the acquisition of the Bridges system, to begin in November 2014, with a target production date of January 2016.

Jetstream–led by Indiana University’s Pervasive Technology Institute (PTI)–will add cloud-based computation to the national cyberinfrastructure. Researchers will be able to create virtual machines on the remote resource that look and feel like their lab workstation or home machine, but are able to harness thousands of times the computing power.

Craig Stewart, PTI executive director and associate dean for research technologies at Indiana University, said that the new Jetstream cloud system will operate at the border between the existing NSF-funded cyberinfrastructure and the thousands of researchers and research students who will be new to the NSF XD programme’s resources. Jetstream will give researchers access to cloud computing and data analysis resources interactively, when they need them.

Jetstream is supported by a $6.6-million NSF grant and will go into production in January 2016.

US splashes out on two more supercomputers

The US is going to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops.

Deeply embarrassed by the fact that China has been ruling the supercomputer league tables for a while now, the US government is taking steps to unseat it from the top.

The US Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems, each with a base speed of 150 petaflops, possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals.

The DOE also announced another $100 million in “extreme” supercomputing research spending.

The funding was announced at a press conference at the US Capitol attended by lawmakers from both parties.

The two systems, which will be built at the DOE’s Oak Ridge and Lawrence Livermore National Laboratories, “will ensure the United States retains global leadership in supercomputing”, according to the DOE.

Republican Chuck Fleischmann said that supercomputing was one of the areas where the US could step up and lead the world again. The Oak Ridge lab is located in his state.

Democrat Bill Foster warned that the US’s technology lead is not assured, pointing out that most chip making is now done overseas.

Foster believes there is good bipartisan support for supercomputing research, but that it could face a problem if GOP budget proposals in the House slash science funding by double-digit percentages.

The US government is under pressure to abandon science funding because some constituents think it is better that people learn more about Jesus.

China has the top-ranked system, the Tianhe-2, at about 34 petaflops, and Japan and Europe have major investments underway in this area.

The new system to be built at Oak Ridge will be called the Summit. It will use about 10 megawatts of power, which is close to the power usage of Oak Ridge’s existing supercomputer, the Titan, which is ranked No. 2 in the world. The Summit will run five times faster than the Titan, despite using the same amount of power.
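As a back-of-the-envelope check (our numbers, not figures from the announcement: we assume Titan’s roughly 27-petaflop peak and take the quoted 10 megawatts for both machines), the efficiency jump implied by “five times faster at the same power” looks like this:

```python
# Rough energy-efficiency comparison of Titan vs the planned Summit.
# Titan's ~27 petaflop peak is our assumption; the 5x speedup and
# ~10 MW power draw are the figures quoted in the article.
titan_pflops = 27.0                 # assumed peak performance of Titan
power_mw = 10.0                     # both machines draw roughly 10 megawatts
summit_pflops = titan_pflops * 5    # Summit is quoted as 5x Titan

titan_eff = titan_pflops / power_mw     # petaflops per megawatt
summit_eff = summit_pflops / power_mw

print(f"Titan:  {titan_eff:.1f} petaflops/MW")
print(f"Summit: {summit_eff:.1f} petaflops/MW")
```

Since the power budget is held constant, the fivefold speedup translates directly into a fivefold gain in petaflops per megawatt, which is the real engineering story here.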

The new system to be built  at the Lawrence Livermore lab in California will be known as Sierra.

These systems will use IBM Power CPUs and Nvidia’s Volta GPU, a chip still in development.

Ulrika? Those were the Crays

Cray has just built a machine with 1,500 cores, 6TB of DRAM, 38TB of SSD flash and 120TB of disk storage and named it after a Swedish weather girl from the 1990s.

Actually we are not sure if there is any link between Gladiators’ star Ulrika Jonsson and Cray’s latest supercomputer, but she has not been in the news lately so we thought we would help her out.

Rather than a B-list celebrity, the Urika-XA is what is known as a single-platform entity, which mixes a range of analytic workloads that previously needed separate systems.

Cray said that its design has been optimised for compute- and memory-intensive and latency-sensitive workloads.

The Urika-XA is a turnkey, scale-out analytics appliance designed for “extreme analytics” (hence the XA) and described as a “pre-integrated, open platform for high-performance big data analytics”.

A single Urika-XA rack features 48 Intel Xeon compute nodes with an 800GB SSD per node, 200TB of SSD and disk storage using a Sonexion 900 array, InfiniBand interconnect, the Lustre parallel file system, HDFS compatibility and POSIX compliance.
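The rack numbers hang together with the headline figures, as a quick bit of arithmetic (ours, based solely on the figures in the article) shows:

```python
# Sanity-check the quoted Urika-XA rack spec against the headline figures.
# All inputs below are taken from the article; the arithmetic is ours.
nodes = 48
ssd_gb_per_node = 800
total_cores = 1500

flash_tb = nodes * ssd_gb_per_node / 1000   # node-local flash in TB
cores_per_node = total_cores / nodes        # average cores per node

print(f"{flash_tb:.1f} TB of node-local flash, ~{cores_per_node:.0f} cores per node")
```

That works out to 38.4TB of node-local flash, matching the “38TB of SSD flash” quoted earlier, and roughly 31 cores per node, consistent with dual-socket 16-core Xeon boxes.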

It is based around a software stack comprising Cloudera Enterprise, Apache Spark, the Cray Adaptive Runtime for Hadoop and the Urika-XA management system.

The first buyer is the US Department of Energy’s Oak Ridge National Lab where ironically it will be looking at the impact of the weather.

Cray says it’s coming from supercomputing land with “battle-hardened” technology, which we would have thought should be “Gladiator hardened” and jolly useful when you are trying to have an affair with an English football coach without the tabloids finding out.