Supporting NOAA Now and Into the Future

GDIT has a long history of supporting the National Oceanic and Atmospheric Administration (NOAA), the nation’s premier science agency for our weather, climate, oceans and coasts.

NOAA generates an immense amount of data every day for use in weather models that produce the forecasts people, businesses, and the nation rely on to protect lives and property and to enhance our economy. The GDIT team has spent decades working at the intersection of weather data and high-performance computing to transform data into actionable information.
We sat down with a high-performance computing (HPC) expert and meteorologist from the company’s HPC Center of Excellence, GDIT Systems Engineer Marc Cotnoir, to learn more about NOAA’s mission, the essential need to continually improve weather forecasts and better understand weather data, and the role of HPC in meeting that critical objective. Here’s what he had to say…

Thanks for talking with us, Marc. Can you tell Perspectives readers about your background and how you came to be one of our in-house HPC experts?

Like many in high-performance computing, I came to it from applications – in my case, mostly weather and other scientific computing applications. I first wrote code for a supercomputer as a physics undergrad at the Air Force Academy in the late 1960s (and yes, supercomputers were emerging that far back!).

After graduate school, where I worked on air pollution modeling, I started my career as an Air Force Weather Officer developing weather models. Then I moved into planning and acquisition of supercomputing and other technologies to support Air Force Weather. I have been involved in deploying supercomputers and the systems that support them ever since, for the Air Force, Navy, NASA, the Department of Energy and NOAA.

You certainly have been immersed in this domain for a long time. Can you tell us about how weather forecasts are used? It’s not just the forecasts on our local news that might first come to mind.

There really is no segment of our economy that is not directly affected by weather. Just as you can decide whether to go to the beach based on a weather forecast, businesses ranging from agriculture to manufacturing to energy to shipping, as well as the military, face weather-dependent decisions, and they can make better decisions if they have better forecasts. And by the way, it goes well beyond what we usually think of as weather: this includes space weather, ocean and freshwater modeling, and climate, too.

What have been some of the biggest technology advancements that have enabled us to expand our use of supercomputing and to use weather data more proactively and effectively?

The science – and art – of weather forecasting has advanced along several fronts, one of which is definitely HPC. If you plot the improvements in forecast accuracy and the increase in the speed and capabilities of supercomputers used for what we call numerical weather prediction, you will see that they track. But to use the faster supercomputers, you also have to advance the science and you have to deploy better and better systems to collect data. Two major fields where the data collection has advanced are remote sensing from satellites and weather radar.
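
To make that concrete, here is a toy Python sketch of the kind of grid-point arithmetic at the heart of numerical weather prediction. It is an illustration only, not code from any NOAA model: a one-dimensional advection step that a real model would repeat, in three dimensions and with far richer physics, billions of times per forecast.

```python
import numpy as np

# Toy 1-D advection step: the kind of grid-point arithmetic that
# numerical weather prediction repeats, in three dimensions and with
# far more physics, billions of times per forecast. Illustration only.

nx, dx, dt, c = 200, 1.0, 0.4, 1.0           # grid size, spacing, time step, wind speed
x = np.arange(nx)
u = np.exp(-0.01 * (x - nx / 2) ** 2)        # initial temperature-like "blob"

def step(u):
    """Advance one time step with a first-order upwind scheme."""
    return u - c * dt / dx * (u - np.roll(u, 1))

for _ in range(100):                         # more compute lets you shrink dx and dt,
    u = step(u)                              # which is how faster machines buy accuracy

print(f"peak value after 100 steps: {u.max():.3f}")
```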

GDIT team members are working on projects at NOAA to provide supercomputers that support research efforts to develop better weather models and, at the other end of the spectrum, to deploy the high-performance computing needed to run those models operationally. Can you talk to us about that?

Absolutely. For 10 years, we have provided supercomputing support for NOAA research and development. These supercomputers are primarily used by NOAA laboratories to develop the next-generation weather models; the labs do the science and write the model code. We support supercomputers and other equipment at a data center in Fairmont, West Virginia, and at NOAA laboratories in Boulder, Colorado, and Princeton, New Jersey. In our support to NOAA’s weather research initiatives, we continually upgrade these systems to provide ever-increasing compute power, and we provide help-desk support to the scientists who use them.

Now, we are embarking on a new and exciting program called the Weather and Climate Operational Supercomputing System II, or WCOSS II. Where our long-standing NOAA supercomputing contract serves research and development, WCOSS II serves the National Weather Service’s operational supercomputing. Under this ten-year program, we will deploy supercomputers three times more powerful than the current operational systems to run the next generation of weather models.

And as I mentioned earlier, faster computers give you better forecasts. We are establishing two new data centers, one in Virginia and one in Arizona, to provide the redundancy needed to meet the National Weather Service’s requirement for very high reliability: many thousands of forecast products have to be generated by WCOSS II every day, and they have to be delivered on time, every time. It’s a role that GDIT’s HPC Center of Excellence is ideally suited to perform, and one that aligns with General Dynamics’ corporate focus on mission-critical work for the nation.

Where do you think the next HPC advancements will come from as these relate to weather modeling?

That is a really important question, and one that GDIT, as the industry’s leading HPC Systems Integrator, is constantly assessing so we can develop forward-looking solutions for our customers across civilian and defense agencies and the intelligence community. The advancements come from many directions: obviously from the HPC vendors, but also from sources you might find surprising, like the companies that make Graphics Processing Units (GPUs) for video games and other applications that need image processing. GPUs turn out to be powerful engines for parallel floating-point computation, which is a critical capability for HPC. They are also fundamental to the emerging world of Artificial Intelligence and Machine Learning (AI/ML), which is turning up everywhere, including in weather models. So, we need to integrate AI/ML into HPC systems.
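
As a small illustration of why GPUs fit this workload, here is a hedged NumPy sketch of the data-parallel pattern they accelerate: one floating-point formula applied independently to millions of grid cells. The formula is shaped like a potential-temperature calculation and the inputs are random stand-ins; on a GPU, a framework such as CUDA or JAX would map the same elementwise operation across thousands of cores.

```python
import numpy as np

# The data-parallel pattern GPUs excel at: one floating-point formula
# applied independently across millions of grid cells, with no
# per-cell dependency. NumPy expresses it on a CPU; CUDA or JAX would
# map the identical operation across thousands of GPU cores.

rng = np.random.default_rng(0)
temp_k = (250.0 + 50.0 * rng.random(4_000_000)).astype(np.float32)    # temperatures (K)
pres_hpa = (500.0 + 500.0 * rng.random(4_000_000)).astype(np.float32)  # pressures (hPa)

# Potential-temperature-style formula: theta = T * (p0 / p) ** kappa.
theta = temp_k * (1000.0 / pres_hpa) ** 0.286
print(theta[:3])
```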

Beyond the vendors, there is a national program called the Exascale Computing Project (ECP) that is looking to spur the development of the next generation of HPC. The Department of Energy (DOE) leads the ECP, with the HPC vendors deeply involved and the National Science Foundation (NSF) pursuing complementary programs, so we closely follow the ECP, as well as similar programs in Europe and Asia. This is a very international community we work in.

Another important source of advancements in deploying HPC for weather modeling research and even operations is the cloud. NOAA is moving aggressively to harness the cloud in support of R&D and for data dissemination. Both the private sector and some weather agencies around the world are already moving operations to the cloud. In the future we will see widely distributed architectures that blend more traditional on-premises HPC with cloud-based HPC.
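
As a rough sketch of what such a blended architecture can look like at the job level, consider routing work between an on-premises scheduler and a cloud burst path. The `sbatch` command below is the standard Slurm submission command found on many on-prem clusters; the cloud branch, the threshold, and the script name are deliberate placeholders, since real cloud HPC submission is provider-specific.

```python
import subprocess

# Hedged sketch of hybrid on-prem/cloud job routing. "sbatch" is the
# standard Slurm submission command on many on-prem HPC clusters; the
# cloud branch is a placeholder, since real cloud HPC submission is
# provider-specific and not shown here.

CLOUD_BURST_THRESHOLD = 50   # hypothetical queue-depth cutoff

def submit(job_script: str, onprem_queue_depth: int) -> None:
    """Send a job on-prem when the cluster has headroom, else burst to cloud."""
    if onprem_queue_depth < CLOUD_BURST_THRESHOLD:
        subprocess.run(["sbatch", job_script], check=True)   # on-prem via Slurm
    else:
        print(f"bursting {job_script} to a cloud batch service (placeholder)")

submit("forecast_job.sh", onprem_queue_depth=80)   # hypothetical script name
```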

A last point I would make here is that the advancements that will support next-generation weather models don’t come only from the world of HPC per se. Weather as a computer application uses and generates massive datasets, so everything you hear about big data is applicable here. As you know, this is an area where GDIT is an industry leader.
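
For a feel of what big data means in practice here, below is a minimal Python sketch of reducing a model-output file too large to fit in memory: memory-map it and stream it chunk by chunk. The file name, array shape, and dtype are illustrative assumptions, not any real NOAA product layout.

```python
import numpy as np

# Minimal sketch of streaming a gridded model-output file that is too
# large for memory: memory-map it, then reduce it chunk by chunk.
# File name, shape, and dtype are hypothetical (the file is assumed
# to exist); this is not a real NOAA data format.

shape = (10_000, 721, 1440)                   # (time, lat, lon) -- assumed layout
data = np.memmap("model_output.f32", dtype=np.float32, mode="r", shape=shape)

running_max = np.full(shape[1:], -np.inf, dtype=np.float32)
for t0 in range(0, shape[0], 100):            # process 100 time steps at a time
    chunk = np.asarray(data[t0:t0 + 100])     # only this slice is pulled into RAM
    running_max = np.maximum(running_max, chunk.max(axis=0))
```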

And finally, like all big computing applications in government, academia, and the private sector, we have to care about cyber security – another focus area for GDIT. So, we are back to our role as an HPC Systems Integrator. We have to understand all these technologies and the trends taking them to the future, and integrate them into solutions, which we do.

Are other agencies using HPC in the way that NOAA is? There has been a lot in the news about HPC assisting in the response to the COVID-19 pandemic. Any insight in that area?

Absolutely. Many agencies use supercomputing, and GDIT is involved with several of them, including the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), and the Environmental Protection Agency (EPA).

  • For the NIH, we have vastly expanded the capability of their Biowulf supercomputer with multiple, complex upgrades over the past several years. Biowulf is the largest supercomputer in the U.S. dedicated to medical research and bioinformatics.
  • For the CDC, we provide comprehensive support for an HPC environment used for advanced bioinformatics and for developing scientific applications. We also support CDC’s large storage systems that capture and process health laboratory instrument data.
  • For the EPA, we operate and enhance their High-End Supercomputing System and also provide advanced scientific visualization, which is a critical capability for interpreting the massive datasets used in supercomputing applications (see the sketch just after this list). And by the way, we also have multiple EPA contracts where we use weather models to support the Agency’s air quality simulations, so we are users of supercomputers as well as integrators.
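
To show what that visualization step looks like in miniature, here is a small, self-contained Python sketch using Matplotlib and a synthetic pressure field. It is illustrative only; production workflows render the same kinds of maps from terabytes of real model output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy scientific-visualization step: render a gridded field as a
# contour map a scientist can interpret. The field is synthetic;
# production workflows do this against terabytes of model output.

lon = np.linspace(-120.0, -70.0, 200)
lat = np.linspace(25.0, 50.0, 120)
LON, LAT = np.meshgrid(lon, lat)
pressure = 1012.0 + 8.0 * np.sin(LON / 10.0) * np.cos(LAT / 8.0)  # fake sea-level pressure

plt.contourf(LON, LAT, pressure, levels=20, cmap="viridis")
plt.colorbar(label="Sea-level pressure (hPa, synthetic)")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Toy contour map of a gridded model field")
plt.show()
```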

Speaking specifically of COVID-19, which is on everyone’s mind right now, HPC is playing an important role in the effort to understand the behavior of the virus and to develop treatments and a vaccine. And this isn’t happening only on systems like those we run for NIH and the CDC. There is a massive effort to bring the nation’s supercomputing power and expertise to bear, and supercomputers from NSF, NASA, DOE, and the Department of Defense (DoD) are being used.

Clearly these are large, complex systems. What have we learned about how to manage these projects?

First and foremost, this is a team sport. To succeed as an HPC Systems Integrator, you need to be able to put together teams covering a wide range of technologies, not just the ones we mentioned but also disciplines like deploying large-scale data centers and delivering power and cooling to these systems. We have learned to do this both on our individual contracts and across contracts.

And we have a lot of experience working as teams with members distributed across the country. In fact, this is almost becoming the norm. And of course, in addition to knowing how to effectively use teams, we have a group of individuals with decades of experience in every aspect of HPC. They have worked with some of the largest, most powerful supercomputers in the world. There is no substitute for experience!

The GDIT HPC Center of Excellence is our mechanism to share knowledge across our contracts and to have quick access to the expertise that is distributed across those contracts. We constantly reach out across programs for support and to get answers to questions. And we disseminate the information gained and the lessons learned on each contract across our whole HPC portfolio.

Our entire HPC client base benefits from the experience with new technologies, solutions to new problems, and best practices in HPC that are shared continually among the members of the HPC Center of Excellence.

Marc, thanks so much for talking with us today. If readers want to know more about how NOAA is harnessing the power of HPC, where can they go?

I hope readers do want to learn more on this topic; it’s one I find fascinating, of course. In February, NOAA issued a press release on its plans for WCOSS II, the Weather and Climate Operational Supercomputing System II program I described earlier, which will serve the National Weather Service’s operational supercomputing needs. We’re really looking forward to continuing our partnership with NOAA.