Continuing with our coverage of today’s spate of SC15 announcements, we have NVIDIA. Having already launched their Tesla M40 and M4 server cards last week to get ahead of SC15 news and leaks, the company is at the show this week showing off their latest Tesla products. NVIDIA needs no real introduction at this point; these days their presence at SC15 is more about convincing specific customers and developers of the practicality of using GPUs and other massively parallel accelerators for their particular needs, as the use of such accelerators in the Top500 supercomputers continues to grow.

Along with touting the number of major HPC applications that are now GPU accelerated and the performance impact of that process, NVIDIA’s other major focus at SC15 is to announce their next US government contract win. This time the National Oceanic and Atmospheric Administration (NOAA) is tapping NVIDIA to build a next-gen research cluster. The system, which doesn’t currently have a name, is on a smaller scale than the likes of Summit & Sierra, and will comprise 760 GPUs. The cluster will be operational next year, and given the timing and the wording as a “next-generation” cluster, it’s reasonable to assume that this will be Pascal-powered.

The purpose of the NOAA cluster will be to develop a higher resolution and ultimately more accurate global forecast model. To throw some weather geekery on top of some technology geekery, in recent years the accuracy of NOAA’s principal global forecast model, the GFS, has fallen behind that of competing models such as the European ECMWF. The most famous case of this difference in accuracy was in 2012, when the GFS initially failed to predict that Hurricane Sandy would hit the US, something the ECMWF correctly predicted. As a result there has been a renewed drive toward improving the US models and catching up with the ECMWF, and that improvement is what NOAA’s research cluster will be used to develop.

Weather forecasting has in turn been a focus of GPU HPC work for a couple of years now – NVIDIA already has Tesla wins for supercomputers that are being used for weather research – but this is the first NOAA contract for the company. Somewhat fittingly, it comes as NOAA’s Geophysical Fluid Dynamics Laboratory already runs its simulations out of Oak Ridge, home of course to Titan.

Comments

  • wingless - Monday, November 16, 2015 - link

    I have an all-Nvidia rig with a G-Sync display, 3D Vision 2, and a 980...but I would go with AMD for compute before Nvidia. Maybe Nvidia has made accessing their sub-par compute capabilities easier with CUDA, I suppose.
  • testbug00 - Monday, November 16, 2015 - link

    Given they're using Pascal cards, they likely have FP64 capabilities added back in.

    And there are plenty of cases where FP32 is enough (a quick sketch of where it isn't follows the comments), although I don't know if this is one of them.
  • mas6700 - Monday, November 16, 2015 - link

    Got any references to support your snide remark about Nvidia's "sub-par" compute performance? Let's see some numbers showing how well your AMD card does programmed with OpenCL versus a Quadro M6000 running the same code using CUDA. Yeah, thought so....
  • testbug00 - Monday, November 16, 2015 - link

    You do realize Wingless is talking about FP64, right?

    AMD needs less than 10% utilization for their top-end pro card to beat the M6000 at 100%.
  • HighTech4US - Monday, November 16, 2015 - link

    And AMD lags greatly in software infrastructure, thus the lack of AMD wins.

    Running at the 100% speed you mention does no good if your wheels are off the ground.

    People like yourself pooh-poohed it when Nvidia stated they were a software company.

    Software sells hardware, as everyone can see from the current earnings reports of Nvidia (great) and AMD (sucks).
  • testbug00 - Monday, November 16, 2015 - link

    For the comparison of compute (the OP implied FP64), the M6000 is garbage: the AMD card can be utilized under 10% WHILE THE NVIDIA CARD IS AT 100% and still win. Now, Nvidia offers a far more reasonable FP64 card than the M6000, at which point having "only" about 70% of AMD's raw power is enough thanks to their better software.

    And Nvidia is in great part a software company. I just wish they could write good software outside of professional realms. Most of the sh*t they've written for gaming has been intentionally coded poorly or in an inefficient manner. Or they don't know how to code.
  • Yojimbo - Monday, November 16, 2015 - link

    That "raw power" is as terrible a measurement for compute as "fill rate" is for graphics performance. It's more than just "better software", I think.
  • testbug00 - Tuesday, November 17, 2015 - link

    Maybe, but having better software, etc. allows one to utilize the hardware better. My point was that if you're looking to do heavy FP64 compute, talking about the M6000 is pointless, as you would have to try to screw up the AMD option to get worse performance.

    Raw power is a bad basis for many comparisons, but if you have over an order of magnitude more of it, it probably is quite meaningful for performance.
  • Yojimbo - Monday, November 16, 2015 - link

    Yes, but no one would buy an M6000 for FP64. The M6000 is a workstation graphics card anyway, not a compute card. Why does it need FP64?
  • testbug00 - Tuesday, November 17, 2015 - link

    I fully agree. My response was to mas6700, who suggested such a card.
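
As a footnote to the FP32 vs FP64 back-and-forth above, here is a minimal, hypothetical CUDA sketch (not drawn from the article or from any NOAA code; kernel and variable names are invented for illustration) of why some workloads outgrow single precision: both kernels sum 30 million values of 1.0 one add at a time. The FP32 accumulator stalls at 2^24 = 16,777,216, because past that point adding 1.0 no longer changes the stored value, while the FP64 accumulator reaches the exact total.

// Hypothetical example: FP32 vs FP64 accumulation on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// Sum n copies of 1.0 in single precision, one add at a time.
__global__ void sum_fp32(int n, float* out) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) acc += 1.0f;   // adds are lost once acc reaches 2^24
    *out = acc;
}

// Same loop in double precision; exact for any integer total below 2^53.
__global__ void sum_fp64(int n, double* out) {
    double acc = 0.0;
    for (int i = 0; i < n; ++i) acc += 1.0;
    *out = acc;
}

int main() {
    const int n = 30000000;                    // exact answer: 30,000,000
    float* d_f;
    double* d_d;
    cudaMalloc(&d_f, sizeof(float));
    cudaMalloc(&d_d, sizeof(double));

    // Single-thread launches so the rounding behavior is easy to follow.
    sum_fp32<<<1, 1>>>(n, d_f);
    sum_fp64<<<1, 1>>>(n, d_d);
    cudaDeviceSynchronize();

    float h_f;
    double h_d;
    cudaMemcpy(&h_f, d_f, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&h_d, d_d, sizeof(double), cudaMemcpyDeviceToHost);

    printf("exact: %d\nFP32 : %.1f\nFP64 : %.1f\n", n, h_f, h_d);

    cudaFree(d_f);
    cudaFree(d_d);
    return 0;
}

Compiled with nvcc, this runs on any CUDA-capable card; even a GeForce part with heavily throttled FP64 throughput produces the exact double-precision result, which is really the commenters' point: on consumer and Maxwell-era workstation silicon the question is FP64 rate, not correctness.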
