Intel this week announced that its processors, compute accelerators, and Optane DC persistent memory modules will power Aurora, the first supercomputer in the US projected to deliver one exaFLOPS of performance. The system is expected to be delivered in about two years, and its design goes well beyond the initial Xeon Phi-based specification released in 2014.

The US Department of Energy, Intel, and Cray have signed a contract under which the two companies and the DOE’s Argonne National Laboratory will develop and build the Aurora supercomputer, a machine capable of a quintillion (10^18) floating-point operations per second. The deal is valued at more than $500 million, and the system is expected to be delivered sometime in 2021.

The Aurora machine will be based on Intel’s Xeon Scalable processors, the company’s upcoming compute accelerators based on the Xe compute architecture for datacenters, as well as next-generation Optane DC persistent memory. The supercomputer will rely on Cray’s 'Shasta' architecture featuring Cray’s Slingshot interconnect, which was announced at Supercomputing back in November. The system will be programmed using Intel’s oneAPI and will also use the Shasta software stack tailored for Intel.
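For context, oneAPI is built around a data-parallel C++ dialect based on the Khronos SYCL standard, which lets the same source code target CPUs and accelerators. Below is a minimal, hypothetical sketch of a SYCL-style vector addition to illustrate that programming model; the SYCL 2020 syntax and the offload target are assumptions for illustration only, not details of Aurora's actual software stack.

```cpp
// Illustrative sketch of a SYCL-style kernel offload (assumed SYCL 2020 syntax).
// Not code from Aurora's software stack; purely an example of the data-parallel
// C++ model that oneAPI builds on.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1 << 20;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // A default queue targets whatever device the runtime selects
    // (a GPU/accelerator if present, otherwise the host CPU).
    sycl::queue q;

    {
        // Buffers manage data movement between host and device.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            // One work-item per element; the runtime maps this onto the device.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // Buffer destructors wait for the kernel and copy results back to 'c'.

    std::cout << "c[0] = " << c[0] << " (expected 3)" << std::endl;
    return 0;
}
```

In principle, the same kernel could run on an integrated GPU, a discrete Xe-class accelerator, or fall back to the CPU, which is the kind of portability oneAPI is aiming for.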

Around two years ago the DOE started its Exascale Computing Project to spur development of hardware, software, and applications for exaFLOPS-class supercomputers. The organization awarded $258 million in research contracts to six technology companies: AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA. As it turns out, Intel’s approach was considered the most efficient one for the country’s first exascale supercomputer.

It is noteworthy that back in 2014 ANL’s Aurora supercomputer was supposed to be based on Intel’s Xeon Phi processors codenamed Knights Hill, produced using the company’s 10 nm process technology. The plan changed in 2017, when Intel canned Knights Hill in favor of a more advanced architecture (and also because its mainstream Xeon processors were approaching a Xeon Phi-like implementation). Apparently, Intel and its partners are now confident enough in the new chips to proceed with the project.

The Aurora supercomputer will be able to handle both AI and traditional HPC workloads. At present, Argonne National Laboratory says that, among other things, the machine will be used for cancer research, cosmological simulations, climate modeling, predicting drug response, and exploring various new materials.

“There is tremendous scientific benefit to our nation that comes from collaborations like this one with the Department of Energy, Argonne National Laboratory, industry partners Intel and Cray and our close association with the University of Chicago,” said Argonne National Laboratory Director Paul Kearns. “Argonne’s Aurora system is built for next-generation artificial intelligence and will accelerate scientific discovery by combining high-performance computing and artificial intelligence to address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials and further understanding the universe — and those are just the beginning.”

Sources: Intel, Argonne National Laboratory

Comments


  • webdoctors - Thursday, March 21, 2019 - link

    Do you know this? "most efficient one for the country’s first Exascale supercomputer."

    Curious what benchmarks were used to derive that statement.
  • blu42 - Thursday, March 21, 2019 - link

    They'll be using gen12 (AKA Xe) Intel GPUs -- must be leaps and bounds more efficient than stuffing xeons in racks, and about as efficient as using NVidia GPUs ; )
  • TeXWiller - Thursday, March 21, 2019 - link

    It's the first time I have heard that statement. The Aurora was probably late due to the unmentionable technical difficulties, so they bumped the project straight to the next stage of delivery.
  • mode_13h - Friday, March 22, 2019 - link

    Yes, exactly. I'd be surprised if there was enough transparency in their purchasing process to support that statement. We'll probably never know exactly why they chose Intel.
  • HStewart - Thursday, March 21, 2019 - link

    I would like to find some specs on these machines. It sounds like a combination of new technologies coming:

    Xeons based on a new architecture
    A new GPU architecture, which could be the reason why Phi is going away
    New Optane memory, which could explain why Intel does not need Micron anymore.
  • Yojimbo - Thursday, March 21, 2019 - link

    It was Micron that broke off the relationship with Intel, not the other way around. Micron and Intel are chasing different markets for NAND and 3D XPoint. Micron wants to make money through volume, while Intel can afford not to worry about the margins on the actual parts and instead treat them as part of the value of its entire platform.

    Phi went away because of a convergence of deep learning and HPC, I believe, and because Intel could never get much traction for it outside supercomputers. Intel was obviously already developing a discrete GPU, but in an interview with an Intel scientist that I read soon after the existence of the new A21 supercomputer came to light (without many public details), he said that Intel had been planning what would go into the A21 machine for a while, but that they had moved its schedule up in response to the cancellation of the original Aurora and the shutdown of the Xeon Phi line.
  • Laubzega - Thursday, March 21, 2019 - link

    More information here: https://www.nextplatform.com/2019/03/18/intel-to-t...
  • HStewart - Thursday, March 21, 2019 - link

    Interesting article, but a couple of comments on it:

    1. Sunny Cove should also show a single-thread performance increase because of enhancements to its multiple execution units.

    2. Gen 11 graphics are replacements for existing iGPUs and offer MX130-level or higher performance. Gen 12 is what the Xe graphics are, and it should offer much higher performance. I believe there will also be consumer-level parts. It must be a significant performance increase to shut down the Phi processors.

    3. It would be foolish to think Intel was not already working on fab changes for Optane on its own. I think Micron did not fit their needs.
  • Yojimbo - Friday, March 22, 2019 - link

    It doesn't matter what Intel was or wasn't doing. The choice was entirely Micron's to buy the fab. Micron only bought out the fab because they wanted to. Now I am sure that at some point it was clear between the two partners that they wouldn't work together any more, but that doesn't mean that Micron had to buy the plant. Micron bought the plant because they believe in their own future for 3D XPoint using their technology that is divergent from Intel's.

    A telling fact is that it is Micron that is changing from floating gate to charge trap for their 3D NAND while Intel is continuing with floating gate. Intel's perspective just did not fit with Micron's interests and Micron has plenty of money now to go their own way.

    The selling point of the Phi processors was their ability to run unmodified x86 code and also, supposedly, to benefit somewhat from slight modifications to standard code. But I think that to really use them as accelerators, the modifications that needed to be done were along the lines of what was needed for a GPU. GPUs outperformed Xeon Phi overall, I guess, because commercial customers never latched onto Xeon Phi much. Once Intel saw that happening, they would have known they needed a replacement for Xeon Phi. But I think the arrival of deep learning sounded the death knell for Phi earlier than it would otherwise have been sounded.

    As far as the performance of Xe goes, you have to take into account that what Intel can shove down people's throats is not the only factor to consider. The DOE was most likely not completely satisfied with the way the Aurora machine was shaping up. That would have spurred Intel to change strategies and dump Phi even if, at that point in time, they had an inaccurate idea of the performance they would get out of Xe.
  • blu42 - Friday, March 22, 2019 - link

    @Yojimbo, your guess re the Phi is quite close -- while it could run unmodified x86 code, its proper utilization required programming techniques arguably more convoluted than those for GPUs, thus it was regularly outperformed by similarly-TDP'd GPUs. Abstractly speaking, Phi had the worst of both x86 and GPU worlds.
