The best thing about manufacturing Field Programmable Gate Arrays (FPGAs) is that you can make the silicon very big. The repetitive nature of the unit design can absorb defects in a process technology, and as a result FPGAs are often the largest silicon dies to enter the market on a given manufacturing process. When you hit the limit of how big a single piece of silicon can be made (known as the reticle limit), the only way to get bigger is to connect multiple pieces of silicon together. Today Intel is announcing its latest ‘large’ FPGA, and it comes with a pretty big milestone in its connectivity technology.
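For context on that reticle limit: modern lithography scanners expose a field of roughly 26 mm × 33 mm, so a single die tops out near 858 mm². The sketch below uses common industry approximations rather than Intel-published figures:

```python
# Back-of-envelope reticle math. Field dimensions are common
# industry approximations, not Intel-published figures.
RETICLE_W_MM = 26.0   # typical maximum scanner field width
RETICLE_H_MM = 33.0   # typical maximum scanner field height

reticle_limit_mm2 = RETICLE_W_MM * RETICLE_H_MM
print(f"Single-die reticle limit: ~{reticle_limit_mm2:.0f} mm^2")

# Past that limit, the only way to scale is multiple dies in one
# package, bridged together -- which is what EMIB enables here.
for dies in (1, 2, 4):
    print(f"{dies} die(s): up to ~{dies * reticle_limit_mm2:.0f} mm^2 of silicon")
```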

One of the elements driving this industry forward is packaging technology. We’ve covered in detail elements like TSMC’s 2.5D Chip-on-Wafer-on-Substrate (CoWoS) packaging used in GPUs, Intel’s Embedded Multi-die Interconnect Bridge (EMIB), and stacking technology like Foveros, and as the industry migrates to smaller chiplet-based silicon, each will become crucial to finding the best way to produce the end chip that goes into a million systems.

Despite Intel’s best diagrams of its EMIB technology, showing many die from multiple different process nodes connected together, one major milestone had eluded the company. Until this point, all we had seen EMIB do was connect one high-powered die, like a GPU, to a low-powered die, like HBM. The criticism of only ever seeing Intel products connect one high-powered and one low-powered die was the open question of whether an EMIB connection was thermally stable enough to withstand power cycling between two high-powered die.


One of Intel's mock-ups of how future processors might look

When connecting two die together on a substrate, especially high-powered die with vias or a BGA design, mechanical stresses have to be taken into account, especially if different metals are at play. Thermal expansion and contraction is a critical point of failure, especially in embedded and long life-cycle designs. It is not just the expansion and contraction of the metals: when the organic substrate holding the packaging technology has to be made extremely thin, long-term reliability concerns rise severely, especially if high-powered die are being connected.
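To put a rough number on the problem, linear thermal expansion follows ΔL = α·L·ΔT. The sketch below uses textbook coefficients for copper and silicon and an assumed span and temperature swing; none of these figures come from Intel:

```python
# Linear thermal expansion: delta_L = alpha * L * delta_T.
# CTE values are textbook approximations; the span and temperature
# swing are illustrative assumptions, not Intel figures.
CTE_COPPER = 17e-6     # per degC, typical for copper
CTE_SILICON = 2.6e-6   # per degC, typical for silicon

span_mm = 10.0   # assumed distance across an interconnect region
delta_t = 80.0   # assumed power-cycling temperature swing, degC

mismatch_um = (CTE_COPPER - CTE_SILICON) * span_mm * delta_t * 1000
print(f"~{mismatch_um:.1f} um of differential movement per thermal cycle")
# ~11.5 um, which is large relative to fine microbump pitches --
# hence the concern about power cycling across a bridge between
# two high-powered die.
```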

With Intel’s new FPGA, the Stratix 10 GX 10M, that concern seems to have disappeared. This new product, designed as a big FPGA for the ASIC prototyping and emulation market, combines two large 5.1-million logic element FPGA dies with three EMIB connections between them, producing an overall chip with a TDP ranging from 150 W up to 400 W with advanced cooling.
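To make the topology concrete, here is a minimal sketch of how the package composes (the class and names are my own illustration; the logic element counts, EMIB counts, and the ~75-150 W per-die range quoted later in this article are the source figures):

```python
from dataclasses import dataclass

@dataclass
class FpgaDie:
    logic_elements: int   # per-die logic capacity
    tdp_watts: float      # per-die power (article cites ~75-150 W each)

# Two identical large dies, per Intel's description.
die = FpgaDie(logic_elements=5_100_000, tdp_watts=150.0)  # upper end

DIE_TO_DIE_EMIB = 3    # the milestone: EMIB between two hot dies
TRANSCEIVER_EMIB = 4   # EMIB links out to transceiver tiles

print(f"Package logic elements: {2 * die.logic_elements:,}")      # 10,200,000
print(f"Total EMIB links: {DIE_TO_DIE_EMIB + TRANSCEIVER_EMIB}")  # 7
```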


A total of seven EMIB connections, but it's the three in the middle that are the milestone

The ASIC prototyping and emulation market, while small in revenue terms (Intel stated ~$300-500M/yr), is always requesting bigger and bigger FPGAs, in order to fit more and more of an ASIC design onto as few FPGAs as possible and get the most accurate results. These chips ultimately end up running at a low frequency for accuracy, anywhere from 50 MHz to 300 MHz, but Intel states that this new Stratix 10 GX 10M design can easily replace four of its older GX 2800 FPGAs, with double the connectivity and even a 40% power reduction for the same workload.
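A quick sanity check on that consolidation claim, using the GX 2800's published ~2.8M logic elements and leaving the old four-chip power symbolic, since Intel gave a relative rather than absolute figure:

```python
# Consolidation claim, checked against published logic-element counts.
GX_2800_LES = 2_800_000
GX_10M_LES = 10_200_000

print(f"4x GX 2800 = {4 * GX_2800_LES:,} LEs vs 1x GX 10M = {GX_10M_LES:,} LEs")
# -> 11,200,000 vs 10,200,000: raw capacity is roughly comparable
#    (emulation designs rarely map at 100% utilization), so the win
#    is in board space, connectivity, and power.

def gx10m_power(four_chip_watts: float) -> float:
    """Intel's stated 40% reduction for the same workload."""
    return four_chip_watts * (1 - 0.40)

# Hypothetical baseline, purely for illustration:
print(f"e.g. a 600 W four-chip setup -> ~{gx10m_power(600):.0f} W")
```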

The FPGA is built around those two 5.1M logic element dies, connected together with three EMIB connections. These use the AIB protocol running at over 1 GHz, and form part of the 25,920 connection pins across the whole chip, which also has another four EMIB connections out to transceivers, as shown in the diagram.
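For a sense of scale, here is a hedged upper-bound estimate of the aggregate bandwidth those pins imply. It assumes every pin carries one bit per clock at the quoted 1 GHz; the true per-pin AIB signalling rate, and the split between die-to-die and transceiver pins, were not broken out:

```python
# Hedged aggregate-bandwidth upper bound for the chip's EMIB pins.
# Assumption: ~1 bit per pin per clock at the quoted 1 GHz -- the
# actual AIB per-pin rate may differ.
TOTAL_PINS = 25_920
CLOCK_GHZ = 1.0

aggregate_tbps = TOTAL_PINS * CLOCK_GHZ / 1000  # Gb/s -> Tb/s
print(f"~{aggregate_tbps:.1f} Tb/s aggregate under these assumptions")
```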

At 10.2 million logic elements, this FPGA eclipses the Xilinx VU19P announced in August, which offered 9 million elements (8,172K flip-flops, 4,086K LUTs). The Stratix 10 GX 10M also contains 6912 DSP blocks and 48 transceivers at 17.4 Gbps. Intel states that these are designed primarily for PCIe 3.0/4.0 support, and that the FPGA offers H-tiles for connectivity for customers interested in custom designs.
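The transceiver side is straightforward to total up, since both the lane count and the per-lane rate are quoted:

```python
# Aggregate serial I/O from the quoted transceiver spec.
LANES = 48
RATE_GBPS = 17.4  # per lane, per Intel

total_gbps = LANES * RATE_GBPS
print(f"Total transceiver bandwidth: {total_gbps:.1f} Gb/s "
      f"(~{total_gbps / 8:.0f} GB/s)")  # 835.2 Gb/s, ~104 GB/s
```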

The launch of this new hardware coincides with Intel’s FPGA Tech Event in China, which is one of the primary markets for this product. Intel states that the hardware has already been with key partners for almost a year (one of the early customers is China-based), but is now in production for the wider market. On the topic of the high demand for Intel’s 14nm capacity, the company stated that volumes aren’t that high for this sort of product, so it won’t have any supply issues. The company (at least, the FPGA part of the company) did explain that using EMIB in this fashion means its two-die approach assists with yield.

Personally, I see the fact that Intel is strapping two high-powered die (~75 W to ~150 W each) together using multiple EMIB connections as a key step in driving EMIB technology to the wider market. With this as a proof of concept, it paves the way for better multi-die CPU designs, as well as the promise of EMIB (and Foveros) in future discrete GPU products.

Source: Intel

Comments

  • Teckk - Wednesday, November 6, 2019

    So a competition ceases to be one when a player involved stops for whatever reason, and so no one wins. Got it.
    Is it AMD's problem that Intel has fab issues? Should TSMC have halted their progress until Intel fixed their issues?
    AMD is nowhere near ruling the world anytime soon, and I don't think the OP meant that either, but they have good products. If you say TSMC's fabbing plus AMD's design is inferior to Intel's, then it's a surprisingly good competition for such a product.
  • prime2515103 - Wednesday, November 6, 2019

    "So that is a hollow victory when you overtake your competitor when he is unable to compete."

    So, any dominance Intel has had over AMD has been hollow because AMD couldn't compete due to a lack of the kind of R&D resources Intel has. Got it.
  • patrickjp93 - Wednesday, November 6, 2019

    AMD had plenty of R&D resources and squandered them. They stuck to 28nm for ages when 20nm was being produced (could have made Excavator mobile-competitive and improved their Opterons too). AMD cut corners with Bulldozer to ridiculous extremes and paid the price.
  • Spunjji - Wednesday, November 6, 2019

    This is an ill-informed comment. Firstly, AMD's R&D had nothing to do with process technology by the 28nm era because they'd already spun it off to GlobalFoundries.

    Secondly, GloFo (to whom AMD were contractually bound at that point) cancelled their 20nm node, which AMD could do nothing about.

    Thirdly, TSMC and Samsung did have a 20nm node, but in both cases it was fairly awful because they hadn't integrated FinFET technology, so the leakage (already not great at 28nm) was atrocious. This was just for low-power mobile chips - for high-power desktop chips it would have been a major regression. It's noteworthy that AMD's graphics division and Nvidia's desktop GPU division both stayed the hell away from that node.

    So - even if AMD had somehow been able to prevent GloFo canning 20nm and/or found a way to jump ship to TSMC without breach of contract, it wouldn't have been of any use whatsoever for their products.
  • patrickjp93 - Wednesday, November 6, 2019

    It's a perfectly informed comment. AMD CHOSE not to move from GloFo 28nm to TSMC 20nm when it became available to at least give them some hope in mobile space. Yes, AMD would have had to pay some cash to get out, but they could have negotiated from a position of strength. GloFo is now next to dead without AMD's business.

    20nm's leakage was very low and tight, which made it terrible for high-clock chips, but the mobile game in CPUs and SOCs was maxing in the low to mid 3GHz range, so actually the low leakage was good for temperatures and efficiency, especially with HDL in play.

    Nvidia stayed away because they already had their custom 16nm (12nm) plans with TSMC in negotiation at that point, and AMD was failing spectacularly on both CPU and GPU fronts (again, mostly from cutting corners on the CPU side, which destroyed competitiveness, sales, and thus cashflow for further R&D).

    And you skipped the core of my argument. AMD cut every corner they could on Bulldozer and failed massively for it. Many of the seniors who left said Excavator was well within design capability back at the start of the Construction Core family, and in fact the architecture never came close to the original planned capability because of poor management. The problem was their penny-pinching CEO. If Sandy Bridge had been up against something roughly Excavator class from the start, AMD would be in a much better position today.
  • prime2515103 - Wednesday, November 6, 2019

    So, if the problem is with engineering incompetence, the victory is hollow, but when it's from incompetent decision making, it's not. I'll write that down.
  • yeeeeman - Wednesday, November 6, 2019

    One more thing: Intel bought Altera for ~$17B. That is triple what AMD is worth. How could AMD have done anything but leave that to Intel?
  • flgt - Wednesday, November 6, 2019

    AMD has no expertise in FPGA design or design of the required supporting toolchains, so that's pretty much a non-starter.
  • Eliadbu - Wednesday, November 6, 2019

    There are 2 primary players in this market:
    Altera (Intel) and Xilinx. Getting into the FPGA market would take a lot of resources and would have AMD facing two well-established competitors with a lot of technology and IP, and the TAM is not even close to justifying the R&D and the risk of getting into this market. I would even say AMD would get better value going into the ARM and RISC-V markets than into FPGAs.
  • FreckledTrout - Wednesday, November 6, 2019

    I disagree. AMD is so small they need focus. They focus on server, desktop, and mobile, along with graphics; that should be enough for now.
