Power Delivery Thermal Analysis

Power delivery specifications and capabilities have come under much more scrutiny in recent years, not just from manufacturers but as a result of user demand. Beyond the extra headroom that benefits overclocking, more efficient power delivery designs and cooling solutions aim to bring temperatures down. Although this isn't something most users ever need to worry about, enthusiasts are paying closer attention to each board's power delivery. The more premium models tend to include larger and higher-grade power deliveries with bigger, more intricate heatsink designs, and some even provide water blocks, such as the ASUS ROG Maximus Formula series and the ASRock Aqua range.

The 19-phase power delivery on the GIGABYTE Z590 Aorus Master (operating in 18+1)

Testing Methodology

Our method of testing whether the power delivery and its heatsink are effective at dissipating heat is to run an intensely heavy CPU workload for a prolonged period of time. We apply an overclock that is deemed safe and at the maximum our testbed processor's silicon allows. We then run a Prime95 torture test with AVX2 enabled for an hour at the maximum stable overclock we can achieve, which puts an intense load on the processor. We collect our data via three different methods (a short sketch of how we cross-check the three sources follows the list):

  • Taking a thermal image from a bird's-eye view after an hour with a FLIR One Pro thermal imaging camera
  • Securing two probes onto the rear of the PCB, directly underneath the CPU VCore section of the power delivery, for redundancy in case one probe reports a faulty reading
  • Taking a reading of the VRM temperature from the onboard sensor via the HWiNFO monitoring application
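
As a rough illustration of how the three sources can be cross-checked after a run, the sketch below compares the two rear-probe readings against the onboard sensor and flags disagreement. The readings, tolerance value, and variable names are illustrative assumptions, not our actual logged data.

    # Cross-check the two rear-PCB probes against the onboard VRM sensor.
    # All readings below are hypothetical samples taken at intervals during a run.
    probe_a = [68.2, 71.5, 73.1, 74.0, 74.4]   # rear-PCB probe 1 (deg C)
    probe_b = [67.9, 71.1, 72.8, 73.6, 74.1]   # rear-PCB probe 2 (deg C)
    sensor = [74.0, 79.5, 82.2, 84.0, 86.6]    # onboard VRM sensor via HWiNFO (deg C)

    PROBE_TOLERANCE = 2.0  # max disagreement between probes before a sample is suspect

    for a, b, s in zip(probe_a, probe_b, sensor):
        if abs(a - b) > PROBE_TOLERANCE:
            print(f"suspect probe pair: {a} vs {b}")
        probe_avg = (a + b) / 2
        # A large sensor-minus-probe gap suggests either that the heatsink is
        # pulling heat away from the rear of the PCB, or that the sensor reads high.
        print(f"sensor {s:.1f} C, probes {probe_avg:.1f} C, delta {s - probe_avg:+.1f} C")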

The reason for using three different methods is that some sensors can report inaccurate temperatures, which gives very erratic results for users trying to gauge whether an overclock is too much pressure for the power delivery to handle. Using a probe on the rear also indicates the efficiency of the power stages and heatsinks: a wide margin between the probe and sensor temperatures can show either that the heatsink is dissipating heat and the design is working, or that the internal sensor is wildly inaccurate. To ensure our probes were accurate before testing, we binned ten and selected the most accurate (within 1°C of a known reference temperature), as sketched below.
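
A minimal sketch of that binning step follows; the reference temperature and per-probe readings are made-up numbers purely to show the selection logic.

    # Bin ten thermal probes against a known reference temperature and keep
    # only those reading within 1 deg C; the best survivors go on the bench.
    REFERENCE_C = 50.0   # assumed reference temperature (hypothetical)
    TOLERANCE_C = 1.0

    probe_readings = {1: 50.3, 2: 51.8, 3: 49.6, 4: 48.7, 5: 50.9,
                      6: 52.1, 7: 49.2, 8: 50.1, 9: 47.9, 10: 50.6}

    accurate = {pid: temp for pid, temp in probe_readings.items()
                if abs(temp - REFERENCE_C) <= TOLERANCE_C}

    # Rank the survivors by deviation from the reference and pick the best two.
    best_two = sorted(accurate, key=lambda pid: abs(accurate[pid] - REFERENCE_C))[:2]
    print(f"{len(accurate)} of {len(probe_readings)} probes pass; using probes {best_two}")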

To recreate a real-world scenario, the system is built into a conventional, widely available desktop chassis. This avoids the issues we've encountered previously when testing on open testbeds, where there is no natural airflow channeled over the power delivery heatsinks. It provides a better comparison for the end user and allows us to distinguish heatsinks that have been designed with airflow in mind from those that have not. The purpose of a heatsink is to dissipate heat effectively, not to act as an insulator, and consumers have put much more focus on power delivery componentry and performance over the last couple of years than before.

For thermal imaging, we use a FLIR One Pro camera to indicate where heat is generated around the socket area, as designs use different configurations, and an evenly spread power delivery with good components will usually generate less heat. Boards whose manufacturers use inefficient heatsinks and cheap out on power delivery components should run hotter than those that have invested in them. Of course, a $700 flagship motherboard is likely to outperform a cheaper $100 model under the same testing conditions, but it is still worth testing to see which vendors are doing things correctly.

Thermal Analysis Results


We measured 86.6°C on the hottest part of the CPU socket during our testing

The GIGABYTE Z590 Aorus Master has a large 19-phase power delivery, split into an 18-phase design for the CPU with a single power stage for the SoC. The 18-phase CPU section uses eighteen Intersil ISL99390 90 A power stages, doubled up via nine Intersil ISL6617A doublers. The PWM controller of choice is the Intersil ISL69269, operating in a 9+1 configuration (18+1 after doubling). Cooling the power delivery is a pair of large, weighty heatsinks interconnected by a single heat pipe. The heatsinks themselves include many aluminum fins designed to catch and direct passive airflow when installed in a chassis.
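
To put rough numbers on that layout, the sketch below works through the phase math and an estimated per-stage load; the package power and load voltage are assumptions for illustration, not measurements from our testing.

    # Rough VRM arithmetic for the 18+1 layout: nine PWM phases from the
    # ISL69269 are each split by an ISL6617A doubler into two ISL99390 stages.
    pwm_phases = 9
    doubler_ratio = 2
    stage_rating_a = 90                      # each ISL99390 is rated for 90 A

    stages = pwm_phases * doubler_ratio      # 18 physical VCore stages
    capacity_a = stages * stage_rating_a     # 1620 A theoretical ceiling

    cpu_power_w = 250                        # assumed sustained package power
    vcore_v = 1.30                           # assumed load voltage

    cpu_current_a = cpu_power_w / vcore_v    # ~192 A total draw
    per_stage_a = cpu_current_a / stages     # ~10.7 A per stage

    print(f"{stages} stages, {capacity_a} A capacity, {per_stage_a:.1f} A per stage "
          f"({per_stage_a / stage_rating_a:.0%} of rating)")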

As we are still making our way through our stack of Z590 motherboards, it isn't easy to get an overall picture of power delivery thermal efficiency, or of the efficiency of the cooling designs themselves, from just a few results. Typically we would see cheaper and less efficient designs running hotter, while more expensive boards with large designs spread the load across sometimes as many as 20 phases.

Touching on VRM thermal performance, the GIGABYTE Z590 Aorus Master did run a little warmer than the other models we've tested so far. Despite the large 18-phase design for the CPU, it doesn't seem to match the efficiency of the direct-phase designs we've seen on GIGABYTE's other models. The GIGABYTE does run around 11 to 13°C cooler than the ASRock Z590 Steel Legend, but it's 9 to 10°C behind MSI's MEG Z590 Ace. The ASRock Z590 Taichi remains the only board so far with an active cooling system, but none of the boards we've tested has had poor VRM thermals. We expected more from GIGABYTE given what we've seen previously, but it's still competitive and well within what the specifications allow.


39 Comments


  • JVC8bal - Friday, April 30, 2021 - link

    I don't understand the point you're making in responding to what I wrote. This has nothing to do with AMD vs. Intel. I guess there is a MAGA-like AMD crowd on here looking for conspiracies and confrontations.

    As written above, the PCIe 4.0 specification implementation first found on X570 showed up on Intel's first go-around. If anything can be said, those working on Intel platform motherboards learned nothing from prior work on the AMD platform. But whatever, read things through whatever lens you do.
  • TheinsanegamerN - Friday, April 30, 2021 - link

    I thought it was more of a BLM-like Intel crowd that looks for any pro-AMD comment and tries to railroad it into the ground while dismissing whatever merit the original comment may have had.
  • TheinsanegamerN - Wednesday, April 28, 2021 - link

    I'm disappointed that these newer boards keep cutting down on I/O. This board only offers 3 PCIe x16 slots; the third is only x4, and the second cuts half the bandwidth from the first slot despite multi-GPU being long dead. So if you had, say, a sound card and a capture card, you'd have to cut your GPU slot bandwidth in half AND have one of the cards right up against the GPU cooler.

    IMO the best setup would have all the x1/x4 slots on the bottom of the motherboard so you can use a triple-slot GPU and still have 3 other cards with room between for breathing, with all the bottom slots fed from the chipset, not the CPU.

    And for those who are going to ask: "why do you want more expansion, everything is embedded now, blah blah". If you only have a GPU and don't use the other slots, that's why you have mini-ITX, or micro-ATX if you want a bigger VRM. Buying a big ATX board for a single expansion card is a waste.
  • abufrejoval - Thursday, April 29, 2021 - link

    While I am sure they'd love to sell you everything you're asking for, I'm less convinced you'd be ready to pay the price.

    You can't get anything but static CPU PCIe lane allocations out of a hard-wired motherboard, with bi/tri/quad-furcation already being a bonus. You need a switch on both ends for flexibility.

    That's what a PCH basically is, which allows you to oversubscribe the ports and lanes.

    In the old 2.0 days, PCIe switch chips were affordable enough ($50?) to put next to the CPU and gain multiple full x16 slots (still switched), though certainly not without a bit of latency overhead and some watts of power.

    All those PCIe switch chip vendors seem to have been bought up by Avago/Broadcom, who have jacked up prices, probably less because they wanted to anger gamers and more because these were key components in NVMe-based storage appliances, where they knew how much they could charge (mostly guessing here).

    And then PCIe 3.0 and 4.0 are likely to increase motherboard layout/trace challenges, switch chip thermals, or just generally the price, to the point where going for a higher lane-count workstation or server CPU may be more economical and deliver the full bandwidth of all lanes.

    You can get PCIe x16 cards designed to hold four or eight M.2 SSDs that contain such a PCIe switch. Their price gives you some idea of the silicon cost, while I am sure they easily suck 20 watts of power, too.

    If you manage to get a current-generation GPU with PCIe 4.0, that gives you PCIe 3.0 x16 equivalent performance even at x8 lanes. That's either enough, because you have enough VRAM, or PCIe 4.0 x16 won't be good enough either. At either 16 or 32 GB/s, PCIe is little better than a hard disk when your internal VRAM delivers north of 500 GB/s... because that's what it takes to drive GPU compute or the game.
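
    Quick back-of-the-envelope numbers behind that, assuming roughly 2 GB/s usable per PCIe 4.0 lane (real-world figures vary slightly):

        # Approximate usable bandwidth per lane, in GB/s (assumed round numbers).
        GBS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

        def link_bandwidth_gbs(gen: str, lanes: int) -> float:
            return GBS_PER_LANE[gen] * lanes

        print(f"PCIe 4.0 x8:  {link_bandwidth_gbs('4.0', 8):.1f} GB/s")
        print(f"PCIe 3.0 x16: {link_bandwidth_gbs('3.0', 16):.1f} GB/s")
        print(f"PCIe 4.0 x16: {link_bandwidth_gbs('4.0', 16):.1f} GB/s")
        # Compare with ~500 GB/s of on-card VRAM bandwidth: even a full 4.0 x16
        # link moves data at a small fraction of what the GPU consumes internally.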

    The premium for the ATX form factor vs a mini ITX is pretty minor and I couldn't care less how much of the tower under my desk is filled by the motherboard. I tend to go with the larger form factors quite simply because I value the flexibility and the ability to experiment or recycle older stuff. And it's much easier to manage noise with volume.
  • TheinsanegamerN - Friday, April 30, 2021 - link

    Boards like the GIGABYTE X570 Elite exist, which have a plethora of USB ports and multiple additional expansion slots, none of which sap bandwidth from the main port.

    This Master is a master class in taking money for looking "cool" while offering nothing of value.
  • Spunjji - Thursday, April 29, 2021 - link

    Agreed, that layout is a big mess and rather defeats the point of having an ATX board - but then a huge number of these are just going to go into systems that have one GPU and nothing else, but the buyer wants ATX just because that's what they're used to 🤷‍♂️
  • Linustechtips12#6900xt - Thursday, April 29, 2021 - link

    AGREED, my B450M Pro4 has like 4 USB 3, 1 USB-A 10 Gbps, 1 USB-C 10 Gbps, and 2 USB 2.0. Frankly amazing I/O and I couldn't appreciate it more
  • Molor1880 - Thursday, April 29, 2021 - link

    Not completely the motherboard's fault, though. There are only 20 PCIe 4.0 lanes from the CPU: 4 for I/O and 16 for graphics. There are no general-purpose PCIe 4.0 lanes off the Z590 chipset, and the DMI link is wider but still just PCIe 3.0. When Intel starts putting general-purpose PCIe 4.0 lanes on the chipset (690?), a lot of those issues should be resolved. Otherwise, it's a bit of a wonky workaround to shift things for one generation.
  • Silver5urfer - Wednesday, April 28, 2021 - link

    Unfortunately, GB's BIOS is not that stellar. And why does this mobo have a fan to cool the 10G LAN chip? I don't see that on some other boards like the X570 Xtreme, X570 Prestige Creation, and Maximus XIII Extreme.
  • TheinsanegamerN - Thursday, April 29, 2021 - link

    Gigabyte's BIOS is fine; the UI is a tad clunky, but hey, it's a huge leap from BIOSes of the Core 2 era. Just takes a little getting used to.
