Power Consumption and Thermal Characteristics

The power consumption at the wall was measured with a 4K display being driven through the HDMI port of the system. In the graph below, we compare the idle and load power of the ASRock Industrial NUC BOX-1260P and 4X4 BOX-5800U with other systems evaluated before. For load power consumption, we ran the AIDA64 System Stability Test with various stress components, as well as our custom stress test with Prime95 / Furmark, and noted both the peak and the idle power consumption at the wall.

Power Consumption

The numbers are consistent with the TDP and suggested PL1 / PL2 values for the processors in the systems, and do not come as a surprise. With a 64W PL2, the NUC BOX-1260P has the highest load power consumption. The choice of components (such as a Gen 4 SSD) also contributes to the higher idle power numbers. The Cezanne system is not that far behind, with a peak of around 75W despite a much lower PL2. It is likely that the Ryzen 7 5800U treats the configured power range as only a suggestion, and tries to go much higher (as high as 54W) as long as thermal headroom is available.
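
For readers looking to replicate the at-wall logging described above, the sketch below shows one possible way to record readings over time. It assumes a hypothetical serial-attached power meter that streams one wattage value per line; the port, baud rate, and file name are placeholders, not the equipment used for this review.

    # Minimal at-wall power logger (hypothetical setup): assumes a power meter
    # that streams one wattage reading per line over a serial / USB connection.
    # Requires the third-party 'pyserial' package; the port, baud rate, and
    # file name below are placeholders.
    import csv
    import time

    import serial  # pyserial

    PORT = "/dev/ttyUSB0"          # placeholder device node for the meter
    BAUD = 9600                    # placeholder baud rate
    LOG_FILE = "at_wall_power.csv"


    def log_power(duration_s: float) -> None:
        """Record at-wall power readings to a CSV file for later graphing."""
        with serial.Serial(PORT, BAUD, timeout=2) as meter, \
                open(LOG_FILE, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "watts"])
            end = time.time() + duration_s
            while time.time() < end:
                line = meter.readline().decode(errors="ignore").strip()
                try:
                    watts = float(line)
                except ValueError:
                    continue   # skip empty or malformed readings
                writer.writerow([time.time(), watts])


    if __name__ == "__main__":
        log_power(duration_s=3600)   # one hour of logging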

Stress Testing

Our thermal stress routine is a combination of Prime95, Furmark, and FinalWire's AIDA64 System Stability Test. The following nine-step sequence is executed, starting with the system at idle (a scripted sketch of the sequence appears after the list):

  • Start with the Prime95 stress test configured for maximum power consumption
  • After 30 minutes, add Furmark GPU stress workload
  • After 30 minutes, terminate the Prime95 workload
  • After 30 minutes, terminate the Furmark workload and let the system idle
  • After 30 minutes of idling, start the AIDA64 System Stress Test (SST) with CPU, caches, and RAM activated
  • After 30 minutes, terminate the previous AIDA64 SST and start a new one with the GPU, CPU, caches, and RAM activated
  • After 30 minutes, terminate the previous AIDA64 SST and start a new one with only the GPU activated
  • After 30 minutes, terminate the previous AIDA64 SST and start a new one with the CPU, GPU, caches, RAM, and SSD activated
  • After 30 minutes, terminate the AIDA64 SST and let the system idle for 30 minutes
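
For reference, a rough scripted outline of the sequence is shown below. The executable names and the AIDA64 command-line switch are placeholders / assumptions rather than verified invocations, so treat it as a sketch of the timing, not a drop-in automation script.

    # Rough timing script for the nine-step sequence above. The executable
    # names and the AIDA64 command-line form are placeholders / assumptions;
    # substitute the paths and switches for your own installation.
    import subprocess
    import time

    STEP_SECONDS = 30 * 60  # each phase runs for 30 minutes


    def start(cmd):
        """Launch a stress workload and return its process handle."""
        return subprocess.Popen(cmd)


    def run_sequence():
        prime95 = start(["prime95.exe"])   # placeholder: Prime95, max power settings
        time.sleep(STEP_SECONDS)

        furmark = start(["furmark.exe"])   # placeholder: Furmark GPU stress
        time.sleep(STEP_SECONDS)

        prime95.terminate()                # CPU stress off, GPU stress continues
        time.sleep(STEP_SECONDS)

        furmark.terminate()                # everything off: 30-minute idle
        time.sleep(STEP_SECONDS)

        # AIDA64 SST phases mirroring the component selections in the list;
        # the '/SST <components>' switch shown here is an assumption.
        for components in ("cpu,cache,ram",
                           "gpu,cpu,cache,ram",
                           "gpu",
                           "cpu,gpu,cache,ram,disk"):
            sst = start(["aida64.exe", "/SST", components])
            time.sleep(STEP_SECONDS)
            sst.terminate()

        time.sleep(STEP_SECONDS)           # final 30-minute idle


    if __name__ == "__main__":
        run_sequence()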

This test traditionally recorded clock frequencies as well. However, with the increasing number of cores in modern processors and fine-grained clock control, frequency information clutters the graphs without contributing much to an understanding of the system's thermal performance. The focus is now on the power consumption and temperature profiles to determine whether throttling is in play.
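
As an illustration of how such profiles can be pulled out of a sensor log for plotting, the following sketch parses a CSV export (such as one generated by HWiNFO); the column headers used are assumptions and need to be matched to the actual export.

    # Sketch: pull package power and temperature series out of a CSV sensor
    # log (e.g., an HWiNFO export) for plotting. The column headers are
    # assumptions; adjust them to match the actual export.
    import csv

    POWER_COL = "CPU Package Power [W]"   # assumed header name
    TEMP_COL = "CPU Package [C]"          # assumed header name


    def load_profiles(path):
        """Return (power, temperature) lists from a CSV sensor log."""
        power, temperature = [], []
        with open(path, newline="", encoding="utf-8", errors="ignore") as f:
            for row in csv.DictReader(f):
                try:
                    power.append(float(row[POWER_COL]))
                    temperature.append(float(row[TEMP_COL]))
                except (KeyError, ValueError):
                    continue   # skip rows with missing or malformed values
        return power, temperature


    if __name__ == "__main__":
        p, t = load_profiles("sensor_log.csv")
        print(f"{len(p)} samples, peak power {max(p):.1f} W, "
              f"peak temperature {max(t):.1f} C")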

Our first look is at the NUC BOX-1260P. The power consumption and temperature plots against time are presented below.


The system is able to easily handle CPU-only and CPU+GPU loading. However, GPU-only loading ramps up the temperature close to the junction limit (105C), with the package power having to throttle slightly to keep the temperature in check. This behavior was not seen with AIDA64's system stress test components.
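
For readers processing their own logs, a simple heuristic along the lines below can flag stretches where the package is pinned near its junction limit while power backs off. The thresholds are illustrative and not the criteria used for the plots in this review.

    # Heuristic sketch for spotting thermal throttling in a logged run: flag
    # samples where the package temperature sits near the junction limit while
    # the package power falls below its running peak. The thresholds are
    # illustrative only.
    TJ_MAX_C = 105.0        # junction temperature limit for this processor
    TEMP_MARGIN_C = 3.0     # "close to the limit" margin (assumption)
    POWER_DROP_FRAC = 0.9   # power below 90% of the running peak (assumption)


    def throttling_samples(power, temperature):
        """Return indices where the heuristic suggests thermal throttling."""
        flagged, running_peak = [], 0.0
        for i, (p, t) in enumerate(zip(power, temperature)):
            running_peak = max(running_peak, p)
            near_limit = t >= TJ_MAX_C - TEMP_MARGIN_C
            power_dropped = running_peak > 0 and p < POWER_DROP_FRAC * running_peak
            if near_limit and power_dropped:
                flagged.append(i)
        return flagged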

Moving on to the 4X4 BOX-5800U, we first look at the temperature and power consumption plots for the system configured with the default BIOS options (Normal mode).


The CPU package power tracking (PPT) and package power consumption numbers reported by HWiNFO seem highly unreliable when set against the independently recorded at-wall power consumption. We therefore make our inferences based on the temperatures and the at-wall power numbers. It is clear that the system 'throttles' as soon as the package temperature hits 95C. The GPU-only loading scenario is not as stressful as it was for the Alder Lake-P system. It appears that the system is able to sustain 65W at the wall when both the CPU and GPU are loaded, while keeping the package temperature around 95C.
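
One way to catch such implausible sensor readings is to cross-check them against the at-wall log, as in the sketch below; the assumed conversion efficiency is only a ballpark figure for illustration.

    # Sketch: cross-check sensor-reported package power against the
    # independently logged at-wall numbers. The 80% conversion efficiency and
    # the assumption that the two series are time-aligned are for illustration
    # only.
    PSU_EFFICIENCY = 0.80   # assumed wall-to-board conversion efficiency


    def implausible_samples(package_w, wall_w, slack_w=5.0):
        """Return indices where the reported package power exceeds what the
        at-wall draw could plausibly deliver."""
        flagged = []
        for i, (pkg, wall) in enumerate(zip(package_w, wall_w)):
            if pkg > wall * PSU_EFFICIENCY + slack_w:
                flagged.append(i)
        return flagged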

Finally, we see the behavior of the same parameters when the same stress sequence is applied with the CPU operating mode set to 'Performance' in the BIOS.


The performance mode appears to set the sustainable at-wall power consumption at around 68W. The cooling solution is good enough for the configured PL1 / PL2 values, but it can't keep up with the 54W maximum target TDP of the Ryzen 7 5800U for extended durations.

Comments

  • yannigr2 - Friday, August 5, 2022 - link

    Nice review, thanks.
    Considering Intel's optimizations for 3D benchmarks, 1-2 games are a necessity for closer-to-real-life results.
  • Dante Verizon - Friday, August 5, 2022 - link

    Yes, some games and real world benchmarks...
  • PeachNCream - Saturday, August 6, 2022 - link

    Probably costs too much in terms of time and money to use real world applications. :(
  • xol - Friday, August 5, 2022 - link

    the color legend on the web/javascript seems to be messed up/makes no sense

    ..

    also I think to call the Xe gpu in the intel box as "new" is not quite right - it's not the new architecture (ie Arc) or even close, just shares the branding - it's clearly from the same series that gave us the HD 400 way back, and the HD 770 (such as found in the i5-12500) - the difference is that this box has 3x the EU (@96) compared to the i5-12500 (32 EU)... hence the good performance.
  • xol - Friday, August 5, 2022 - link

    postscript I'm just gonna say that using the old HD graphics architecture is no bad thing .. at least the drivers will work ! (joke's on Arc for now)
  • abufrejoval - Friday, August 5, 2022 - link

    Whatever the issues with ARC drivers might be, the iGPU drivers for Linux work quite well also with the newer Xe based variants. The worst I had to do was to force the i915 drivers to accept the unknown Xe PCI device ID via a boot parameter for the kernel.

    No issues on Windows 10/11 either, while there could be trouble with AMD GPUs on Windows server editions because AMD likes to save money on driver signatures there. I used to run Windows server on earlier APUs (Richland/Kaveri) and had to fiddle hard to get them working anyway.
  • deil - Sunday, August 28, 2022 - link

    +1 Xe seems to work nice, EXCEPT the 21.10->22.04 OS upgrade. I only had one of them (an 11400H), and it failed hard on the GPU driver to the point where, after BIOS, the integrated screen was completely unresponsive (external worked fine though). Purge -> reboot -> new installation fixed it. I always run proposed, but still that was unusual. I never had a screen just nope out and completely middle-finger me. Otherwise it's fine, but I personally feel like it's never under 50C and it's annoying to use for longer if there is any load. Might be Acer's fault, but I feel like all of Intel's are toasters now.
  • abufrejoval - Friday, August 5, 2022 - link

    I'm afraid there is no Thunderbolt in the Intel variant either... I checked all the references I could find.

    And that's really too bad, because for this to work as a µ-server I'd use the TB connector to attach 10GBase-T Ethernet based on the AQC107, e.g. as sold by Sabrent. The main advantage is really solid Linux support for years now, much better than for the various 2.5GbE variants from Realtek and Intel.

    The AQC113 chip is out there (hopefully fully AQC107 driver compatible), please ASRock add it to the base board on both devices and I shall buy one of each at least!

    2.5Gbit/s is a long overdue improvement over Gbit, but no longer adequate either. And while it will draw a few Watts when used at max speed (I think about 3 with Green Ethernet), these boxes aren't running on batteries.
  • abufrejoval - Friday, August 5, 2022 - link

    Ok, now finished reading the review ;-)

    I guess the 2nd set of PCIe x4 would become allocated to Thunderbolt on the Intel variant, if that's really working. AFAIK at least some re-timer chips or similar are both required and in short supply, which is why I'd want working proof.

    AFAIK the AQC113 can do 10GBase-T out of PCIe 4.0 x1 or PCIe 3.0 x2 (or even PCIe 2.0 x4, like the AQC107). So just by dropping the 2nd Gbit, they'd gain all the resources required on Cezanne.

    Yes, a couple of millimeters in height and a Noctua cooler would make all the difference in conjunction with open power and fan settings in the BIOS.

    Since these mainboards are so similar, perhaps somebody (even ASRock) could come up with an alternate chassis?

    I have zero software issues with various kinds of Linux on my 5800U based notebook, while I'm pretty sure all that P/E drama isn't yet sorted out in enterprise Linux. I'm pretty sure that E/P won't reduce the energy footprint on such a NUC in my operations, nor provide better performance under load.

    But with systems so closely matched, at least now I could find out.
  • ganeshts - Friday, August 5, 2022 - link

    Yes, the Type-C port close to the Type-A one is indeed Thunderbolt 4. I tested it out by connecting the Plugable Thunderbolt 4 Hub to it, and then connecting a Thunderbolt 3-only SSD to one of the downstream ports. I made sure that the TB3 SSD delivered PCIe performance with a quick CrystalDiskMark workload.

    ASRock also mentions it in their block diagram.
