CPU Benchmark Performance: Simulation And Rendering

Simulation and Science have a lot of overlap in the benchmarking world; however, for this review we separate them into two segments, mostly based on the utility of the resulting data. The benchmarks that fall under Science have a distinct use for the data they output, whereas the tests in our Simulation section act more like synthetics, although at some level they are still trying to simulate a given environment.

We are using DDR5 memory at the following settings:

  • DDR5-4800(B) CL40

Simulation

(3-1) DigiCortex 1.35 (32k Neuron, 1.8B Synapse)

(3-2a) Dwarf Fortress 0.44.12 World Gen 65x65, 250 Yr

(3-2b) Dwarf Fortress 0.44.12 World Gen 129x129, 550 Yr

(3-2c) Dwarf Fortress 0.44.12 World Gen 257x257, 550 Yr

(3-3) Dolphin 5.0 Render Test

(3-4a) Factorio v1.1.26 Test, 10K Trains

(3-4b) Factorio v1.1.26 Test, 10K Belts

(3-4c) Factorio v1.1.26 Test, 20K Hybrid

In our simulation tests, the combination of high core frequency and better IPC gives Intel's 12th Gen Core series the advantage in most situations.
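As a rough illustration of why those two factors dominate in these lightly-threaded simulation workloads, the sketch below simply multiplies IPC by clock frequency to estimate per-core throughput. The IPC and frequency figures are hypothetical placeholders, not measured values from this review.

```python
# Minimal sketch: lightly-threaded simulation throughput scales roughly with
# per-core speed, i.e. IPC x clock frequency. All figures below are
# hypothetical placeholders, not measurements from this review.

def per_core_throughput(ipc: float, freq_ghz: float) -> float:
    """Approximate billions of instructions retired per second on one core."""
    return ipc * freq_ghz

# Hypothetical comparison: a higher-IPC, higher-clocked core vs. an older one.
new_core = per_core_throughput(ipc=5.0, freq_ghz=4.9)
old_core = per_core_throughput(ipc=4.3, freq_ghz=4.7)
print(f"Estimated per-core advantage: {new_core / old_core:.2f}x")
```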

Rendering

(4-1) Blender 2.83 Custom Render Test

(4-2) Corona 1.3 Benchmark

(4-3a) Crysis CPU Render at 320x200 Low

(4-3b) Crysis CPU Render at 1080p Low

(4-3c) Crysis CPU Render at 1080p Medium

(4-4) POV-Ray 3.7.1

(4-5) V-Ray Renderer

(4-6a) CineBench R20 Single Thread

(4-6b) CineBench R20 Multi-Thread

(4-7a) CineBench R23 Single Thread

(4-7b) CineBench R23 Multi-Thread

Looking at performance in the rendering section of our test suite, both the Core i7 and Core i5 performed creditably. The biggest factor to consider here is that a higher core and thread count, combined with strong IPC, translates directly into more rendering power.
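To put that scaling argument in rough numbers, here is a minimal Amdahl-style sketch of how core count and per-core speed (IPC x clock) combine in a near-perfectly parallel renderer. Every figure in it, including the assumed parallel fraction, is a hypothetical assumption rather than data from our testing.

```python
# Minimal sketch: tile-based renderers parallelize almost perfectly, so
# throughput scales close to cores x per-core speed (IPC x clock) minus a
# small serial fraction. All numbers are hypothetical assumptions.

def render_throughput(cores: int, ipc: float, freq_ghz: float,
                      parallel_fraction: float = 0.98) -> float:
    """Amdahl's-law estimate of relative rendering throughput."""
    per_core = ipc * freq_ghz
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return per_core * speedup

eight_core = render_throughput(cores=8, ipc=5.0, freq_ghz=4.7)
sixteen_core = render_throughput(cores=16, ipc=4.5, freq_ghz=4.5)
print(f"16-core vs. 8-core estimate: {sixteen_core / eight_core:.2f}x")
```

With these made-up figures the 16-core part comes out roughly 1.5x ahead despite its lower assumed clocks and IPC, which is why rendering workloads tend to favor core count.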

Comments

  • Otritus - Wednesday, March 30, 2022 - link

    Chiplets should not give much of a pricing advantage. Going from monolithic dies to chiplets adds manufacturing, validation, and design costs. You save money from the chiplets having better yields and being able to reuse chiplets (like AMD did with their IO die). Intel’s consumer processors aren’t that big and yields are good, so it’s probably a wash overall in pricing (excluding the money Intel saves from in-house manufacturing). Chiplets would be a competitive advantage for big processors (EPYC vs Xeon or Navi 31 vs AD102) because of yields.
  • vlad42 - Friday, April 1, 2022 - link

    Nonsense, chiplets absolutely provide a price advantage once you are using more than one CPU chiplet. There's a reason why AMD has done so well in the HEDT, workstation, and server markets. The chiplets allow them to provide more cores for less money than Intel (the yields on those high core count monolithic chips are abysmal by comparison).
  • Qasar - Friday, April 1, 2022 - link

    I've read somewhere that's part of the reason why Intel can't offer more cores than they do, and why they also went with the P/E core setup: they just can't make the CPUs that big with the big cores.
  • mode_13h - Saturday, April 2, 2022 - link

    Die size gets expensive as you scale it up, because not only do you get fewer chips/wafer, but yield becomes a major factor. This is even more true on a young process, such as "Intel 7", and why smaller chips tend to be the first to utilize them.

    In one way or another, cost is *always* a factor, whenever companies are deciding on the parameters of a new chip. (A rough yield-and-cost sketch follows at the end of this comment thread.)
  • Qasar - Saturday, April 2, 2022 - link

    I think the other reason was power consumption. If Intel uses this much power with, what, 10 P-cores (and no E-cores), I don't want to know what 12 or even 16 cores would use.
  • mode_13h - Sunday, April 3, 2022 - link

    > If Intel uses this much power with, what, 10 P-cores (and no E-cores),
    > I don't want to know what 12 or even 16 cores would use.

    That's not necessarily a direct tradeoff. They could cap the power (i.e. clocks) and still deliver more performance than 8 P-cores.

    Of course, the other thing they could do is trade more die area & clock speed for a uArch with higher IPC, like Apple's. But die area costs $.

    The sad reality (for the planet) is that the winning strategy seems to be making less-sophisticated cores that simply clock really high. That gets you winning headline numbers on lightly-threaded benchmarks. Then, wedge in some throughput-optimized cores, so you can also do well on heavily-threaded workloads.
  • Qasar - Sunday, April 3, 2022 - link

    Just saw a review of the 12900KS. To me, when a reviewer writes "the Core i9-12900KS represents Intel throwing value and power consumption out the window in a no-holds-barred attempt to retain the performance crown, particularly in gaming," it screams desperation. It looks like doing this is one of the only ways Intel is able to compete.
  • Mike Bruzzone - Tuesday, April 5, 2022 - link

    @Otritus, I agree MCMs trade off packaging cost for monolithic die fabrication cost. Moving to Systems in Package in the near term will eclipse their dice fabrication costs; "manufacturing, validation, and design costs". mb
  • Khanan - Thursday, April 7, 2022 - link

    Last time I checked, the 5950X still had more performance despite being challenged by a 250 W auto-OC 12900K. I mean, imagine: you can easily enable auto OC on the 5950X by activating PBO too. And yep, it crushes the 12900K then. Reviews that don't reflect this are, as per usual, kinda trashy.
  • mode_13h - Wednesday, March 30, 2022 - link

    > They have 90-95% of the performance for less than half, or in some cases, 1/3rd the power.

    I don't see how you can draw that conclusion, when the article only lists *peak* power. What's needed is joules per test, for the fixed-size tests.
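As a footnote on the joules-per-test point raised in the last comment, here is a minimal sketch of how such a metric could be computed from a sampled power trace; the (time, watts) samples are made-up values for illustration, not measurements from this review.

```python
# Minimal sketch: energy per test = power integrated over the run, which can
# differ a lot from the peak power figure. The samples below are made up.

def joules_from_samples(samples: list[tuple[float, float]]) -> float:
    """Trapezoidal integration of (time_s, power_w) samples -> energy in joules."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)
    return total

# Hypothetical fixed-size workload: peaks at 240 W but finishes in ~61 seconds.
trace = [(0.0, 60.0), (1.0, 230.0), (30.0, 240.0), (60.0, 235.0), (61.0, 70.0)]
print(f"Peak: {max(p for _, p in trace):.0f} W, "
      f"energy used: {joules_from_samples(trace):.0f} J")
```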
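And to make the die-size and yield argument from earlier in the thread concrete, here is a minimal sketch using the textbook Poisson yield model and a common dies-per-wafer approximation. The wafer size, defect density, and die areas are illustrative assumptions, not figures for any actual Intel or AMD product.

```python
# Minimal sketch of why large monolithic dies get disproportionately expensive:
# fewer candidate dies per wafer, and a lower fraction of them yield.
# Defect density and die areas below are illustrative assumptions only.
import math

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_MM2 = 0.001  # assumed: 0.1 defects per cm^2

def dies_per_wafer(die_area_mm2: float) -> float:
    """Standard approximation accounting for edge losses."""
    radius = WAFER_DIAMETER_MM / 2.0
    return ((math.pi * radius ** 2) / die_area_mm2
            - (math.pi * WAFER_DIAMETER_MM) / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2: float) -> float:
    """Fraction of dies with zero fatal defects under a Poisson model."""
    return math.exp(-DEFECT_DENSITY_PER_MM2 * die_area_mm2)

def relative_cost_per_good_die(die_area_mm2: float) -> float:
    """Wafer cost (normalized to 1.0) divided across the good dies."""
    return 1.0 / (dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2))

for area in (80, 200, 600):  # small chiplet, mid-size client die, big monolithic die
    print(f"{area:>4} mm^2: yield {poisson_yield(area):5.1%}, "
          f"relative cost per good die {relative_cost_per_good_die(area):.5f}")
```

Under these assumed numbers, the 600 mm^2 die ends up costing roughly fifteen times as much per good die as the 80 mm^2 chiplet despite having only 7.5 times the area, which is the effect described in the thread.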
