CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different forms as well, from 3D rendering via rasterization, as in games, to ray tracing, and tests the ability of the software to manage meshes, textures, collisions, aliasing, physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, some use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a generated scene as a standard test under its 1.3 software version. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a rate-based result typically makes performance scaling easier to understand at a glance.
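As a sketch of how such an average is taken, a small wrapper can run the command line tool repeatedly and average its output. The command name and the single-number output format here are assumptions for illustration; the real Corona CLI's output format is not documented in this article.

```python
import statistics
import subprocess

def average_benchmark(cmd, runs=6):
    """Run a CLI benchmark several times and average its numeric output.

    Assumes the command prints a single number (e.g. rays per second)
    as the last line of its stdout -- the real tool's format may differ.
    """
    results = []
    for _ in range(runs):
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        results.append(float(out.stdout.strip().splitlines()[-1]))
    return statistics.mean(results)
```

A hypothetical call such as `average_benchmark(["corona-cli"])` would then yield the six-run average that gets reported.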

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Interestingly, both 9900KS settings performed slightly worse than the 9900K here, which you wouldn't expect given the higher all-core turbo. It would appear that something other than frequency is the bottleneck in this test.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard ‘bmw27’ scene in CPU-only mode - and measure the time to complete the render.
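A headless run of that sub-test can be timed from a script along these lines. The .blend file path is an assumption, and `--background` / `--render-frame` are standard Blender CLI flags; the exact invocation used for the article's suite may differ.

```python
import subprocess
import time

def time_render(cmd):
    """Return the wall-clock seconds a render command takes to complete."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical invocation: render frame 1 of the bmw27 scene headless.
bmw27_cmd = ["blender", "--background", "bmw27.blend", "--render-frame", "1"]
```

With Blender on the PATH and the scene file present, `time_render(bmw27_cmd)` would return the time-to-complete figure the article reports.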

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

All the 9900 parts and settings perform roughly the same as one another; however, the PL2 255W setting on the 9900KS does give it a small ~5% advantage over the standard 9900K.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result that is essentially an average ‘kilorays per second’.
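How that average figure reduces can be sketched in one line: the total number of rays traced over the whole two-minute run, divided by the elapsed time, scaled to kilorays (the raw counts here are made-up examples, not LuxMark output).

```python
def kilorays_per_second(total_rays, elapsed_seconds):
    """Average ray throughput over the whole run, in kilorays per second."""
    return total_rays / elapsed_seconds / 1_000
```

For example, 24 million rays traced over a 120-second run works out to 200 kilorays per second.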

LuxMark v3.1 C++

Both 9900KS settings perform equally well here, showing a sizeable jump over the standard 9900K.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, one that was in a state of relative hibernation until AMD released its Zen processors, after which both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

One of the biggest differences between the two power settings shows up in POV-Ray, with a marked frequency difference. In fact, the 159W setting on the 9900KS puts it below our standard settings for the 9900K, which likely had a big default turbo budget on the board it was tested on at the time.

Comments

  • Opencg - Thursday, October 31, 2019 - link

    People fail to consider other use cases. For competitive gaming or someone running 240hz 1080p with a high end gpu and willing to tweak settings to make their games cpu bound this is still the best cpu. Unfortunately not all testers optimize their cpu tests to be cpu bound in games. But if you look at the ones that do intel still poops on amd. Sure most gamers dont give a shit about fps above 160 or so but some do. When I ran overwatch I tweaked the config file and ran 400fps. If I was running csgo I would push the fps as high as possible as well.
    Also imo the biggest use case for amd cpus for gamers is futureproofing by having more cores. Most gamers are just gonna play their games with a few tabs open and maybe some music and discord running. Not everyone is running cpu based streaming encoding at the same time.
  • Galid - Thursday, October 31, 2019 - link

    Well I don't seem to notice the same thing you do for max fps in games where you need 240hz for example. At most, I can see 10 to 15 fps difference in counter strike at around 400fps. I looked around and found a lot of tests/benchmarks. There is no such thing as "this is the best cpu and you'll notice a difference in the games that matter for competitive gaming". I might be wrong, if so, enlighten me please. I'm about to buy a new gaming rig and like 99.98% of the population, I'm not a competitive gamer. I don't consider streaming as competitive either.

    But, in ubisoft's single player games, I noticed it does help to get closer to the 120hz at resolution and details that matters for these non-competitive games.
  • Galid - Thursday, October 31, 2019 - link

    BTW I compared ryzen 7 3700x and i9 9900k and got to the above conclusion.
  • eek2121 - Friday, November 1, 2019 - link

    Look at the 95th percentiles. Ignore average fps. AMD and Intel are virtually tied in nearly every game. I cannot believe we have reached this point. Finally after a decade, AMD is back in business.
  • evernessince - Friday, November 1, 2019 - link

    You do realize that running your CPU or GPU at 100% max utilization increases input lag, correct? FPS isn't the only thing that matters. If the CPU cannot process new inputs in a timely manner because it's too busy with the GPU, then the whole action of increasing your FPS was pointless. You should cap your FPS so that neither your CPU nor GPU exceeds 95% utilization. For the CPU this includes the core/cores that the game is running on. You lose maybe a handful of FPS by doing this but ensure consistent input lag.
  • CptnPenguin - Friday, November 1, 2019 - link

    Not sure how you managed that. The engine hard cap for Overwatch is 300 FPS.
  • eek2121 - Friday, November 1, 2019 - link

    Not true. AMD has the entire market pretty much cornered, though. So it doesn't matter whether you buy high end or mid range, Intel chips in general are a bad choice currently. Intel desperately needs to rethink their strategy going forward.
  • bji - Thursday, October 31, 2019 - link

    Well kudos for at least admitting that you are a blind fanboy early in your post.
  • Slash3 - Thursday, October 31, 2019 - link

    WCCFTech's comment section keeps leaking.
  • Sivar - Thursday, October 31, 2019 - link

    You might want to look at the benchmarks. Intel won most of them, with less cores.
    I was seriously considering an 8- or 12-core AMD, but Intel still ended up the better option for everything I do except video transcoding, in which AMD clearly wins.
    Other considerations: No cooling fan on the Intel motherboard, better Intel quality control and testing in general, more mature product (because the 9900 is an iteration of an iteration of an iteration...etc.)
