CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself well to professional environments. It comes in different forms as well: 3D rendering through rasterization, as in games, or through ray tracing, and it tests the ability of the software to manage meshes, textures, collisions, aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standard generated scene using version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a rate per unit time is typically easier to interpret visually than a raw completion time.
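To illustrate how a run-averaged figure like this is put together, here is a minimal sketch; the binary name and the output line it parses are assumptions for illustration, not the actual command-line tool Corona supplied:

```python
import re
import subprocess

RUNS = 6

def run_once(binary="corona-benchmark-cli"):
    # Hypothetical binary name and output format; the real command-line
    # build may print its results differently.
    out = subprocess.run([binary], capture_output=True, text=True, check=True).stdout
    match = re.search(r"Rays/sec:\s*([\d,.]+)", out)
    return float(match.group(1).replace(",", ""))

scores = [run_once() for _ in range(RUNS)]
print(f"Average over {RUNS} runs: {sum(scores) / len(scores):,.0f} rays per second")
```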

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

When we apply a full-fat rendering test, the 9900K at 95W scores around the level of the i7-9700K, a similar CPU without hyperthreading.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for a massive amount of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - the standard ‘bmw27’ scene in CPU-only mode - and measure the time to complete the render.
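For reference, a minimal sketch of how such a timed run can be automated; Blender's -b (background) and -f (render frame) switches are standard CLI flags, while the scene file name here is an assumption based on the test description:

```python
import subprocess
import time

# -b runs Blender headless, -f renders a single frame. The scene file
# name is assumed from the test description, not a file we distribute.
SCENE = "bmw27_cpu.blend"

start = time.perf_counter()
subprocess.run(["blender", "-b", SCENE, "-f", "1"], check=True)
print(f"bmw27 render completed in {time.perf_counter() - start:.1f} s")
```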

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

We see similar results with Blender, where the 9900K at 95W is actually 50% slower and performs around the mark of the 9700K.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.
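As a rough sketch of how the two code paths can be driven from a script; the scene and mode flag spellings below are assumptions modelled on LuxMark's mode names, so verify them against your build before relying on them:

```python
import subprocess

# Assumed flag spellings based on LuxMark's documented scene/mode names;
# check the output of `luxmark --help` on your build for the exact syntax.
CODE_PATHS = {
    "C++ (native)": ["luxmark", "--scene=LUXBALL_HDR", "--mode=BENCHMARK_NATIVE"],
    "OpenCL (CPU)": ["luxmark", "--scene=LUXBALL_HDR", "--mode=BENCHMARK_OCL_CPU"],
}

for label, cmd in CODE_PATHS.items():
    print(f"Running the {label} code path...")
    subprocess.run(cmd, check=True)  # the final score is reported in kilorays/sec
```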

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

The drop in our LuxMark test isn't as severe as what we see in Blender, but the 95W mode again puts the 9900K around the level of a 9700K.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, one that was in a state of relative hibernation until AMD released its Zen processors, at which point both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.
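A minimal sketch of such an invocation, assuming the -benchmark switch of the Unix builds and timing the run externally:

```python
import subprocess
import time

# POV-Ray's Unix builds expose the standard benchmark via the -benchmark
# switch; the piped newline acknowledges the confirmation prompt some
# builds print before starting (an assumption about the build in use).
start = time.perf_counter()
subprocess.run(["povray", "-benchmark"], input="\n", text=True, check=True)
print(f"POV-Ray benchmark finished in {time.perf_counter() - start:.1f} s")
```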

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

Comments

  • DennisBaker - Tuesday, December 4, 2018 - link

    I wanted to build a new PC on Black Friday, and I bought an i9-9900k. I never overclock and typically buy a locked/non-k CPU but couldn't wait until next year. I also always use a SFF case (Cooler Master Elite 130).

    This is a great article, but I'm not sure how to actually set the BIOS for a 95W max CPU setting.
    I have the ASRock Z390 Phantom Gaming-ITX/ac motherboard:
    http://asrock.pc.cdn.bitgravity.com/Manual/Z390%20...

    I've been googling without success and figured I would just ask here if there is a general guide for this.
  • Targon - Thursday, November 29, 2018 - link

    The real question is real-world performance. If the goal is a SFF machine where you don't have closed-loop coolers, and you have a small ITX motherboard and a small case, what will happen to temperatures in those cases? That is where you get heat-related performance issues.

    We know that the 2700X hits 4.3GHz, 4.4GHz in some situations, but put it in an ITX case and benchmark it. Will the i9-9900k end up being all that much faster when you are pushing your machine, not just in games, but when you are using your system as an 8-core system where you have web browsers, mail, MS Word, plus other things open at the same time? With all of this running, then go to it with your benchmarks. Compare how well the 2700 and 2700X perform without overclocking, just using the defaults to allow boost/turbo to operate. Is the 9900k all that much faster when playing games with that other stuff still running in the background? Push it for an hour of nonstop use to make sure that you are seeing how well the chip will work in the real world (when used by enthusiasts).

    At that point, will we see the average CPU speed be 4GHz, or will it be down in the 3.6-3.7GHz range? Would the Ryzen chips at that point be faster in a SFF case than the i9-9900k?
  • HStewart - Thursday, November 29, 2018 - link

    Another factor here is that it is not just the CPU that uses power; one must also include the power consumption of the GPU, which is often significantly higher than the CPU's.

    But in normal people's real-world usage, the cores are not running as much. It requires the software to be designed as multithreaded, or multiple applications running at the same time - and a major problem is that video often has to be single-threaded. In the real world, not everyone is a hardcore gamer.

    One should also remember that previously we had more desktops, and all had external GPUs - but now that most of the market, especially the business market, is mobile, the desire for high-performance, high-power systems is not as important. So power-saving modes are important to customers.

    This is not just important for PCs - just this morning, I got a message on my Samsung Note 8 that my settings were causing my phone to use more battery.

    It really must be taken in the perspective of users' needs - for hardcore gamers, more cores and external GPUs are important. But for most users running Office and such, an internal GPU and dual core is fine.
  • BurntMyBacon - Thursday, November 29, 2018 - link

    @HStewart: "It really must be taken in the perspective of users' needs - for hardcore gamers, more cores and external GPUs are important. But for most users running Office and such, an internal GPU and dual core is fine."

    Which group of users that you defined do you suppose is the target audience for the i9-9900K tested in this article?
  • HStewart - Thursday, November 29, 2018 - link

    "Which group of users that you defined do you suppose is the target audience for the i9-9900K tested in this article?"

    Yes, I realize that - but it appears that people in this category tend to believe they are the only category. Also, not all hardcore gamers are overclockers. I would say I have done a lot of gaming in my life, and even at 57 I still do. But in all that time, unless it was done by the manufacturer, I have not really overclocked. I believe both my XPS 13 2-in-1 and XPS 15 2-in-1 have some built-in overclocking, but it is controlled by the system.

    All I am saying is that not everyone overclocks or plays hardcore games.
  • Targon - Thursday, November 29, 2018 - link

    You don't need to manually overclock to enjoy the benefits of how long the processor can run at turbo speeds vs. base speeds. If a chip can turbo to 5GHz all the time due to good cooling, then even without manual overclocking, that CPU will deliver much higher performance than lower-tier chips. On the other hand, if the cooling isn't very good, then it will stay at base speeds most of the time.

    Small Form Factor....the beauty of having a small machine. If it also means that the performance will be limited due to cooling, then why bother paying for a faster processor when a slower processor will be almost as fast at half the price?

    What many want to see are real world situations. People do not buy a 9900k if they don't want high performance, even if they do not manually overclock. So, 8 core/16 thread, because why pay for that if 4 core/8 thread, or 6 core/12 thread will perform just as well if not better? Same case size, will the 9900k be faster than a Ryzen 7 2700X in the same SFF case if the 9900k can't be cooled well enough to keep the chip running faster than base speeds? What would you do if the 2700X, which doesn't bench as well, were actually better at holding turbo/boost speeds in a SFF environment? Do you expect a SFF machine to have a discrete video card(which Intel chips don't necessarily need, even if the people who buy a 9900k will almost always put one in)?

    Laptops are not the target of this article (no 9900k has ever been put into a laptop), so laptop boost/turbo results will be a bit more difficult, because the design of the laptop itself won't allow a fair apples-to-apples comparison unless you could swap the motherboards/processors while keeping the same chassis/cooling.
  • HStewart - Thursday, November 29, 2018 - link

    I understand laptops are not the target of this article - but some crazy laptop makers like to put desktop components into perverted laptops.

    Like it or not, this industry is moving away from desktop components, and not just in laptops - all-in-ones are a perfect example. The closest thing that Apple has to a desktop is the Mac Mini. In some ways even servers are changing - blades are a good example.

    As far as SFF is concerned, mobile chips are ideal for it - and a solution like EMIB is perfect for increasing graphics performance - except the GPU in my Dell XPS 15 2-in-1 is just not on par with NVIDIA's. Intel made a bad choice teaming up with AMD on it - don't get me wrong about the iGPU - it is awesome and better than older-generation NVIDIA parts like the 860M.
  • Manch - Friday, November 30, 2018 - link

    And there it is. I was wondering how you were going to steer towards bashing AMD. LOL
  • TheinsanegamerN - Thursday, November 29, 2018 - link

    When you are not using the iGPU, it is power-gated off. It isn't using any power, or if it is, it is so minute that it doesn't matter.

    People have been saying, for years, that the iGPU was a detriment to OCing and power usage. The existence of HEDT has proven that idea wrong many, many, many times over.
  • Icehawk - Thursday, November 29, 2018 - link

    How does HEDT prove that an iGPU isn't a detriment to OCing or power usage? One might be able to argue that the dead silicon provides some sinking & surface area.
