Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics
by Dr. Ian Cutress on September 17, 2019 10:00 AM EST

AMD Found An Issue, for +25-50 MHz
Of course, with Roman’s dataset hitting the internet, a number of outlets reported on its results and a lot of people were in a spin. It wasn’t long before AMD issued a response in the form of a blog post. I’m going to pull out the relevant bits and pieces here, starting with the acknowledgement that a flaw was indeed found:
As we noted in this blog, we also resolved an issue in our BIOS that was reducing maximum boost frequency by 25-50MHz depending on workload. We expect our motherboard partners to make this update available as a patch in two to three weeks. Following the installation of the latest BIOS update, a consumer running a bursty, single threaded application on a PC with the latest software updates and adequate voltage and thermal headroom should see the maximum boost frequency of their processor.
AMD acknowledged that they had found a bug in their firmware that was reducing the maximum boost frequency of their CPUs by 25-50 MHz. If we take Roman’s data survey, adding 50 MHz to every value would push all the averages and modal values for each CPU above the turbo frequency. It wouldn’t necessarily help the users who were reporting 200-300 MHz lower frequencies, but AMD had an answer for that as well:
Achieving this maximum boost frequency, and the duration of time the processor sits at this maximum boost frequency, will vary from PC to PC based on many factors such as having adequate voltage and current headroom, the ambient temperature, installing the most up-to-date software and BIOS, and especially the application of thermal paste and the effectiveness of the system/processor cooling solution.
As we stated in the AMD Turbo section of this piece, the way that AMD implements its turbo is different: it monitors things like power delivery and voltage and current headroom, and it will adjust the voltage/frequency based on the platform in use. AMD is reiterating this, as I expected it would have to.
In the blog post, AMD mentioned that it had changed its firmware (1003AB) in August for system stability reasons, categorically denying that the change was made for CPU longevity reasons, and said that the latest firmware (1003ABBA) improves performance without affecting longevity either.
The way AMD distributes its firmware is through AGESA (AMD Generic Encapsulated Software Architecture). The AGESA is essentially a base set of firmware and library files that gets distributed to motherboard vendors, who then apply their own UEFI interfaces on top. The AGESA can also include updates for other parts of the system, such as the System Management Unit, that have their own firmware related to their operation. This can make updating things a bit annoying: motherboard vendors have been known to mix and match different firmware versions, because ultimately the user just ends up with ‘BIOS F9’ or something similar.
AMD’s latest AGESA at the time of writing is 1003ABBA, which is going through motherboard vendors right now. MSI and GIGABYTE have already launched beta BIOS updates with the new AGESA and should be pushing it through to stable versions shortly, as should ASUS and ASRock.
Some media outlets have already tested this new firmware, and in almost all circumstances are seeing a 25-50 MHz uplift in reported frequency. See the Tom’s Hardware article as a reference; in general, reports show a 0.5-2.0% increase in performance in single-threaded, turbo-limited tests.
I Have a Ryzen 3000 CPU, Does It Affect Me?
The short answer is that if you are not overclocking, then yes. When your particular motherboard has a BIOS update based on AGESA 1003ABBA, it is advised to update. Note that updating a BIOS typically means that all BIOS settings are lost, so keep a note of your settings in case the DRAM needs XMP re-enabled, or similar.
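For users who want to double-check which BIOS build is actually flashed before and after the update, here is a minimal sketch (Linux-only, reading the standard sysfs DMI fields; on Windows the same BIOS version and date strings are shown in msinfo32 or the board vendor’s utility). Note that the AGESA version itself is not exposed this way, so the vendor’s release notes are still needed to map a given BIOS build to 1003ABBA:

```python
from pathlib import Path

# Standard Linux sysfs DMI paths; these report the motherboard and BIOS build,
# not the underlying AGESA version, which comes from the vendor's release notes.
DMI = Path("/sys/class/dmi/id")

for field in ("board_vendor", "board_name", "bios_version", "bios_date"):
    node = DMI / field
    value = node.read_text().strip() if node.exists() else "unavailable"
    print(f"{field:>14}: {value}")
```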
Users keeping an ear to the ground on the latest AMD BIOS developments will already know the procedure.
The Future of Turbo
This would be the point at which I note that single-thread frequency does not always equal performance. As part of the research for this article, I learned that some users believe the turbo frequency listed on the box is the all-core turbo frequency, which just goes to show that turbo still isn’t well understood by name alone. But as modern workloads move to multi-threaded environments with background processes, the amount of time spent in single-thread turbo is being reduced. Ultimately we’re ending up with a threading balance between background processes and immediate latency-sensitive requirements.
At the end of the day, AMD identifying a 25-50 MHz deficit and fixing it is a good thing. The number of people for whom this is a critical boundary that enables a new workflow, though, is zero. For all the media reports that drummed up AMD not hitting its published turbo speeds as a big issue, most of those outlets were, by contrast, very subdued about AMD’s fix. A 2% bump in single-core turbo frequency hasn’t really changed anything for anyone in this instance, despite all the fuss that was made.
I wrote this piece just to lay some cards on the table. The way AMD is approaching the concept of Turbo is very different to what most people are used to. The way AMD is binning its CPUs on a per-core basis is very different to what we’re used to. With all that in mind, peak turbo frequencies are not covered by warranty and are not guaranteed, despite the marketing material that goes into them. Users who find that a problem are encouraged to vote with their wallet in this instance.
Moving forward, I’m going to ask our motherboard editor, Gavin, to start tracking peak frequencies with our WSL tool. Because we’re defining the workload, our results might end up different to what users are seeing with their own reporting tools while running Cinebench or any other workload, but it offers the purest result we can think of.
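To be clear, the snippet below is not our tool, just a minimal Python sketch of the idea (assuming psutil is installed): define a short, bursty single-threaded load yourself, and log the peak frequency the operating system reports while it runs. The reported value still depends on how and when the OS samples frequency, so treat it as illustrative rather than definitive:

```python
import multiprocessing as mp
import time

import psutil  # assumed installed: pip install psutil

def bursty_single_thread_load(burst_s=0.2, idle_s=0.05, duration_s=5.0):
    """Spin on one core in short bursts, mimicking a lightly threaded application."""
    x = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        end = time.perf_counter() + burst_s
        while time.perf_counter() < end:
            x += 1                      # pure busy-work to invite a boost
        time.sleep(idle_s)              # brief idle between bursts

if __name__ == "__main__":
    worker = mp.Process(target=bursty_single_thread_load)
    worker.start()

    peak = 0.0
    while worker.is_alive():
        freqs = psutil.cpu_freq(percpu=True) or []   # per-core where the OS supports it
        peak = max([peak] + [f.current for f in freqs])
        time.sleep(0.05)
    worker.join()

    print(f"Peak reported core frequency: {peak:.0f} MHz")
```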
Ultimately the recommendations we made in our launch day Ryzen review still stand. If anything, had our review samples experienced some of this frequency loss, a few extra MHz on the ST tests would push the parts slightly up the graphs. Over time we will be retesting with the latest BIOS updates.
144 Comments
Dragonstongue - Tuesday, September 17, 2019
ty so much for the pro article and summing in question
not always do I come here and "want" to continue reading over...I keep to myself.
thankfully this was not such an article
o7
Iger - Thursday, September 19, 2019
+1
This mirrors my thoughts and feelings exactly.
mikato - Monday, September 23, 2019
I completely agree. I had seen some of this with Hardware Unboxed, Gamers Nexus, Der8auer, Reddit. This was a great summary with solid explanation behind it, a more helpful way to learn about the whole issue.
Now for the next issue - Hey, Ian is a doctor now, thinks he's better than all of us. Discuss... :)
(yes I knew he had a doctorate already)
azfacea - Tuesday, September 17, 2019
death to intel LUL
Phynaz - Wednesday, September 18, 2019
Drugs are bad for you, seek treatment.
Smell This - Wednesday, September 18, 2019
It sure is interesting that the **Chipzillah Propaganda Machine** has entered high gear/over-drive over the last several weeks after reports that the "Intel Apollo Lake CPUs May Die Sooner Than Expected" ...
https://www.tomshardware.com/news/intel-apollo-lak...
Funny that, huh?
eastcoast_pete - Tuesday, September 17, 2019
Thanks Ian, helpful article with good explanations!
Question: The "binning by expected lifespan" caught my eye. Could you do another nice backgrounder on how overclocking affects lifespan? I believe many out there believe that there is such a thing as a free lunch. So, how fast does a CPU (or GPU) degrade if it gets pushed (overclocked and overvolted) to the still-usable limit. Maybe Ryan can chime in on the GPU aspect, especially the many "factory overclocked" cards. Thanks!
Ian Cutress - Tuesday, September 17, 2019
I've been speaking to people about this to see if we can get a better understanding about manufacturing as it relates to expected product lifetimes and such. Overclocking would obviously be an extension to that. If something happens and we get some info, I'll write it up.
igavus - Tuesday, September 17, 2019
Aside from overclocking, it'd be interesting to know if the expected lifetime is optimized with warranty times and if what we're seeing is a step forward on the planned obsolescence path.
It's sort of more important now than ever, because with 8 core being the new norm soon - we'll probably see even longer refresh cycles as workloads catch up to saturate the extra performance available. And limiting product lifetime would help curb those longer than profitable refresh cycles.
FunBunny2 - Tuesday, September 17, 2019
"if what we're seeing is a step forward on the planned obsolescence path."you really, really should get this 57 Plymouth, cause your 56 Dodge has teeny tail fins.