AVX-512 was disabled on the big CPUs because it's not available on the E-cores, and software expects all cores to have the same instructions. For the CPUs with no E-cores, I suspect there is no reason to disable AVX-512.
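The "software expects all cores to have the same instructions" point comes down to how programs detect features: they typically probe the CPU once at startup and then use the result from any thread. A minimal sketch of that pattern, assuming Linux's `/proc/cpuinfo` interface (the `avx512f` flag name is the standard Linux convention; the helper names are illustrative):

```python
# Illustrative sketch: software usually detects AVX-512 once, globally,
# and assumes every core matches -- which is why a mixed P/E-core CPU
# can't expose it on only some cores. Parses Linux /proc/cpuinfo text.

def parse_flags(cpuinfo_text: str) -> set[str]:
    """Collect the ISA feature flags reported for the first core."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def has_avx512(cpuinfo_text: str) -> bool:
    # A single global check; threads then assume all cores agree.
    return "avx512f" in parse_flags(cpuinfo_text)

sample = "processor\t: 0\nflags\t\t: fpu vme sse sse2 avx avx2 avx512f avx512dq\n"
print("AVX-512F reported:", has_avx512(sample))  # AVX-512F reported: True
```

If a thread that passed this check were migrated to a core lacking AVX-512, the first 512-bit instruction would fault, which is the scheduling hazard the comment alludes to.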
Isn't that because AMD refuses to fabricate multiple Zen 3 dies? A 6-core CCX would've gone a long way in these OEM markets.
Intel, frankly, is smart to use multiple dies, instead of Zen 3's "two dies fit all". Smaller dies = cheaper fabrication = plenty of defects, and the ability to fuse off "good" ones.
AMD chose the single-CCX path, so it'll bear the lower-market-share consequences, unfortunately. These APUs / OEM CPUs move massive volume, and they're cheaper to fabricate to boot.
They should lower the price of 6-cores, at least. For quad-cores they have no answer unless they do something like put Van Gogh on desktop... for $80 or less.
Those wattages are a disaster. Intel has already lost, and they're just playing out the rest of the season while blustering. Just compare it for a second to the M1's efficiency, and even AMD's last-gen Ryzen 9. This is like AMD a decade ago.
AMD's average weighted price across the V5x SKUs from TSMC is $165, and the 5600X is $199 in the channel, although individually AMD may sell the 5600X for $149 at highest volume. AMD never had a cost-to-price/margin advantage vis-à-vis Intel, even when Intel was at 14/12 nm, on the TSMC markup.
Now that Intel is at 10/7, or whatever it is, AMD has lost all cost advantage, whereas Intel's design-production cost is 1/3 to 1/4 of the TSMC price to AMD. AMD does, however, maintain its margin for a price-performance advantage at 5 nm.
Intel's strategy is to trail one node behind AMD, cost-optimizing, while AMD takes the design-process-factor cost increase on every next shrink. mb
Qasar: production economic models built on channel data and 10-Qs, relying on the same tools AMD, Intel, and Nvidia rely on; the exact same Intel tools that escaped into the financial market decades ago. Intel actually provided them to financial analysts in 1997. The foundation technique is total-revenue/total-cost analysis that backs out cost from price; the deliverable is a database of every Intel run since the P5, marginal cost on marginal revenue. See my Seeking Alpha blog for examples. Not guessing, on the 24-year database, tools, and techniques. Mike Bruzzone
Which makes no sense, since Apple is clearly beating Intel and funding TSMC, not AMD. Qualcomm too; AMD is kind of beside the point. Wintel is broken, and the COVID business sales are the last gasp. Businesses and organizations are going to complete their cloud transitions soon, with MS's help, and then business devices will be anything: x86, ARM, Chromebook, Mac, iPad, etc. We're at a tipping point, and so is Intel.
These 4.3 GHz boost quad-cores make for top-tier office performance. Even assuming a basic 15% IPC boost, it’s a 5 GHz equivalent Skylake CPU for any Joe or Jill, and much cooler, to boot. Bloody finally.
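The back-of-the-envelope math behind that claim, as a quick sketch (the 15% IPC uplift is the comment's assumption, not a measured figure):

```python
# Rough "Skylake-equivalent clock" estimate: single-thread performance
# scales roughly with IPC x frequency, so a 15% IPC uplift at 4.3 GHz is
# about like Skylake IPC running at 4.3 * 1.15 GHz. The 15% number is
# assumed for illustration, not measured.

def equivalent_clock(base_ghz: float, ipc_uplift: float) -> float:
    return base_ghz * (1.0 + ipc_uplift)

print(f"{equivalent_clock(4.3, 0.15):.2f} GHz Skylake-equivalent")
```

4.3 × 1.15 ≈ 4.95 GHz, which is where the "5 GHz-equivalent Skylake" shorthand comes from.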
AMD, where are your Zen3 $120 APUs? Can we at least try to match—if not beat—4x Golden Cove @ 4.3 / 4.4 GHz?
Really glad Intel didn’t stick a crappy dual-core with some leftover efficiency cores. Desktops should always maintain high P-core counts.
Though I can’t wait for the excuses about a 2+8 laptop configuration. 🤢
To me, it's a swing and a miss, especially if they're destined for Windows boxes. I don't understand how, in 2022, Intel's 10/7 fabs can produce so many defective dies that only 1/3 of the CPU cores are usable and we have four SKUs' worth of defects.
Some prior Celerons and Pentiums had quad-core SKUs, so I'd hope there are more ADL SKUs coming.
But I also acknowledge that, if it's Chrome OS or some embedded *nix box, perhaps then 2P is just manageable, but not ideal.
The last year has shown us that, in the absence of competition, AMD can be just as exploitative as Intel once was. Competition is needed to keep both of them honest, and it's nice to see that returning.
Certainly; but having said that, I wonder how much of a difference it would really make. If you're making the top product, you get to charge a top price. Back when there was Cyrix as well as AMD, Intel still charged to their heart's content. There are so many Android phones, too, but that doesn't stop the ever-benevolent Apple.
According to a history I just read, Cyrix was forced to use a 600 nm node whilst Intel was selling 300 nm parts. Being fabless at that time was apparently more deadly than it is now. Cyrix also didn't market its chips in a clear or catchy way. Cyrix had the misfortune of having to deal with Quake, which was optimized to exploit the Pentium's design. Cyrix bet on integer, and the rise of gaming focused more on FPU. Eventually, National Semiconductor inhaled the company, and things became even worse in terms of competitiveness with Intel. So, just having more than two companies around doesn't guarantee adequate competition. VIA still holds an x86 license.
Having a duopoly, though, guarantees less competition than having three or more businesses that are run skillfully.
Quite right. Besides excellent design, Intel's first-rate manufacturing was part of its success. Then, the stigma of mediocrity clung to AMD and Cyrix, and Intel's extensive marketing didn't help these poor fellows at all. Pat, what part did you play in Intel Inside?
Cyrix's CPUs were generally behind, and while I never used one, I felt it was a shame they closed shop. I suspect if AMD hadn't got the NexGen and DEC engineers, they too would have gone down. The K5 had a Pentium Pro-class design---out of order, register renaming, the works---but didn't perform very well; and AMD's FP performance, before the K7, was dismal. The Athlon put all that right. Had that celebrated CPU not lit the fire under Intel's bum, we can only imagine what generously-priced jewels Intel would've foisted on us all these years. Perhaps the Bulldozer era was only a mild taste of it.
I had a Cyrix 486 33, and a friend had a Cyrix 6x86, and neither was very good, in my opinion. They seemed to have some random compatibility issues with software. They were usable, but I never considered them in any future purchase. AMD was a better option (no compatibility issues with the 486 120 or K6/K6-2 that I encountered), but the Pentium and Pentium II FPU was much better for Quake and various other games.
Absolutely, AMD needed NexGen, or AMD would have seriously stalled out. Of course, the 686 became the K6; and the 586, the K5's relation, would have been a better option with a standard x86 bus slapped on. mb (NexGen 1994/95 into AMD, 1996)
Apparently, the K6 was based on NexGen's design or built by those engineers. With enough L3 cache, the K6-III, especially, could go toe to toe with the Pentium III.
"Being fabless that that time was apparently more deadly than it is now" Up until the 14++ process, Intel was basically at the forefront of chip fabbing. And, in terms of transistor density and power, the 14nm process from Intel is still highly competitive to any other "10nm" process of anybody (including itself).
Cyrix was systematically infiltrated by Intel placements from the company's inception, specifically engaged in Cyrix IP theft and transfer to Intel. Once one Intel associate-network agent is in, those people shoe in more of their brethren. In 1993, the Cyrix FPU showed up in the P5. By 1994 the situation was hopeless. By the time VIA bought Cyrix, the design files were sabotaged and not worth the incredible time it would have taken to revalidate them. mb (Cyrix 1991-1993, IDT Centaur 1997-1998)
Qasar, I am a former Cyrix employee who was recruited to steal by Intel agents at age 32. I basically ignored them, which began the difficulty of "now they know you know who they are." The FPU theft is well documented and eventually, through litigation, delivered to Cyrix an Intel Socket 370 license, which was not sufficient compensation for the theft. The theft is documented by Kevin McDonough, VP of engineering, pursuant to litigation. I left Cyrix in 1993 and was back as an executive candidate in 1998, before the VIA acquisition, when then-CEO Stan Swearingen said the situation on the IP and other destruction left the enterprise hopeless. Glenn Henry at Centaur, where I was then employed, wouldn't waste his time revalidating Cyrix design files and was primarily interested in embedded low power and staying out of Intel's way. I first reported the thefts to the FBI as an AMD employee, on continued theft recruitment following the NexGen acquisition in 1996, and consulted to Intel on the topic in 1997. That began my journey as an FTC v. Intel Docket 9288 15 USC 5 discovery aid in 1998, parallel to my involvement in the California Department of Justice antitrust investigation through 2007, then on to FTC v. Intel Docket 9341. I am currently the Docket 9341 consent-order monitor, retained by Congress to recover Intel Inside price-fix theft, which I suspect will occur later this year, and am also contracted by the USDOJ to do the same for the United States federal government, subject to Title 48 procurement price-fix overcharge theft. Mike Bruzzone
Just someone who highly doubts you are who you say you are. After all, this IS the internet, and for all intents and purposes, NO ONE can really prove they are who they say they are when they post stuff on platforms like a forum or comments section. A VERY good example, I'm sure a few here will remember, was a poster named Deicidium369: that person kept insisting he was a millionaire with 2 or 3 doctorates, had sold his first business for a few million at like 25, blah blah. The example of "too good to be true."
The FasMath 32-bit FPU (1989), up until it showed up in the P5, did more work transistor for transistor than the x87 on the reliable benchmarks, and used less power. On the x87 evolutionary improvements that followed, I have no comparative data. I can give the single-precision Whetstone performance at the time:
Whet Scale: Cyrix = 1023, Intel = 673
Whet Mat: Cyrix = 528, Intel = 402
Whet Trans: Cyrix = 2052, Intel = 628
Whet Stone: Cyrix = 3528, Intel = 2212
"Cyrix FasMath 83D87 outperform Intel 387DX is almost all of our tests from 7 t0 71%" PC Week Labs, January 1992." Before I was an employee at Cyrix I was a employee at Arche Technology an PC OEM where I was the AutoCAD segment manager. Arche sold a lot of FasMath into engineering, science and content creation graphic segments.
It is believed in some circles that Fairchild hired an engineer, David Chung (now deceased), to steal a CPU design from Olympia Werke. That CPU became the Fairchild F8, and there was a lawsuit. Unfortunately for Olympia Werke, if it had indeed been stolen, the lawsuit took so long that market conditions had changed. So, if it was stolen, Fairchild got away with profiting from the theft.
A colleague of his on the VideoBrain home-computer project, who had worked at Intel, claimed that Chung had invented the CPU, which is why it was chosen for the VideoBrain rather than an Intel part. This colleague, whose name escapes me at the moment, said hindsight makes it clear the machine should have had an Intel CPU.
My opinion on that is that it wasn’t the CPU that doomed that machine. It was having only 1K of RAM, a totally non-standard keyboard, and its reliance upon the APL/S language. It also had no floppy drive support. Distribution was also inadequate. Trying to sell it via Macy’s was a bad move. It should have been carried and pushed by electronics stores. Regardless of everything else, 1K of RAM wasn’t enough for anything.
I thank Mike for the info. The info he mentions is not new speculation; it appears to be based on facts he has previously discussed. https://youtu.be/VL1RjwVAnzY
Tygrus, you're welcome. I've conferred with Tom from MLID on an update broadcast sought by members of Seeking Alpha investor chat room but nothing scheduled yet. mb
Then AMD took all the money they made and sank it not into CPU improvements but into buying ATI and building GlobalFoundries, which is credited with saving them but is also directly responsible for AMD being put into such a situation in the first place.
Agreed. I think it was a matter of resting on laurels, too, and underestimating Intel. K10 was only a mild improvement over the Athlon 64, certainly not enough to tackle Conroe, which they likely never expected. Still, I expect if AMD had kept working on K10, they'd have drawn even with Intel; but desperation, or foolish inspiration, took them down the dark, disastrous path of Bulldozer.
TheinsanegamerN, looks like you are also forgetting about the BS Intel was doing with the back-door deals, threats, and the like that prevented AMD from doing better with the A64 than they did.
You forgot the astroturfers — people using the ‘team green, team red’ nonsense to try to turn everything into tribal warfare. ‘Fanboyism’ has been a strong component of the enthusiast gaming/overclocking scene for a long time, even though plenty of it is due to sundry astroturfers.
Even in 2005, the Athlon 64 heyday, I remember an editor of NAG, a gaming magazine in South Africa, saying, "Do not stray to the dark side, brother. Trust in Intel."
"Intel’s big win here for this generation is combining both the CPU and the chipset onto one package for its 45 W processors, rather than relying on a mobile chipset."
That's an extremely generous interpretation. These are just traditional Type-3 packages with a PCH-LP, rebranded as they were pushed past 28 W TDP. Remember Tiger Lake H35? Well, now Alder Lake has H45. The traditional two-chip platform, based on the HP CPU die using the S-BGA package, will be H55.
The top models of 4x4 NUCs use 28 W U-series CPUs, frequently configured by Intel at slightly higher power limits.
For example I have a NUC with a Coffee Lake U CPU, which has a nominal TDP of 28 W, but the Intel BIOS configures its steady-state power limit at 30 W (and at 50 W for the first half-minute).
At continuous 30 W dissipation, my NUC remains almost silent, but this is a 2018 model and I have heard that in the latest NUC models Intel has replaced the good coolers from the 2018 models with some cheaper coolers, which are much noisier at maximum power.
Nevertheless, in the traditional 4x4 size, 28 W CPUs can be used without problems.
Starting with the Skylake H-based Skull Canyon NUC, Intel has also introduced a series of bigger NUCs with 45 W H-series CPUs.
In conclusion, the answer is that it is likely that there will be NUC models with all of them, i.e. U & P in 4x4 models and H in larger sizes.
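The 30 W / 50 W behaviour described a few lines up follows Intel's two-level power-limit scheme: PL1 for sustained load, PL2 for a short turbo window. A simplified step model, with the 30 W / 50 W / half-minute numbers taken from the comment (real hardware uses an exponentially weighted moving-average power budget, so this is only an approximation):

```python
# Simplified PL1/PL2 model: the package may draw up to PL2 until the
# turbo time window (tau) expires, then is held to PL1. Real silicon
# uses an exponentially weighted moving-average budget; this step
# function only approximates it. Numbers (50 W PL2, 30 W PL1, 30 s tau)
# are taken from the NUC comment above.

def package_power(t_seconds: float, pl1: float = 30.0,
                  pl2: float = 50.0, tau: float = 30.0) -> float:
    return pl2 if t_seconds < tau else pl1

# Average draw over the first minute of a sustained load:
avg = sum(package_power(t) for t in range(60)) / 60
print(f"Average over first minute: {avg:.1f} W")
```

This is why a cooler sized only for PL1 still works: the PL2 excursion is bounded in time, and the steady-state heat load settles back to the 30 W figure.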
Isn't a wide range of power states an achievement of modern semiconductors? Obviously you want to use less power at the same performance, but isn't the ability to operate at higher power draw for increased performance a desirable thing?
The E cores do offer something to gamers, by relieving the P cores of most of the background tasks. They leave 100% of the performance of P-cores dedicated to the game and just the game.
That's why 1% FPS lows are so much worse with E-cores enabled, right? Because it's for "gamers"?
The "e" cores are a total ripoff. Innefficient, slow, latent wastes of solicon that dont help keep the power usage down, as 12400f reviews have shown it manages perfectly fine with 0 e cores. the "e" cores were all show to steal some benchmarks from AMD.
Not that easy; otherwise, the T would look like a miracle. Anyway, I guess turbo consumption is a hard limit, and once the CPU hits it, it starts throttling down. In the T's case, probably quite rapidly, so the final frequency difference will be way higher than just 0.3 GHz.
Still don't understand why Intel put so many graphics cores in the mobile H series when they are clearly going to have a dedicated GPU. Why waste so much space on the die? Should have been 32-48EUs all round. Seems like a bad design decision. People won't game on the integrated graphics on most of the designs.
Because there is demand from those of us who want more powerful graphics but not all the headaches that come with dGPU designs. This has been going on since the days of the Iris Pro; it's not like this is something new.
Intel "Deep-Link". Arc dGPU and Xe iGPU can be used simultaneously on productivity workloads - Intel claims up to 40% performance boost than just the Arc dGPU alone.
In the current state of things, a gaming discrete gpu laptop on battery could last another extra hour if it disables the dGPU and runs on integrated. And it's not the difference from 10 to 11 hours, it's the difference from 3 to 4.
I have a small room and it gets hot in the summer with an 8700 and a 3070 running any type of loads. Any view on the T-series (i7-12700T especially) as a potential way to reduce ambient heat during loads? Don't have any bleeding edge needs but just want my programs and games to run faster.
You could undervolt and underclock your existing CPU and GPU. Many GPUs are factory clocked to use nearly double the power to get the last 10-20% of performance
This. Frame rate limits also make a world of difference in some games.
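To put numbers on the "nearly double the power for the last 10-20% of performance" claim above, a hedged sketch with hypothetical figures (the exact curve varies card by card; these values are made up for illustration):

```python
# Hypothetical illustration of the power/performance claim above:
# the wattage and relative-performance numbers are invented for the
# example, not measured from any particular card.
stock = {"perf": 1.00, "watts": 320}   # factory clocks
tuned = {"perf": 0.85, "watts": 180}   # undervolted / power-limited

def perf_per_watt(cfg: dict) -> float:
    return cfg["perf"] / cfg["watts"]

gain = perf_per_watt(tuned) / perf_per_watt(stock)
print(f"Efficiency gain from tuning: {gain:.2f}x")
```

Even giving up 15% of performance, the tuned configuration comes out roughly 1.5x more efficient in this example, which is the shape of the trade-off the comment describes.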
I have the 8700 (non-K) because I wanted lower power, but it misbehaved with the default BIOS: an insane 1.4 V for a pointless +100 MHz of turbo clock.
Set each core to 4.3 GHz and 1.15v. Try disabling hyper threading, which helps some games. Those two dropped the temperature like a rock for me.
Check the ring clock speed, mostly to be sure it isn't too low. Some motherboards default to 4 GHz, but others are 3.7 GHz.
RAM near 3200 with lower than standard timings should not take any effort to run stable. I tried mine with an XMP profile for 3600 CAS14 and I did not have to adjust any BIOS settings.
Even with a 35-watt CPU, it's either going to ignore power limits by default, or enforcing them will make it behave like a laptop, gyrating between ~4 GHz and ~2 GHz when the turbo timer runs out.
Completely agree. Set FPS limits to a little above whatever your monitor can handle on the GPU side. On the CPU side, you can undervolt or get a lower-power CPU. I just suspect saving 30 W on the CPU side isn't your main issue, though, and will be much less meaningful than power savings on the GPU side.
Dr. Cutress, on some data I provided and shared today, establishes that YouTube (tech-tube) subscription base means nothing on impressions and cost per impression. YouTube has since removed [?] Dr. Cutress's broadcast on the topic, chosen for top broadcaster. Where did it go?
And on the dGPU add-in-card front: there are good Nvidia and AMD dGPU deals if you buy direct from the board houses' (AIBs') web sites. You may have to get in line, but you will not be ripped off on wholesale and retail markups. Buying AIB-direct is the Nvidia and AMD strategy to take the channel back. The channel and retail are seeking N6x and 30x0 inflated prices and using AMD's and Nvidia's margin to buy Alder Lake for resale, selling out AMD and Nvidia; the product-sales margin denied them is then spent by channel and retail with Intel, for Intel surplus-loss cost offset and to purchase Alder Lake. You've heard it from me before: AMD's and Nvidia's inflated GPU prices are being relied on by the channel to offset Intel CPU surplus-inventory losses, and by retail to buy Alder Lake! No wonder there is not sufficient dGPU supply at retail, priced right; a lot of retail outlets are limiting AMD and Nvidia purchases, using the AMD and Nvidia margin gain that should have gone to AMD and Nvidia restocking, and spending it on Intel instead. BUY AMD AND NVIDIA AIB WEB-SITE DIRECT. mb
Jon Tseng - Tuesday, January 4, 2022 - link
Weird they're not announcing the P and U-series. I mean, Dell just announced an XPS with the 28 W P-core. Tomorrow, maybe?

code65536 - Tuesday, January 4, 2022 - link
The article at Ars Technica has slides for the P, U-15W, and U-9W parts, so they were definitely announced. Not sure why it wasn't mentioned here.

TristanSDX - Tuesday, January 4, 2022 - link
What about AVX-512 support? It seems that even the i3 supports it: https://voz.vn/attachments/ee22bc74-ea69-45e7-8907...
Oxford Guy - Tuesday, January 4, 2022 - link
Just read an article yesterday that said Intel is blocking AVX-512 via BIOS updates.

TristanSDX - Tuesday, January 4, 2022 - link
Intel's statement is one thing, reality another. Rumors about blocking AVX-512 are unworthy.

Oxford Guy - Tuesday, January 4, 2022 - link
As I recall, the article did not present it as being a rumour. It was presented as simple fact. Bottom line is that Intel's behavior is unjustifiable in this matter.

Yojimbo - Tuesday, January 4, 2022 - link
I'm sure we can just blindly trust the article since it was not stated as rumor...

Oxford Guy - Wednesday, January 5, 2022 - link
Considering how many established tech sites reported the same thing, I would say so.

incx - Wednesday, January 5, 2022 - link
No reason for the king of segmentation to disable features on lower-end CPUs? You were being sarcastic, right? Right?

m53 - Tuesday, January 4, 2022 - link
$167 12400F + cheap B660 + free RM1 cooler = insane perf per dollar. The 5600X should be sub-$150 to be competitive with this.

shabby - Tuesday, January 4, 2022 - link
Almost seems like AMD doesn't have enough rejects to sell more neutered cores.
Pneumothorax - Wednesday, January 5, 2022 - link
I think it's more to do with limited TSMC allocation for AMD rather than their own choice to abandon the low end.

throAU - Tuesday, January 4, 2022 - link
Ryzen 3000 can handle the low end. Make no mistake, Zen 2 is still fast for the money.

AhsanX - Wednesday, January 5, 2022 - link
Lol, it will get trounced by ADL.

Qasar - Tuesday, January 11, 2022 - link
Any proof of this, or are you just guessing?

Qasar - Wednesday, January 12, 2022 - link
And that proves what? Sounds like you made it up. How about a link or a source?

Mike Bruzzone - Thursday, January 13, 2022 - link
Qasar, go at it . . . https://seekingalpha.com/user/5030701/instablogs
. . . decades of case research enlisted by Federal Trade Commission retained by Congress of the United States.
mb
Qasar - Sunday, January 16, 2022 - link
See, Mike Bruzzone, was that so hard? Thanks.

Qasar - Sunday, January 16, 2022 - link
Oh wait, that's to "your" page, NOT an official source, and it looks like all the links are on Seeking Alpha. Nice try, though. Whatever.
nandnandnand - Tuesday, January 4, 2022 - link
"Really glad Intel didn’t stick a crappy dual-core with some leftover efficiency cores. Desktops should always maintain high P-core counts."So with that statement in mind, what do you have to say about the Alder Lake Pentium G7400 and Celeron G6900 with two cores?
ikjadoon - Tuesday, January 4, 2022 - link
To me, it's a swing and a miss, especially if they're destined for Windows boxes. I don't understand, how in 2022, Intel's 10 / 7 fabs can produce so many defective dies that only 1/3 of the CPU cores are usable and we have four SKUs worth of defects?Some prior Celerons and Pentiums had quad-core SKUs, so I'd hope there's more ADL SKUs coming.
But I also acknowledge that, if it's Chrome OS or some embedded *nix box, perhaps then 2P is just manageable, but not ideal.
TheinsanegamerN - Wednesday, January 5, 2022 - link
Nobody said that there were tons of defective dies. Nowhere was it stated how much manufacturing time the Pentiums would get vs. the i lineup. You might be overreacting just a tad. Was 14 nm a swing and a miss too? They had MULTIPLE Pentium and Celeron models per generation! So did 22 nm!
Oxford Guy - Tuesday, January 4, 2022 - link
A duopoly is barely competition.

ufoolme - Tuesday, January 4, 2022 - link
ARM coming in from the rear … Intel really should have given Apple those x86 chips for the iPhone!

GeoffreyA - Friday, January 14, 2022 - link
I remember my grandfather's K6-2 350 with Windows ME. It was reasonably fast.
Mike Bruzzone - Tuesday, January 11, 2022 - link
Cyrix was systematically infiltrated by Intel placements from the companies inception specifically engaged in Cyrix IP theft and transfer to Intel. Once one Intel associate network agent is in those people shoe in more of their brethren. In 1993 Cyrix FPU show up in P5. By 1994 the situation was hopeless. By the time Via bought Cyrix the design files are sabotaged and not worth the incredible time it would have taken to revalidate. mb Cyrix 1991 - 1993, IDT Centaur 1997 - 1998Qasar - Tuesday, January 11, 2022 - link
more specualion? or can prove it ?Mike Bruzzone - Wednesday, January 12, 2022 - link
Qusar, I am a former Cyrix employee recruited to steal by Intel agents at age 32 I basically ignored them which began the difficulty of now they know you know who they are. The FPU theft is well documented and eventually through litigation delivered to Cyrix Intel socket 370 license that was not sufficient compensation for the theft. The theft is documented by Kevin McDonough VP engineering pursuant litigation. I left Cyrix in 1993 and was back as an executive candidate in 1998 before the VIA acquisition where then CEO Stan Swearingen said the situation on the IP and other destruction left the enterprise hopeless. Glenn Henry at Centaur whom I was then employed wouldn't waste his time on revalidating Cyrix design files and was primarily interested in embedded low power and staying out of Intel's way. I first reported the thefts to FBI as an AMD employee on continued theft recruitment following Nexgen acquisition in 1996 and consulted to Intel on the topic in 1997 that began my journey as FTC v Intel Docket 9288 15 USC 5 discovery aid in 1998 parallel my involvement in California Department of Justice antitrust investigation through 2007 then onto FTC v Intel Docket 9341 I am currently Docket 9341 consent order monitor retained by Congress to recover Intel Inside price fix theft I suspect will occur later this year also contracted by USDOJ to do same for the United States federal government subject Title 48 procurement price fix overcharge theft. Mike BruzzoneQasar - Wednesday, January 12, 2022 - link
yea sure you are. and I'm to believe some random person who JUST started posting on here, from what looks like recently? yea ok. sure.
Mike Bruzzone - Thursday, January 13, 2022 - link
Qasar, and who are you, may I ask? Do the research and learn. Thank you for the inquiring mind. mb
Qasar - Friday, January 14, 2022 - link
just someone who highly doubts you are who you say you are. after all, this IS the internet, and for all intents and purposes, NO ONE can really prove who they say they are when they post stuff on platforms like a forum or a comments section. a VERY good example, which I'm sure a few here will remember, was a poster named Deicidium369. that person kept insisting he is a millionaire with 2 or 3 doctorates, sold his 1st business for a few million at like 25, blah blah. the example of too good to be true
TheinsanegamerN - Friday, January 14, 2022 - link
The FPU theft? Why would Intel need to steal FPU tech from Cyrix when Cyrix's FPU tech was already years behind Intel's?
Mike Bruzzone - Friday, January 14, 2022 - link
The FasMath 32-bit FPU, from 1989 until it showed up in the P5, transistor for transistor did more work than the x87 on the reliable benchmarks and used less power. For the x87 evolutionary improvements that followed I have no comparative data. I can give the single-precision Whetstone performance at the time:

Whet Scale: Cyrix = 1023, Intel = 673
Whet Mat: Cyrix = 528, Intel = 402
Whet Trans: Cyrix = 2052, Intel = 628
Whet Stone: Cyrix = 3528, Intel = 2212
"Cyrix FasMath 83D87 outperform Intel 387DX is almost all of our tests from 7 t0 71%" PC Week Labs, January 1992." Before I was an employee at Cyrix I was a employee at Arche Technology an PC OEM where I was the AutoCAD segment manager. Arche sold a lot of FasMath into engineering, science and content creation graphic segments.
mb
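Purely as an illustration (an editorial addition, not part of the original comment), the per-test lead implied by the Whetstone figures quoted above works out as follows:

```python
# Illustrative only: per-test percentage lead of the Cyrix FasMath 83D87
# over the Intel 387DX, computed from the quoted Whetstone scores.
scores = {
    "Whet Scale": (1023, 673),
    "Whet Mat":   (528, 402),
    "Whet Trans": (2052, 628),
    "Whet Stone": (3528, 2212),
}

for test, (cyrix, intel) in scores.items():
    lead = (cyrix / intel - 1) * 100  # percent advantage over the 387DX
    print(f"{test}: Cyrix leads by {lead:.0f}%")
```

Note the transcendental result (over 3x) falls well outside the 7-71% range PC Week reported, which covered a different test suite.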
Qasar - Friday, January 14, 2022 - link
and where is this from?? post a link, maybe?
Mike Bruzzone - Wednesday, January 12, 2022 - link
It's a rough business. mb
Oxford Guy - Friday, January 14, 2022 - link
It is believed in some circles that Fairchild hired an engineer, David Chung (now deceased), to steal a CPU from Olympia Werke. That CPU became the Fairchild F8, and there was a lawsuit. Unfortunately for Olympia Werke, if it had indeed been stolen, the lawsuit took too long as market conditions changed. So, if it was stolen, Fairchild got away with profiting from the theft.
His colleague on the VideoBrain home computer project, who had worked at Intel, claimed he (Chung, not the colleague) had invented the CPU, which is why it was chosen for the VideoBrain rather than an Intel part. This colleague, whose name escapes me at the moment, said hindsight makes it clear the machine should have had an Intel CPU.
My opinion on that is that it wasn’t the CPU that doomed that machine. It was having only 1K of RAM, a totally non-standard keyboard, and its reliance upon the APL/S language. It also had no floppy drive support. Distribution was also inadequate. Trying to sell it via Macy’s was a bad move. It should have been carried and pushed by electronics stores. Regardless of everything else, 1K of RAM wasn’t enough for anything.
tygrus - Friday, January 14, 2022 - link
I thank Mike for the info. The info he mentions is not new speculation; it appears to be based on facts previously discussed by him. https://youtu.be/VL1RjwVAnzY
Mike Bruzzone - Friday, January 14, 2022 - link
Tygrus, you're welcome. I've conferred with Tom from MLID on an update broadcast sought by members of a Seeking Alpha investor chat room, but nothing is scheduled yet. mb
TheinsanegamerN - Wednesday, January 5, 2022 - link
Newsflash: AMD has never been your friend; nobody but the biggest corporate suckers believed otherwise. See, some of us are old enough to remember the FX-62 selling for $1000, or $300 more than the 100 MHz slower FX-60.
GeoffreyA - Wednesday, January 5, 2022 - link
As soon as the Athlon 64 took the lead, prices went up.
TheinsanegamerN - Friday, January 14, 2022 - link
Then AMD took all the money they made and sunk it not into CPU improvements but into buying ATI and building GlobalFoundries, which is credited with saving them but is also directly responsible for AMD being put into such a situation in the first place.
GeoffreyA - Friday, January 14, 2022 - link
Agreed. I think it was a matter of resting on laurels, too, and underestimating Intel. K10 was only a mild improvement over the Athlon 64, certainly not enough to tackle Conroe, which they likely never expected. Still, I expect if AMD had kept working on K10, they'd have drawn even with Intel; but desperation, or foolish inspiration, took them down the dark, disastrous path of Bulldozer.
Qasar - Friday, January 14, 2022 - link
TheinsanegamerN, looks like you are also forgetting about the BS Intel was doing with the back-door deals, threats, and the like that prevented AMD from doing better with the A64 than they did.
Oxford Guy - Friday, January 7, 2022 - link
You forgot the astroturfers — people using the ‘team green, team red’ nonsense to try to turn everything into tribal warfare. ‘Fanboyism’ has been a strong component of the enthusiast gaming/overclocking scene for a long time, even though plenty of it is due to sundry astroturfers.
GeoffreyA - Friday, January 14, 2022 - link
Even in 2005, the Athlon 64 heyday, I remember an editor of NAG, a gaming magazine in South Africa, saying, "Do not stray to the dark side, brother. Trust in Intel."
Oxford Guy - Friday, January 14, 2022 - link
The FX 9000 series was as crass as crass can be. There is a long laundry list of AMD’s unfriendly moves and it continues to grow.
Lezmaka - Tuesday, January 4, 2022 - link
Did you not get the same slides as other sites? Ars has a slide that has a lot more info.
https://cdn.arstechnica.net/wp-content/uploads/202...
john_lam - Tuesday, January 4, 2022 - link
Did they announce what process node these are fabricated on? TSMC? Intel 7? Something else?
IntelUser2000 - Tuesday, January 4, 2022 - link
It's Alder Lake. Intel 7.
shabby - Tuesday, January 4, 2022 - link
10nm
repoman27 - Tuesday, January 4, 2022 - link
"Intel’s big win here for this generation is combining both the CPU and the chipset onto one package for its 45 W processors, rather than relying on a mobile chipset."That's an extremely generous interpretation. These are just traditional Type-3 packages with PCH-LP but rebranded as they were pushed past 28W TDP. Remember Tiger Lake H35? Well now Alder Lake has H45. The traditional two-chip platform based on the HP CPU die using the S-BGA package will be H55.
Oxford Guy - Tuesday, January 4, 2022 - link
Only 22?
beisat - Tuesday, January 4, 2022 - link
Which ones typically go in the "standard" NUCs? H, P or U?
AdrianBc - Tuesday, January 4, 2022 - link
Most traditional 4x4 NUCs use 15 W U-series CPUs. The top models of 4x4 NUCs use 28 W U-series CPUs, frequently configured by Intel at slightly higher power limits.
For example I have a NUC with a Coffee Lake U CPU, which has a nominal TDP of 28 W, but the Intel BIOS configures its steady-state power limit at 30 W (and at 50 W for the first half-minute).
At continuous 30 W dissipation, my NUC remains almost silent, but this is a 2018 model and I have heard that in the latest NUC models Intel has replaced the good coolers from the 2018 models with some cheaper coolers, which are much noisier at maximum power.
Nevertheless, in the traditional 4x4 size, 28 W CPUs can be used without problems.
Starting with the Skylake H-based Skull Canyon NUC, Intel has also introduced a series of bigger NUCs with 45 W H-series CPUs.
In conclusion, the answer is that it is likely that there will be NUC models with all of them, i.e. U & P in 4x4 models and H in larger sizes.
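The power behavior described above (28 W nominal TDP, a 30 W steady-state limit, and 50 W allowed for roughly the first half-minute) can be sketched with a simplified two-level power-limit model. This is illustrative only: real Intel hardware enforces PL1 via an exponentially weighted moving average rather than a hard time cutoff, and the PL1/PL2/tau values below are simply the ones from the NUC example.

```python
# Simplified sketch (not Intel's exact algorithm) of a PL1/PL2 power limit:
# the package may draw up to PL2 until the turbo budget (tau seconds) is
# spent, then it is held to the PL1 steady-state limit.
PL1, PL2, TAU = 30.0, 50.0, 30.0  # watts, watts, seconds (from the NUC above)

def allowed_power(t: float) -> float:
    """Approximate package power cap in watts, t seconds into a sustained load."""
    return PL2 if t < TAU else PL1

for t in (0, 15, 30, 120):
    print(f"t = {t:>3}s -> cap {allowed_power(t):.0f} W")
```

This is why a sustained all-core load on such a NUC settles at 30 W after the initial burst, which its cooler handles near-silently.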
superflex - Tuesday, January 4, 2022 - link
I see we're back to Pentium 4 levels of TDP.
Yojimbo - Tuesday, January 4, 2022 - link
Isn't a wide range of power states an achievement of modern semiconductors? Obviously you want to use less power at the same performance, but isn't the ability to operate at higher power draw for increased performance a desirable thing?
TheinsanegamerN - Wednesday, January 5, 2022 - link
No, see, when Intel has high TDP, it's bad; when AMD has it, it doesn't matter anymore.
Hixbot - Tuesday, January 4, 2022 - link
I was hoping to see an 8P+0E i5. I don't think the E cores offer much to gamers, and 6 P cores are not enough.
TheinsanegamerN - Wednesday, January 5, 2022 - link
They wouldn't do that, since then you'd have the i9, i7, and i5 ALL with 8 P cores.
ZoZo - Sunday, January 9, 2022 - link
The E cores do offer something to gamers, by relieving the P cores of most of the background tasks. They leave 100% of the performance of the P cores dedicated to the game and just the game.
TheinsanegamerN - Friday, January 14, 2022 - link
That's why 1% FPS lows are so much worse with E cores enabled, right? Because it's for "gamers"? The "E" cores are a total ripoff: inefficient, slow, latent wastes of silicon that don't help keep the power usage down, as 12400F reviews have shown it manages perfectly fine with 0 E cores. The "E" cores were all show, to steal some benchmarks from AMD.
throAU - Tuesday, January 4, 2022 - link
Desktop i9-12900K vs. i9-12900T: an extra 300 MHz boost for an extra ~140 watts. Ouch. I'd take the T, no contest.
kgardas - Wednesday, January 5, 2022 - link
It's not that easy, otherwise the T would look like a miracle. Anyway, I guess turbo consumption is a hard limit, and once the CPU hits it, it starts throttling down. In the T's case probably quite rapidly, so the final frequency difference will be way higher than just 0.3 GHz.
TheinsanegamerN - Wednesday, January 5, 2022 - link
It'll throttle down a lot harder on the K series too if you actually enforce power limits.
isthisavailable - Tuesday, January 4, 2022 - link
So is the Xe GPU still the exact same as in 11th gen laptop processors?
MDD1963 - Wednesday, January 5, 2022 - link
As the 12600 has no 'E' cores, I'll just download some! :)
GeoffreyA - Friday, January 14, 2022 - link
Virtual cores. Now, that's a thought!
tkSteveFOX - Wednesday, January 5, 2022 - link
Still don't understand why Intel put so many graphics cores in the mobile H series when they are clearly going to have a dedicated GPU. Why waste so much space on the die? Should have been 32-48 EUs all round. Seems like a bad design decision. People won't game on the integrated graphics in most of the designs.
t.s - Wednesday, January 5, 2022 - link
1. I game on Intel iGPU.
2. If I buy an Intel 12th gen laptop, I'll get an H series without a dGPU.
TheinsanegamerN - Wednesday, January 5, 2022 - link
Because there is demand from those of us who want more powerful graphics but not all the headaches that come with dGPU designs. This has been going on since the days of the Iris Pro; it's not like this is something new.
kwohlt - Wednesday, January 5, 2022 - link
Intel "Deep-Link". Arc dGPU and Xe iGPU can be used simultaneously on productivity workloads - Intel claims up to 40% performance boost than just the Arc dGPU alone.eastcoast_pete - Wednesday, January 5, 2022 - link
Maybe for those of us who don't want to have an Nvidia space heater in our ultraportable for some occasional light gaming?
Calin - Friday, January 7, 2022 - link
In the current state of things, a laptop with a discrete gaming GPU could last an extra hour on battery if it disables the dGPU and runs on integrated. And it's not the difference between 10 and 11 hours; it's the difference between 3 and 4.
TheinsanegamerN - Friday, January 14, 2022 - link
That's pretty outdated; gaming laptops with dGPUs have been able to hit 7-8 hours for several years now.
sorten - Wednesday, January 5, 2022 - link
16 PCIe 5.0 lanes, and if you buy a budget mobo you don't get access to any of them :-)
TheinsanegamerN - Wednesday, January 5, 2022 - link
Wrong:
https://www.tweaktown.com/reviews/10010/msi-mag-b6...
Lanes from the CPU are still gen 5, unless you buy a dirt cheap board.
ZoZo - Sunday, January 9, 2022 - link
"unless you buy a dirt cheap board"So you're basically just confirming what he said after saying he was wrong? Am I missing something?
TheinsanegamerN - Friday, January 14, 2022 - link
budget != dirt cheap. There are budget $100 boards that use PCIe 5.0.
dsillers - Wednesday, January 5, 2022 - link
I have a small room and it gets hot in the summer with an 8700 and a 3070 running any type of load. Any views on the T-series (the i7-12700T especially) as a potential way to reduce ambient heat during loads? I don't have any bleeding-edge needs; I just want my programs and games to run faster.
Sorry if it's a dumb question
BushLin - Wednesday, January 5, 2022 - link
You could undervolt and underclock your existing CPU and GPU. Many GPUs are factory clocked to use nearly double the power to get the last 10-20% of performance.
brantron - Wednesday, January 5, 2022 - link
This. Frame rate limits also make a world of difference in some games.
I have the 8700 (non-K) because I wanted lower power, but it misbehaved with the default BIOS: an insane 1.4 V for pointless +100 MHz turbo clocks.
Set each core to 4.3 GHz and 1.15v. Try disabling hyper threading, which helps some games.
Those two dropped the temperature like a rock for me.
Check the ring clock speed, mostly to be sure it isn't too low. Some motherboards default to 4 GHz, but others are 3.7 GHz.
RAM near 3200 with lower than standard timings should not take any effort to run stable. I tried mine with an XMP profile for 3600 CAS14 and I did not have to adjust any BIOS settings.
Even with a 35 watt CPU, it's either going to ignore power limits by default, or enforcing it will behave like a laptop, gyrating between ~4 GHz and ~2 GHz when the turbo timer runs out.
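As a rough sanity check on why the undervolt above drops temperatures so much: dynamic CPU power scales approximately with V² x f, so a modest voltage cut saves far more power than the small clock change suggests. The sketch below uses the 1.15 V / 4.3 GHz settings from the comment and assumes the stock point was ~1.4 V / 4.4 GHz; it is a first-order approximation, not a measurement, and ignores static leakage.

```python
# Back-of-envelope only: dynamic power ~ C * V^2 * f, so the ratio of new to
# old power (with the same capacitance C) reduces to (V2^2 * f2) / (V1^2 * f1).
def relative_dynamic_power(v_old: float, f_old: float,
                           v_new: float, f_new: float) -> float:
    """Return new dynamic power as a fraction of old dynamic power."""
    return (v_new ** 2 * f_new) / (v_old ** 2 * f_old)

# Assumed stock ~1.4 V @ 4.4 GHz vs. the manual 1.15 V @ 4.3 GHz setting.
ratio = relative_dynamic_power(1.40, 4.4, 1.15, 4.3)
print(f"new dynamic power is roughly {ratio:.0%} of stock")
```

The roughly one-third reduction in dynamic power is consistent with the "dropped the temperature like a rock" result for a ~2% clock sacrifice.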
andrewaggb - Tuesday, January 11, 2022 - link
Completely agree. Set FPS limits a little above whatever your monitor can handle on the GPU side. On the CPU side you can undervolt or get a lower-power CPU. I just suspect saving 30 W on the CPU side isn't your main issue, though, and will be much less meaningful than power savings on the GPU side.
vFunct - Thursday, January 6, 2022 - link
Any Xeon models with ECC?
GreenReaper - Thursday, January 13, 2022 - link
All of them, I'd imagine. But they weren't announced this time around.
Mike Bruzzone - Saturday, January 22, 2022 - link
Dr. Cutress, on some data I provided and shared today, establishes that YouTube (tech tube) subscription base means nothing on impressions and cost per impression. YouTube has since removed [?] Dr. Cutress's broadcast on the topic, chosen for top broadcaster. Where did it go?
And on the dGPU add-in card front: there are good Nvidia and AMD dGPU deals if you buy direct from the board houses' (AIB) web sites. You may have to get in line, but you will not be ripped off on wholesale and retail markups. Buying AIB direct is the Nvidia and AMD strategy to take the channel back, after channel and retail sought inflated N6x and 30x0 prices, using AMD and Nvidia margin to buy Alder Lake for resale — selling out AMD and Nvidia, with the product sales margin denied them then spent by channel and retail with Intel, for Intel surplus-loss cost offset and to purchase Alder Lake. You've heard it from me before: inflated AMD and Nvidia GPU prices are being relied on by the channel to offset Intel CPU surplus inventory losses, and by retail to buy Alder Lake! No wonder there is not sufficient dGPU supply at retail, priced right. A lot of retail outlets are limiting AMD and Nvidia purchases, with the AMD and Nvidia margin gain that should have gone to AMD and Nvidia restocking spent instead on Intel, until it gets caught. BUY AMD AND NVIDIA AIB WEB SITE DIRECT. mb