Thanks, Ian, for clarifying. I follow you in tweetland, on Reddit, and on this website. I just realized that you only started using your title a few weeks ago. I didn't know that you got your doctorate in 2011! :)
I distinctly remember some of the podcasts (with the "old crew" around Anand and Brian Klug), and there was always a fun emphasis on _Dr._ Ian Cutress. I've adopted that in my head when I read his stuff. :D
Ian, all but your paper "Algorithm development in computational chemistry" appear to be behind a paywall, but did you ever publish these papers elsewhere? I think it would be cool to read your works, even if I can't understand them. Remember the golden rule of RMS: "Restrict papers unto others as you would have others restrict papers unto you." (kidding)
Algorithm Development in Computational Chemistry is my doctorate thesis. It contains everything from my research papers, so really you don't need them.
I ultimately didn't decide where my papers were published. My research supervisor suggested them all.
In view of all the marketing buzz surrounding "real-time ray tracing" as described by the nVidia CEO himself ("ray tracing" that isn't actually ray tracing, let alone real-time ray tracing, but rather a set of marketing buzzwords designed specifically to create and profit from the illusion that real-time ray tracing is taking place, when it isn't!), I propose the following "do or die" test to clear the air for all time:
*Rooooooll the snare drum!*
I propose adapting Cinebench R20's newly available CPU rendering demonstration to run on any local NVIDIA RTX GPU (nothing 'simple' about this suggestion), changing the default rendering target from the local CPU to nVidia's "real-time ray tracing" D3D GPU-accelerated capability. Exciting, no? (Nah.) In this manner, the sort of "real-time ray tracing" actually supported by RTX should become obvious in a matter of seconds, and it ain't purty... :(
NVIDIA had an old demo that showed off CUDA-based ray tracing. I ran it on a pair of GTX 570s, and you could let the thing run for hours if you wanted a single top-notch-quality image. Many traces per pixel were needed to make a clean image; at least 16, in my opinion, to look decent.
When I heard that NVIDIA was adding ray-tracing acceleration hardware, I was a bit confused. When they said one trace per pixel, and the demos they showed off looked like they were from 1999, I knew it was as bad as I thought.
True, the AI denoising does wonders to make it look less noisy. But coming from someone who has been into real-time graphics programming for some time now: there's no free lunch. Just look at DLSS; it looks no better than the image it was upscaled from. The performance just isn't there. And frankly, even 15 years from now, when we have the horsepower to run ray tracing and deep-learning image processing, there will be other techniques that supersede them in real-time applications. RTX isn't the way forward; it's keeping us in the past.
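To put numbers on the samples-per-pixel point above: Monte Carlo estimates converge with error proportional to 1/sqrt(N), so 16x more rays only cuts noise by 4x. A toy Python sketch (the uniform noise term is a made-up stand-in for path-tracing variance, not a real renderer):

```python
import math
import random

def estimate_pixel(true_value, samples):
    """Average `samples` noisy estimates of one pixel's brightness.
    Each sample stands in for one randomly-directed ray."""
    total = 0.0
    for _ in range(samples):
        total += true_value + random.uniform(-0.5, 0.5)
    return total / samples

random.seed(42)
results = {}
for spp in (1, 16, 256):
    # RMS error of the estimate, measured over many trial "pixels"
    errors = [estimate_pixel(0.5, spp) - 0.5 for _ in range(5000)]
    results[spp] = math.sqrt(sum(e * e for e in errors) / len(errors))
    print(f"{spp:4d} spp -> noise ~ {results[spp]:.3f}")
```

Each 16x increase in samples only quarters the noise, which is why a 1-spp image needs aggressive denoising to be presentable.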
Yes, a different technique is needed. While ray tracing can accomplish what they want to do, I believe it's the wrong path to take. There needs to be a simpler mathematical way to compute what is essentially just changing color.
Threadrippers are definitely coming; she made that very clear! Ian, do you see AMD Ryzen Mobile getting into good laptops like the Surface Book or Dell XPS series anytime soon? The premium laptop segment (other than HP) only has Intel options currently.
The Surface Laptop 3 is HEAVILY rumored to be shipping with at least one SKU using AMD's Picasso (12nm Zen+ APU). I wouldn't expect to see a lot (or any really) of ultra-high end AMD laptops till next year / Ryzen Mobile 3rd Gen though (7nm Zen 2 + Navi).
I really wish someone would have pressed them a little harder on the timeline for TR3. The closest we've gotten to a date was in the financial conference call mentioning Q4. I would hope they'd land it sooner than that.
If AMD were to release a new high-performance GPU within eight months of Radeon VII, that would seriously upset customers who purchased a Radeon VII. High-end Navi is expected anywhere between November and February. How well it will perform remains to be seen, as well as whether it is "high enough" end for you. Competing with a $1000+ card may not be on any roadmap, purely because sales numbers for those are very low. If AMD can compete with the $700-$800 NVIDIA products, that would be enough.
Agree. Starting in the mid-high range (2070 equivalent) also allows for some time to iron out some last kinks that may become more notable with a larger card. The biggest question for me is if and how they'll manage the scale-up.
Yeah, I know that very expensive products don't sell well. The reason to have such products is mindshare: to say that they are the best, that they have the fastest products. That indirectly helps them sell the cheaper products.
Having a halo product, even if it doesn't sell many units is still important. Just like the old example of having the Corvette out front helps sell station wagons, having a halo model at the top of the benchmarking and overclocking world goes a long way to help sell mid range cards.
After watching GN's deep dive on Navi, I am more confident that the later models will deliver competitive high-end performance. The problem with Polaris and Vega is that they were hamstrung architectures to a degree, and adding CUs beyond a certain point didn't really help. RDNA seems to fix this issue and should scale much better. Hopefully that translates into real performance.
We've gotten used to the idea that new products shouldn't upset people who just bought the old model, but frankly I think that's a side effect of the industry stagnating. The first Core 2 Quad models were ~$1000, and less than a year later you had models not much over $250. That used to happen all the time, and people accepted it as just part of being on the cutting edge. The fact it's happening again is great, as far as I'm concerned, speaking as somebody who tends not to have early-adopter money.
The future in GPU design is going to be chiplets. Scaling up in performance there will just be dropping more chiplets and HBM into a package. AMD can use the same fundamental building blocks from a ~$300 mainstream part to a $1000 enthusiast class product. What AMD is doing for Ryzen will happen to Radeon: it is only a matter of time.
Only for low-end/low-power devices would it make sense not to leverage these advanced packaging techniques.
Navi 21 has its codename plastered all over AMD's Linux drivers, so it's simply a matter of when, not if. They wouldn't have said anything if anyone had asked anyway.
So many people wasting time asking softball questions that can be answered with PR fluff.
"Is the rumor mill around AMD products a little out of control sometimes?" Lisa: No, the rumor mill is fucking awesome; without it we wouldn't know how many threads to put on our CPUs or how much to charge!
"Intel says artificial benchmarks are bad, what do you think?" Lisa: I'm going to waste a lot of your time to tell you that you can stop doing your job of independently benchmarking our stuff, because our benchmarks are way more trustworthy, we only use the best benchmarks that are not now nor ever have been biased to show AMD in the best light possible.
"If someone wants to run a cloud server, and that cloud server has a CPU and GPU in it, will you sell them a CPU and/or GPU if they want to run GAMES on that cloud server?" Lisa: Yes, but in a lot more words.
"What do you think about tariffs?" Lisa: Tariffs make stock price down go :( :(
"How important is the halo spot in the GPU market for AMD?" Lisa: You know full fucking well we haven't sniffed the halo spot in GPUs in what seems like 20 fucking years, next question.
AMD graphics has been competitive where it counts. Look at the nv RTX cards: they've got all this ray-tracing silicon that you won't even be able to use for probably 3-4 years. (Yes, I know three AAA titles shoehorned in ray tracing because nvidia paid them to.)
If there's one thing I'd really like AMD to do, it's to drive down prices in the middle ground by undercutting Nvidia in the mainstream with a good-enough card. That would do a lot to rein in the ridiculous pricing in graphics right now. Even the GTX 980s can still play every game made at normal resolutions with full graphics quality. The high end has pushed out so far at this point that unless you are doing renders for a living or gaming at 4K resolution, you are just wasting money for little benefit.
I say this because AMD has already forced Intel to cut prices on CPUs, and I have no doubt they will have to cut prices again once Ryzen 3000 is shipping. The more competition, the better.
Here is a question that I wish AMD would answer: Intel has Quick Sync, and Premiere/After Effects use it a lot. So when you compare two machines with the same GPU but different CPUs (with the same number of cores), the Intel-based machine wins, since Adobe uses Quick Sync heavily, so the end user gets faster encoding/rendering compared to an AMD-based machine.
Why doesn't AMD create such a technology on their CPUs' silicon?
Adobe's optimization is for that specific Intel feature, which Intel likely paid them to do. For AMD to take advantage of it, they would have to make a processor extension that is functionally the same and even appears to the OS as identical to Intel's. This likely isn't possible without violating some form of IP. If it were, then when Intel made SSE in the '90s, AMD wouldn't have called theirs 3DNow! and it wouldn't have had a slightly different implementation.
AMD has broad license rights to all Intel IP due to their cross-license agreement (don't forget AMD created x86-64). They could most certainly implement such an instruction feature, but they'd have to do so without using Intel's specific implementation; it would need to be a new implementation of the same instruction set. Just like AMD uses SSE and its variants in their CPUs, they could implement this if there were a desire and market reason to do so.
My bet is that if such a feature doesn't exist in AMD's silicon, it's because it's not worth it and/or it's marketing fluff that Intel created, and the transistor budget for this feature is better spent on other items like improving IPC. Intel has a big tendency to create marketing-fluff features like this, which no one would even bother implementing if Intel didn't pay them to do it.
They don't have rights to ALL Intel IP. Or even all x86 IP, though they definitely got more favorable terms after AMD64. That's why the Athlons didn't drop right into Slot 1 and AMD had to have their own special motherboards. Intel wouldn't LET them use the new interface. It is also why, today, you can't throw a Ryzen into a Core iWhatever board.
APIs exist to solve that problem. It's not analogous to instruction set extensions, because you can't really use an API to be portable across those - your code is either compiled with them or it's not.
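For what it's worth, the usual workaround for instruction set extensions is function multi-versioning: ship several compiled paths and pick one after probing CPU flags at runtime. A toy Python sketch of the dispatch idea (the /proc/cpuinfo probe is Linux-only, and the kernel names are hypothetical placeholders):

```python
def cpu_flags():
    """Best-effort CPU feature flags. Only works on Linux; other
    platforms would use cpuid intrinsics or OS-specific APIs."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def pick_kernel(flags):
    """Choose the widest vector path the CPU reports support for.
    The names are made-up stand-ins for precompiled code paths."""
    if "avx512f" in flags:
        return "sum_avx512"
    if "avx2" in flags:
        return "sum_avx2"
    if "sse2" in flags:
        return "sum_sse2"   # baseline for any x86-64 CPU
    return "sum_scalar"

print(pick_kernel(cpu_flags()))
```

This is how software exploits vendor or generation-specific extensions without hard-coding one CPU, but each path still has to be written, compiled, and shipped, which is the cost the comment above is pointing at.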
But for video encode/decode, you could pretty much use an API like DXVA to work with whatever hardware-based codec engine was installed. And, not surprisingly, (some) Nvidia GPUs have faster hardware codecs than Intel's QSV or any Radeon.
I think you already know the answer to your question because of your use of brackets.
Current AMD CPUs have more cores. The quality of software encoding is vastly superior, and having a couple of extra cores is preferable to a hardware encoder, because you can set a software encoder to similarly crappy quality and it'll be just as fast as a hardware encoder.
Hardware encoders are really only good for real time streaming or low power recording purposes because they aggressively trade quality for speed.
I know Intel has been focusing on encode quality since at least Haswell, but I don't have a link handy. Anyone should feel free to dig it up, or drop a link for the equivalent AMD tech. I'm not trying to be biased here.
I would have liked more questions regarding their mobile strategy. They're clearly a distant second to Intel in the segment and continue down that path. Intel has twice the cores and higher frequencies, offers a lot more SKUs, and seems to work a lot more closely with OEMs.
What I would have liked was some comment on whether they plan to accelerate their mobile efforts to get 7nm products into mobile (like within the year, or around CES). Ice Lake also seems to have caught up in GPU (memory improvements notwithstanding). AMD needs RDNA and LPDDR4X in mobile, as well as more cores and better power efficiency.
Essentially they need a much more aggressive timeframe and technology leap to also oust Intel from mobile.
I'd be interested in that too. Why is mobile Zen always one generation behind desktop? I'm guessing it made sense at some point when a part wasn't ready or manufacturing was behind, but surely they can catch up? They're essentially selling a one-year-old laptop as new.
The APUs are completely different die designs from the single CPU chiplet that can scale from desktops to servers, and AMD has an extremely limited number of silicon design teams. The only way to get the APUs "up to date" would have been to have the CPUs be a year behind instead, which would have been WAAAAAY WORSE! AMD, with its limited resources, has to smartly pick its battles.
There are many new, lower-spec products. That's what an APU is, usually. It's not the fastest CPU or GPU. It's a cheap-and-cheerful combination that happens to use that 12nm capacity they paid for.
You see something similar with Athlons - they're not going to be selling those yet for Zen 2 because it would be using up chiplets that could be selling at a higher price. If you want Athlon performance you are probably OK with Zen+. If not, you can pay more and just down-level the wattage.
There is a market for very high-level APUs, but right now Intel has a stranglehold on it, and it will take time and consistent delivery at the low-end to change that, because these products are designed well in advance (which is why you often see them using a last-gen CPU).
Intel's mainstream laptop chips seem to be cut from the same cloth as their desktop chips. In AMD's case, their mainstream desktop CPUs don't have an iGPU, so the APU needs to be a separate design. As it's a derivative design, there's some inherent lag.
Does anybody know why mobile computers with AMD processors have fewer configuration options from the computer manufacturers, compared with Intel ones? At least in Lenovo's range...
Will AMD pursue anything in the low power mobile segment? Intel has cheap, gimped Atoms and Pentiums with sub-6W TDP while their Core Y chips are very expensive. An APU to compete with the Pentium 4415Y with i5 performance would be a welcome competitor in the space.
I suspect AMD has given up on the mobile segment; Intel is too pervasive and capable of great discounts on SKUs. The funny thing is that all this hype around AMD is caused by a single piece of silicon glued to an I/O chip, Lego-style. Basically, AMD is showing the same silicon glued together in different manners on a package. Honestly, I am not impressed; there is very little innovation here, basically no news... only the hope that the prices will be lower than Intel's. Still, I remember that the last price war was AMD's ruin. They must be cautious; Intel is so devilish that they could go into the red for a full year, destroying the competition definitively.
Badly glued together? WTH? It's glorious, it scales, and so on: one chip for quad-core up to 64 cores, one design, and if you don't understand the implications of that... well, sorry. It's half a billion dollars just to start making a chip, and it's Intel who needs to be wary of a price war this time around! The chiplet design isn't for you as a customer; it's for AMD. However, there is a minimum price to this design, around $100; under that, a new design must be made. Whether AMD pursues that market is up for discussion. 45W laptops are definitely something Zen 2 will do really well in. At 15W, with the I/O die... well, AMD systems don't need chipsets, unlike Intel's, so that is 2.6W saved, but I still don't feel it will scale down to that power envelope at all. It all remains to be seen.
Gondalf, you criticize AMD for using "Lego" in their CPUs, but guess what: Intel is basically doing the same thing: https://www.anandtech.com/show/14211/intels-interc... As for "Intel is so devilish that they could go into the red for a full year, destroying the competition definitively": I'm sure the FTC would be watching Intel (yet again) if they went back to the same tactics that cost them the few billion that was paid to AMD...
We had to wait two years or so for AMD to finally include the Vega mobile drivers in their driver package; I had to use various hacks to get a working iGPU in my laptop. Will this repeat with the next-gen Ryzen Mobile parts? And I got the famous HP x360 that Dr. Su promoted on her Facebook (or Twitter). The experience up until the AMD driver was released was frustrating for quite a lot of people.
So they need to tighten up their game in the mobile space quite a lot; the OEMs don't give a flying sh.t about AMD there...
"Lisa Su: THATIC was formed several years ago, and we did the original technology transfer at that point in time. We are continuing the joint venture, and most of the work happens on the joint venture side."
The CHIPSET FANS are loud; at least when you build really silent systems, they are a problem. And don't be fooled by the claim that they are semi-passive: I have seen early X570 tests today that made clear the fans spin up even under lower load; you don't need to be running a game or two SSD benchmarks. It's a nightmare for me, as I like to build systems that are unnoticeable under low load. And we all know that with time these small fans become an even bigger issue.
"Ian Cutress: When I spoke with Mark Papermaster at CES, he explained to be that AMD has one CPU architecture group and two implementation groups." Should be "me" not "be": "Ian Cutress: When I spoke with Mark Papermaster at CES, he explained to me that AMD has one CPU architecture group and two implementation groups."
A statement from AMD noted that they were expecting Ryzen 3000 to be competing head-to-head with Ice Lake. Given that, since Ice Lake was to double floating-point muscle per core, in order to provide AVX-512 support, instead of just catching up with Intel's previous generation by doubling floating-point muscle in Ryzen 3000, shouldn't it have been quadrupled, with AVX-512 support included? That would be my question or my advice for AMD. Of course, they may have good reasons: AVX-512 may well be overkill, increasing cost and power consumption without commensurate benefits: the situation is no longer what it was in the Bulldozer days.
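The back-of-the-envelope arithmetic behind that doubling/quadrupling question can be sketched as peak-FLOPS math (figures are illustrative only, assuming fp32 and two FMA units per core; real parts differ in unit counts and clocks):

```python
def peak_gflops(cores, ghz, simd_bits, fma_units=2):
    """Theoretical peak fp32 GFLOPS: lanes * 2 (an FMA counts as
    a multiply plus an add) * FMA units per core."""
    lanes = simd_bits // 32          # fp32 lanes per vector register
    return cores * ghz * lanes * 2 * fma_units

# Hypothetical 8-core, 3 GHz part at different vector widths:
for bits in (128, 256, 512):
    print(f"{bits}-bit SIMD -> {peak_gflops(8, 3.0, bits):.0f} GFLOPS")
```

On paper, each doubling of vector width doubles peak throughput; whether real workloads, die area, and power budgets justify going all the way to 512-bit is exactly the trade-off the comment raises.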
I also hope there will be some discussion or testing of gaming with SMT disabled, at least on the 12-core. The idea being that with so many cores it's feasible to do this if the concern is with performance across apps *and* games. Few apps will use more than 12 cores, and a few games will do better without SMT.
I just got around to reading this, and my first thought was: there were at least two rumors in that article that must have gotten AMD all riled up. I would love to meet some of their senior engineers one day. Many will claim otherwise, but I could take a Zen 2 "CPU" and turn it into something that would have multiple people crap their pants. Zen 2 is like the original Microsoft Word: nowhere near its potential. Robert, etc. have no idea of the potential of Zen 2. To move on to Zen 3 or Zen 4, if they are indeed new architectures, is a waste IMO. With only public info, I have discovered ways to double Zen 2 IPC while not increasing frequency/core count. Zen 2 is such a scalable architecture that it seems a waste to throw it away in a year.
Not without increasing the silicon footprint, reducing power efficiency, and having the extra instruction throughput exploited in only a very limited set of cases.
I think it's probably well-tuned for its intended use cases. They do lots of simulations with different workloads, in order to facilitate this.
quantumshadow44 - Wednesday, June 26, 2019
Ian is a Dr. in what?
Ian Cutress - Wednesday, June 26, 2019
Computational Chemistry, from Oxford. Awarded in 2011 with nine first-author research papers.
https://scholar.google.com/citations?user=8ZSmKjAA...
drexnx - Wednesday, June 26, 2019
Other tech sites: "How much of your life do you not want to be ray traced?"
AnandTech:
"How many molecules are required to measure a cyclic voltammogram?"
quantumshadow44 - Wednesday, June 26, 2019
Nice. You are a good man.
thinklink - Wednesday, June 26, 2019
IanCutress - Wednesday, June 26, 2019
I had a number of people over the years who I respect in the industry tell me I should be using my title, so I made the call after E3.
CaedenV - Wednesday, June 26, 2019
Good call!
zmatt - Wednesday, June 26, 2019
All these years, and you only recently started using your title. I had assumed you had recently earned your doctorate.
ksec - Wednesday, June 26, 2019
Well, he is British; understatement is a culture there.
mode_13h - Thursday, June 27, 2019
IIRC, one article I read claimed that cinema production uses about 1k rays/pixel.
HollyDOL - Thursday, June 27, 2019
Please define what is color, then.
mode_13h - Thursday, June 27, 2019
https://developer.nvidia.com/optix
(This text is to convince the spam filter that my mic-drop of a link isn't actually spam.)
Gustavoar - Wednesday, June 26, 2019
One big question that you guys missed is asking whether AMD has any plans in the near future to compete with Nvidia in the high end, like the 2080 and 2080 Ti.
IanCutress - Wednesday, June 26, 2019
Ian Cutress: How important is the halo spot in the GPU market for AMD?
David Wang: Very, very important. I love to be able to compete very well with NVIDIA.
Gustavoar - Wednesday, June 26, 2019
Yeah, I was just saying that the question wasn't specific enough. Important doesn't mean they plan to compete in the near future.
zmatt - Wednesday, June 26, 2019
Closer to 10-15 years, really. They had some great cards throughout the 2000s.
extide - Wednesday, June 26, 2019
October 24, 2013. The R9 290X was the last halo GPU they sold, and it wasn't THAT long ago. They can, and I am sure they will, get there again.
Cooe - Wednesday, June 26, 2019
Wrong. That'd be 2015's R9 Fury X, which was just as big & ambitious (if not more so) than Big Maxwell (GTX 980 Ti).
mode_13h - Thursday, June 27, 2019
Yeah, Fury X was competitive... if you don't care about heat/power/noise.
AMD graphics has been competitive where it counts. Look at the nv RTX cards: they've got all this ray tracing silicon that you won't even be able to use for probably 3-4 years (yes, I know a few AAA titles shoehorned ray tracing in because nvidia paid them to).
If there's one thing I'd really like AMD to do, it's to drive down prices in the middle ground by undercutting Nvidia in the mainstream with a good-enough card. That would do a lot to rein in the ridiculous pricing in graphics right now. Even the nv 980s can still play every game made at normal resolutions with full graphics quality. The high end has pushed out so far at this point that unless you are doing renders or something for a living, or gaming at 4K resolution, you are just wasting money for little benefit.
I say this because AMD has already forced Intel to cut prices on CPUs, and I have no doubt they will have to cut prices again once Ryzen 3 is shipping. The more competition the better.
hetzbh - Wednesday, June 26, 2019 - link
Here is a question that I wish AMD would answer: Intel has Quick Sync, and Premiere/After Effects uses it a lot. So when you compare two machines with the same GPU but different CPUs (with the same number of cores), the Intel-based machine wins, since Adobe leans on Quick Sync heavily and the end user gets faster encoding/rendering compared to an AMD-based machine.
Why doesn't AMD create such a technology on their CPUs' silicon?
Surfacround - Wednesday, June 26, 2019 - link
When using HandBrake to encode, Quick Sync is IMO "quick-and-dirty"... and if it is not, it is a great lock-in for the product using it. (I believe HandBrake does a better job without Quick Sync.)
zmatt - Wednesday, June 26, 2019 - link
Adobe's optimization is for that specific Intel feature, which Intel likely paid them to do. For AMD to take advantage of it, they would have to make a processor extension that is functionally the same and even appears to the OS as identical to Intel's. This likely isn't possible without violating some form of IP. If it were, then when Intel made SSE in the 90's, AMD wouldn't have called theirs 3DNow and it wouldn't have had a slightly different implementation.
rahvin - Wednesday, June 26, 2019 - link
AMD has broad license rights to all Intel IP due to their cross-license agreement (don't forget AMD created x86-64). They could most certainly implement such a feature, but they'd have to do so without using Intel's specific implementation; it would need to be a new implementation of the same instruction set. Just like AMD uses SSE and its variants in their CPUs, they could implement this if there was a desire and market reason to do so.
My bet is that if such a feature doesn't exist in AMD's silicon, it's because it's not worth it and/or it's marketing fluff that Intel created, and the transistor budget for this feature is better spent on other items like improving IPC. Intel has a big tendency to create marketing fluff features like this which no one would even bother implementing if Intel didn't pay them to do it.
Lord of the Bored - Thursday, June 27, 2019 - link
They don't have rights to ALL Intel IP. Or even all x86 IP, though they definitely got more favorable terms after AMD64.
That's why the Athlons didn't drop right into Slot 1 and AMD had to have their own special motherboards. Intel wouldn't LET them use the new interface. It is also why, today, you can't throw a Ryzen into a Core iWhatever board.
mode_13h - Thursday, June 27, 2019 - link
An analogous feature exists in all AMD GPUs.
QSV is not like an instruction set extension - it's a functional block that's part of Intel's iGPU. Their CPUs with no GPU don't have it.
mode_13h - Thursday, June 27, 2019 - link
APIs exist to solve that problem. It's not analogous to instruction set extensions, because you can't really use an API to be portable across those - your code is either compiled with them or it's not.
But for video encode/decode, you could pretty much use an API like DXVA to work with whatever hardware-based codec engine was installed. And, not surprisingly, (some) Nvidia GPUs have faster hardware codecs than Intel's QSV or any Radeon.
https://developer.nvidia.com/nvidia-video-codec-sd...
Radeons' codec support is mostly about letting users encode/decode just 1 or 2 streams in realtime.
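(To make the API point concrete - a minimal sketch, assuming an ffmpeg build compiled with the relevant vendor support, with `input.mp4` and the output names as placeholders - the same encode request can be pointed at the CPU or at whichever vendor's codec block is present:)

```shell
# Software reference encode on the CPU (libx264)
ffmpeg -i input.mp4 -c:v libx264 -preset medium -crf 23 out_sw.mp4

# Same request routed to Intel's Quick Sync block
ffmpeg -i input.mp4 -c:v h264_qsv -global_quality 23 out_qsv.mp4

# ...or to Nvidia's NVENC block
ffmpeg -i input.mp4 -c:v h264_nvenc -cq 23 out_nvenc.mp4

# ...or to AMD's VCE/VCN block (AMF, on Windows builds)
ffmpeg -i input.mp4 -c:v h264_amf out_amf.mp4
```

The application code barely changes between targets; only the selected encoder does, which is exactly why a QSV-shaped "instruction set" on AMD silicon isn't needed.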
tk11 - Wednesday, June 26, 2019 - link
I think you already know the answer to your question because of your use of brackets.
Current AMD CPUs have more cores. The quality of software encoding is vastly superior, and having a couple extra cores is preferred over a hardware encoder because you can set a software encoder to similarly crappy quality and it'll be just as fast as a hardware encoder.
Hardware encoders are really only good for real time streaming or low power recording purposes because they aggressively trade quality for speed.
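(That trade-off is visible directly in x264's preset ladder - again just a sketch with placeholder file names: pushing the software encoder toward hardware-encoder speed gives back the quality-per-bit advantage it's prized for:)

```shell
# Slow preset: best compression efficiency, but far slower than a hardware block
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 20 out_slow.mp4

# Ultrafast preset: hardware-like speed, but most of the quality-per-bit
# advantage of software encoding is given up
ffmpeg -i input.mp4 -c:v libx264 -preset ultrafast -crf 20 out_fast.mp4
```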
mode_13h - Thursday, June 27, 2019 - link
If that was once true, it's certainly not any more.
Nvidia compares their encode performance at different quality levels, and uses x264 as a reference.
https://developer.nvidia.com/nvidia-video-codec-sd...
I know Intel has been focusing on encode quality since at least Haswell, but I don't have a link handy. Anyone could feel free to dig it up, or drop a link for the equivalent AMD tech. I'm not trying to be biased, here.
Dug - Thursday, June 27, 2019 - link
Because more and more people are moving away from Adobe, so it would be a lot of money and expense for little gain on a small amount of software.
Trixanity - Wednesday, June 26, 2019 - link
I would have liked more questions regarding their mobile strategy. They're clearly a distant second to Intel in the segment and continue on that path. Intel has twice the cores and higher frequencies, has a lot more SKUs, and seems to work a lot more closely with OEMs.
What I would have liked was some comment on whether they plan to accelerate their mobile efforts to get 7nm products into mobile (like within the year or around CES). Ice Lake also seems to have caught up (memory improvements notwithstanding) in GPU. AMD needs RDNA and LPDDR4X in mobile, as well as more cores and better power efficiency.
Essentially they need a much more aggressive timeframe and technology leap to also oust Intel from mobile.
Mr Perfect - Wednesday, June 26, 2019 - link
I'd be interested in that too. Why is mobile Zen always one generation behind desktop? I'm guessing it made sense at some point when a part wasn't ready or manufacturing was behind, but surely they can catch up? They're essentially selling a one-year-old laptop as new.
Cooe - Wednesday, June 26, 2019 - link
The APUs are completely different die designs from the single CPU chiplet that can scale from desktops to servers, and AMD has an extremely limited number of silicon design teams. The only way to get the APUs "up-to-date" would have been to have the CPUs be a year behind instead, which would have been WAAAAAY WORSE! AMD, with their limited resources, has to smartly pick its battles.
GreenReaper - Wednesday, June 26, 2019 - link
There are many new, lower-spec products. That's what an APU is, usually. It's not the fastest CPU or GPU. It's a cheap-and-cheerful combination that happens to use that 12nm capacity they paid for.
You see something similar with Athlons - they're not going to be selling those yet for Zen 2 because it would use up chiplets that could be sold at a higher price. If you want Athlon performance you are probably OK with Zen+. If not, you can pay more and just down-level the wattage.
There is a market for very high-level APUs, but right now Intel has a stranglehold on it, and it will take time and consistent delivery at the low-end to change that, because these products are designed well in advance (which is why you often see them using a last-gen CPU).
mode_13h - Thursday, June 27, 2019 - link
Intel's mainstream laptop chips seem to be cut from the same cloth as their desktop chips. In AMD's case, their mainstream desktop CPUs don't have an iGPU, so the APU needs to be a separate design. As it's a derivative design, there's some inherent lag.
Ruimanalmeida - Wednesday, June 26, 2019 - link
Does anybody know why mobile computers with AMD processors have fewer configuration options from the computer manufacturers' side, compared with Intel ones? At least in Lenovo's range...
IanCutress - Wednesday, June 26, 2019 - link
That's a question to put to the OEMs.
http://www.anandtech.com/show/10000
boozed - Wednesday, June 26, 2019 - link
You found somebody the same height as Ian!
Manch - Thursday, June 27, 2019 - link
They like Twins!!!
serendip - Wednesday, June 26, 2019 - link
Will AMD pursue anything in the low-power mobile segment? Intel has cheap, gimped Atoms and Pentiums with sub-6W TDP, while their Core Y chips are very expensive. An APU to compete with the Pentium 4415Y but with i5 performance would be a welcome competitor in the space.
Gondalf - Thursday, June 27, 2019 - link
I suspect AMD have given up on the mobile segment; Intel is too pervasive and capable of great discounts on SKUs.
The funny thing is that all this hype on AMD is caused by a single piece of silicon glued to an I/O chip à la Lego. Basically AMD is showing the same silicon badly glued in different manners on a package.
Honestly I am not impressed; there is very little innovation here, basically no news... only the hope that prices will be lower than Intel's.
Still, I remember the last price war was AMD's ruin. They must be cautious; Intel is so devilish that they could go into the red for a full year to destroy the competition definitively.
oleyska - Thursday, June 27, 2019 - link
Badly glued together? WTH? It's glorious, it scales, and so on.
One chip from quad core up to 64 cores - one design - and if you don't understand the implications of that... well, sorry.
It's half a billion dollars just to start making a chip, and it's Intel who needs to be wary of a price war this time around!
The chiplet design isn't for you as a customer, it's for AMD.
However, there is a minimum price to this design, and that is $100; under that, a new design must be made.
Whether AMD pursues that market is up for discussion; 45W laptops are definitely something Zen 2 will do really well in.
15W with the I/O die? Well, AMD systems don't need chipsets, unlike Intel, so that is 2.6W saved, but I still don't feel it will scale at all down to that power envelope...
It all remains to be seen.
Korguz - Thursday, June 27, 2019 - link
Gondalf, you criticize AMD for using "Lego" in their CPUs... but guess what, Intel is basically doing the same thing: https://www.anandtech.com/show/14211/intels-interc...
"They must be cautious; Intel is so devilish that they could go into the red for a full year to destroy the competition definitively." I'm sure the FTC would be watching Intel (yet again) if they went back to the same tactics that cost them a few billion that was paid to AMD...
mode_13h - Monday, July 1, 2019 - link
> I suspect AMD have give up on mobile segment
Their agreement with Samsung would suggest otherwise.
oRAirwolf - Wednesday, June 26, 2019 - link
I always feel like these Q&A sessions are so devoid of any meaningful conversation and answers. It's just fluff and non-answers.
IanCutress - Thursday, June 27, 2019 - link
It's difficult to drill down to a specific topic with multiple journalists in the room.
haplo602 - Thursday, June 27, 2019 - link
We had to wait two years or so for AMD to finally include the Vega mobile drivers in their driver package, and had to use various hacks to get the iGPU working in my laptop. Will this repeat with the next-gen Ryzen Mobile parts? And I got the famous HP x360 that Dr. Su promoted on her Facebook (or Twitter). The experience up until the AMD driver was released was frustrating for quite a lot of people.
So they need to tighten up their game in the mobile space quite a lot; the OEMs don't give a flying sh.t about AMD there...
scineram - Thursday, June 27, 2019 - link
Stop lying!
mode_13h - Thursday, June 27, 2019 - link
Thanks, but I wish you'd asked why they're not selling Ryzen Pro APUs to the general public.
IanCutress - Thursday, June 27, 2019 - link
They feel that that segment of the market is better addressed by the Ryzen line.
mode_13h - Thursday, June 27, 2019 - link
Huh? So, they've said as much?
A lot of NAS & home server builders would pay a premium for a Ryzen Pro APU, so that we don't have to run a dGPU.
They could even sell them direct. Just OEM packaging.
peevee - Thursday, June 27, 2019 - link
"Lisa Su: THATIC was formed several years ago, and we did the original technology transfer at that point in time. We are continuing the joint venture, and most of the work happens on the joint venture side."
Lisa Su, a Chinese agent.
mode_13h - Thursday, June 27, 2019 - link
If she were a Chinese agent, then she wouldn't need THATIC to transfer the IP - she'd just give it away, for free.
Korguz - Thursday, June 27, 2019 - link
wow peevee..... a little racist are we ??
Gothmoth - Thursday, June 27, 2019 - link
The CHIPSET FANS are loud; at least when you build really silent systems, they are a problem. And don't be fooled when they claim to be semi-passive: I have seen early X570 tests today that made clear the fans spin up even under low load. You don't need to be running a game or two SSD benchmarks. It's a nightmare for me, as I like to build systems that are unnoticeable under low load, and we all know that with time these small fans become an even bigger issue.
mode_13h - Thursday, June 27, 2019 - link
Can't you just mod the mobo to use a bigger heatsink/fan?
I know it's not a nice problem to have, but I guess it's either that or don't use an X570.
mat9v - Friday, June 28, 2019 - link
I didn't, I don't, and probably won't in the future. Next please.
Korguz - Friday, June 28, 2019 - link
huh ??
ballsystemlord - Monday, July 1, 2019 - link
Only 1 spelling error, good work Ian!
"Ian Cutress: When I spoke with Mark Papermaster at CES, he explained to be that AMD has one CPU architecture group and two implementation groups."
Should be "me" not "be":
"Ian Cutress: When I spoke with Mark Papermaster at CES, he explained to me that AMD has one CPU architecture group and two implementation groups."
ballsystemlord - Monday, July 1, 2019 - link
Oops, website froze and I double posted...
quadibloc - Tuesday, July 2, 2019 - link
A statement from AMD noted that they were expecting Ryzen 3000 to compete head-to-head with Ice Lake. Given that Ice Lake was to double floating-point muscle per core in order to provide AVX-512 support, shouldn't Ryzen 3000 have quadrupled its floating-point muscle, with AVX-512 support included, instead of just catching up with Intel's previous generation by doubling it? That would be my question or my advice for AMD.
Of course, they may have good reasons: AVX-512 may well be overkill, increasing cost and power consumption without commensurate benefits; the situation is no longer what it was in the Bulldozer days.
Arbie - Tuesday, July 2, 2019 - link
==> For the upcoming Ryzen 3xxx reviews:
I hope we'll see details on the PBO & XFR action. Specifically, how many cores clock how high, on the stock cooler and on premium air.
Thx
Arbie - Tuesday, July 2, 2019 - link
==> For the upcoming Ryzen 3xxx reviews:
I also hope there will be some discussion or testing of gaming with SMT disabled, at least on the 12-core. The idea being that with so many cores it's feasible to do this if the concern is with performance across apps *and* games. Few apps will use more than 12 cores, and a few games will do better without SMT.
eek2121 - Thursday, July 4, 2019 - link
I just got around to reading this, and my first thought was: there were at least 2 rumors in that article that started to get AMD all riled up. I would love to meet some of their senior engineers one day. Many will claim otherwise, but I could take a Zen 2 "CPU" and turn it into something that would have multiple people crap their pants. Zen 2 is like the original Microsoft Word: nowhere near its potential. Robert, etc. have no idea of the potential of Zen 2. To move on to Zen 3 or Zen 4, if they are indeed new architectures, is a waste IMO. With only public info, I have discovered ways to double Zen 2 IPC while not increasing frequency/core count. Zen 2 is such a scalable architecture that it seems a waste to throw it away in a year.
mode_13h - Thursday, July 4, 2019 - link
Not without increasing the silicon footprint, reducing power efficiency, and only having the extra instruction throughput exploited in a very limited set of cases.
I think it's probably well-tuned for its intended use cases. They do lots of simulations with different workloads in order to facilitate this.