Much has been made of the advent of low-level graphics APIs over the last year, with APIs based on this concept sprouting up on a number of platforms in a very short period of time. For game developers this has changed the API landscape dramatically, and it’s no surprise that as a result API news has come to center on the annual Game Developers Conference. With the 2015 conference taking place this week, we’re going to hear a lot more about these APIs in the run-up to the release of DirectX 12 and others.

Kicking things off this week is AMD, who is going first with an update on Mantle, their in-house low-level API. The first of the low-level APIs to be announced, and so far limited to AMD’s GCN architecture, Mantle has prompted quite a bit of pondering over its future in light of the more recent developments of DirectX 12 and glNext. AMD in turn is seeking to answer these questions first, before Microsoft and Khronos take the stage later this week for their own announcements.

In a news post on AMD’s gaming website, the company has announced that due to the progress on DX12 and glNext, it is changing direction on the API. The API will be sticking around, but AMD’s earlier plans have partially changed. As originally planned, AMD is transitioning Mantle application development from a closed beta to a (quasi) released product via the release of a programming guide and API reference this month. However, AMD’s broader plans to also release a Mantle SDK to allow full access – particularly allowing it to be implemented on other hardware – have been shelved. In place of that, AMD is refocusing Mantle on being a “graphics innovation platform” to develop new technologies.

As far as “Mantle 1.0” is concerned, AMD is acknowledging at this point that Mantle’s greatest benefit – reduced CPU usage due to low-level command buffer submission – is something that DX12 and glNext can do just as well, negating the need for Mantle in this context. For AMD this is still something of a win, as it led Microsoft and Khronos to implement the core ideas of Mantle in the first place, but it also means that Mantle would be relegated to a third wheel. As a result AMD is shifting focus, and advising developers looking to tap Mantle for its draw call benefits (and other features also found in DX12/glNext) to just use those forthcoming APIs instead.

Mantle’s new focus in turn is to serve as a testbed for future graphics API development. Along with releasing the specifications for “Mantle 1.0”, AMD will essentially keep the closed beta program open for the continued development of Mantle, building it in conjunction with a limited number of partners in a fashion similar to how Mantle has been developed so far.

The biggest change here is that any plans to make Mantle open have been put on hold for the moment with the cancelation of the Mantle SDK. With Mantle going back into development and rendered redundant by DX12/glNext, AMD has canned what was from the start the hardest to develop and least likely to materialize feature, keeping the API proprietary (at least for now) for future development. This is not to say that AMD has given up on their “open” ideals entirely though, as the company is promising to deliver more information on their long-term plans for the API on the 5th, including their future plans for openness.


Mantle Pipeline States

As for what happens from here, we will have to see what AMD announces later this week. AMD’s announcement is essentially in two parts: today’s disclosure on the status of Mantle, and a further announcement on the 5th. It’s quite likely that AMD already has their future Mantle features in mind, and will want to discuss those after the DX12 and glNext disclosures.

Finally, from a consumer perspective Mantle won’t be going anywhere. Mantle remains in AMD’s drivers and Mantle applications continue to work, and for that matter there are still more Mantle-enabled games to come (pretty much anything Frostbite, for a start). How many more games there will be beyond 2015 though – basically anything post-DX12 – remains to be seen, as developers capable of targeting Mantle will almost certainly want to target DX12 as well as soon as it’s ready.

Update 03/03: To add some further context to AMD's announcement, we have the announcement of Vulkan (aka glNext). In short Mantle is being used as a building block for Vulkan, making Vulkan a derivative of Mantle. So although Mantle proper goes back under wraps at AMD, "Mantle 1.0" continues on in an evolved form as Vulkan.

Source: AMD


  • gruffi - Wednesday, March 4, 2015 - link

    No, it's wrong. Nvidia never worked on the DX12 API. AMD did. Nvidia fangirls seem to be really desperate to make up stories considering all the bad Nvidia media lately.
  • D. Lister - Wednesday, March 4, 2015 - link

    So calling someone a girl is an insult from your point of view? You must be one of the smartest people on the internet. Like AMD, do you? Makes perfect sense, considering. Why not buy a lot of AMD stock while you're at it, lol.
  • CPUGPUGURU - Wednesday, March 4, 2015 - link

    In addition to Nvidia’s new Maxwell GPU having top-of-the-line performance and power efficiency, it has another feature that will probably make a lot more difference in the real world: it’s the first GPU to offer full support for Microsoft’s upcoming DirectX 12 and Direct3D 12 graphics APIs. According to Microsoft, it has worked with Nvidia engineers in a “zero-latency environment” for several months to get DX12 support baked into Maxwell and graphics drivers. Even more importantly, Microsoft then worked with Epic to get DirectX 12 support baked into Unreal Engine 4, and to build a tech demo of Fable Legends that uses DX12. Back in March, when Microsoft officially unveiled DirectX 12 (and D3D 12), it surprised a lot of people by proclaiming that most modern Nvidia GPUs (Fermi, Kepler, and Maxwell) support DX12.
  • CPUGPUGURU - Wednesday, March 4, 2015 - link

    Read Learn

    Microsoft officially unveiled DirectX 12 (and D3D 12), it surprised a lot of people by proclaiming that most modern Nvidia (Fermi, Kepler, and Maxwell) support DX12. One of the surprise announcements at the show is that Nvidia will support DX12 on every Fermi, Kepler, and Maxwell-class GPU. That means nearly every GTX 400, 500, and 600 series card will be supported.

    At GDC 2014, Microsoft and Nvidia (NO AMD Here) have taken the lid off DirectX 12 — the new API that promises to deliver low-level, Mantle-like latencies with vastly improved performance and superior hardware utilization compared to DX11. Even better, DirectX 12 (and D3D 12) are backwards compatible with virtually every single GPU from the GTX 400 to the present day.

    Interestingly, AMD isn’t necessarily following suit — the company has indicated that it will support DX12 on all GCN-based hardware.
  • FlushedBubblyJock - Friday, March 27, 2015 - link

    WOW - really ?
    " That means nearly every GTX 400, 500, and 600 series card will be supported."
    OMG...

    rofl amd is so hosed !
  • Crunchy005 - Wednesday, March 4, 2015 - link

    Wait, Maxwell was the first GPU to support it? Either way I see GCN 1.0, 1.1, and 1.2 all supported here, granted GCN 1.0 is buggy. Also, that's the last 3 generations of GCN supported by AMD. Nvidia also has most of theirs working, although it looks like at the time of this article they don't have Fermi support. Where are you getting the info that says Maxwell was the first to support DX12? Also the R9 290X had top-of-the-line performance until the 980 came out... funny how they leapfrog, but as soon as Nvidia is in the lead Nvidia fans act like AMD never was, and deny that Nvidia is behind when they are.

    http://anandtech.com/show/8962/the-directx-12-perf...
  • CPUGPUGURU - Wednesday, March 4, 2015 - link

    Fermi does support DX12, but Maxwell has the full DX12 implementation. AMD's hot, watt-wasting R9 290X never beat top end Kepler 780/Titan, and Maxwell 980 widened the performance per watt gap. Try using a search engine for "Intel reverse engineered x86 64 bit" and use it for Maxwell DX12, maybe you will learn a thing or two. Stop believing AMD's hype, they lost all cred years ago; everything AMD spews is propaganda wrapped in marketing BS. Sandy Bridge came out before any AMD APU, and Intel had integrated graphics before Sandy Bridge. Intel's Ring Bus runs circles around AMD's northbridge; read: "Intel's ring bus is a lot faster than AMD's on-die northbridge—the ring bus design connects the CPU and GPU at 384GB/s, while the link between AMD's northbridge and the GPU is 27GB/s."

    Sorry but I lost all hope in AMD; each and every CPU/APU and rebranded GPU has been overhyped, IPC-crippled, and sucks watts. I am sick of AMD fool tools rewriting history and smearing a new shade of lipstick on the AMD piggy. I hate BSing fantards who are paid to post hype-pumping propaganda; I lived it, built it, benchmarked it and sold it, so quit apologizing for AMD's shortcomings. It's an ARM vs Intel world now; there is no need for a watt-wasting, IPC-crippled AMD that offers nothing to the x86 world. So stop living in the past and rewriting history: debt-laden AMD fell and can't get up, and now late, limp and lame AMD lives on propaganda, bogus benchmarks, and hype-pumping the next coming of its CPU/GPU/APU savior. Well, it's not gonna happen. AMD's ARM core is a generic me-too up against deep-pocketed custom ARMed armies, SeaMicro makes money selling Intel Inside servers, Skylake is around the corner, and Max Daddy Maxwell is ready and waiting. It's way too late and too lame for debt-laden AMD.

    Cry me an Amazon river with the Fat Lady singing from a canoe. Turn out the lights... the party's over.
  • FlushedBubblyJock - Friday, March 27, 2015 - link

    Said in an angry, loud Nixonian voice, with cheeks flapping and jowling and fists shaking by sides:

    " Mantle , We're the Core of the Earth ... "

    bwahahhaaaaaa
  • TheJian - Saturday, March 7, 2015 - link

    http://blogs.nvidia.com/blog/2014/03/20/directx-12...
    "Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.

    Gosalia demonstrated the new API with a tech demo of the Xbox One racing game Forza running on a PC powered by an NVIDIA GeForce Titan Black."

    So exactly a year ago they said they'd been working with them for FOUR years. They even had a demo of an Xbox One game running on PC hardware with DX12 a YEAR ago at GDC 2014, literally working hand in hand to get the demo going for a year. Did AMD have a GDC 2014 demo of DX12? No, because they were wasting time on Mantle instead of DirectX or OpenGL (which, if you're running Linux, is pretty important; where are AMD's killer Linux drivers?).

    How exactly do you think you get a demo of a game NOT made for DX12 working with your hardware, drivers etc a YEAR ago without being involved in working on it for at least the PREVIOUS year as he said hand in hand. I guess you should call Nvidia and tell them they weren't really working on what they were working on...LOL.

    http://www.extremetech.com/gaming/198964-dx12-conf...
    NVidia vs. AMD DX12 Star Swarm. Unlike the rosy slide Anandtech shows, AMD is getting killed here. While you're at it take a look at Nvidia's DX11 vs. AMD's. Killed there too. Clearly Mantle wasn't needed to make DX11 rock a bit more. You just needed to put in some driver time on DX11 instead of something like Mantle correct? OVER 3x faster than 290x for 980. Clearly Nvidia was putting in DirectX driver time just as they said (which affects a TON of games unlike Mantle). This is just last month.

    "The first thing people are going to notice is that the GTX 980 is far faster than the R9 290X in a benchmark that was (rightly) believed to favor AMD as a matter of course when the company released it last year. I’ll reiterate what I said then — Star Swarm is a tech demo, not a final shipping product. While Oxide Games does have plans to build a shipping game around their engine, this particular version is still designed to highlight very specific areas where low-latency APIs can offer huge performance gains."

    So it's AMD's benchmark, but Nvidia slaughtered them in their best case scenario, which is what I call a benchmark made for them that is NOT ever going to be an actual game. They supposedly have plans for a game using the engine but it won't BE a game itself. I really doubt NV would say they were working hand in hand if it wasn't true. Surely MS would have something to say. Who else do you think worked on it with only two major gpu vendors in the running? Why wasn't AMD used for the GDC 2014 demo if it wasn't NV who helped forge it? It's an Xbox1 game running on AMD hardware in a console, but MS chose NV to do the demo? Odd? NO, NV worked on DX12.

    The OP was right, AMD never planned to make it open. OPEN = running on Nvidia hardware as MANTLE, not some fork of it. Mantle didn't even work on all of AMD's own hardware, let alone anyone else's (Intel said they were rebuffed multiple times too).
  • 0VERL0RD - Wednesday, March 4, 2015 - link

    Debt-laden AMD was busy working with both MS & Sony. Don't think MS cripples the mouth that feeds Xbox for Nvidia, whose GPU-only margins kept them out of both consoles!
