To many out there it may seem like DirectX 12 is still a brand-new technology – and in some ways it still is – but in fact we’ve now been talking about the graphics API for the better part of half a decade. Microsoft first announced the then-next-generation graphics API to much fanfare back at GDC 2014, with the initial iteration shipping as part of Windows 10 a year later. For a multitude of reasons, DirectX 12 adoption is still in its early days – software dev cycles are long and OS adoption cycles are longer still – but with their low-level graphics API firmly in place, Microsoft’s DirectX teams are already hard at work on the next generation of graphics technology. And now, as we can finally reveal, the future of DirectX is going to include a significant focus on ray tracing.

This morning at GDC 2018, as part of a coordinated release with some of their hardware and software partners, Microsoft is announcing a major new feature addition to the DirectX 12 graphics API: DirectX Raytracing. Doing exactly what it says on the tin, DirectX Raytracing (DXR) will provide a standard API for hardware- and software-accelerated ray tracing under DirectX, allowing developers to tap into the rendering model for newer and more accurate graphics and effects.

Going hand-in-hand with both new and existing hardware, the DXR command set is meant to provide a standardized means for developers to implement ray tracing in a GPU-friendly manner. Furthermore, as an extension of the existing DirectX 12 feature set, DXR is meant to be tightly integrated with traditional rasterization, allowing developers to mix the two rendering techniques to suit their needs, using whichever technique delivers the best effects and performance for a given task.

Why Ray Tracing Lights the Future

Historically, ray tracing and its close cousin path tracing have in most respects been the superior rendering techniques. By rendering a scene closer to how human vision actually works – focusing on where rays of light come from, what they interact with, and how they interact with those objects – they can produce a far more accurate image overall, especially when it comes to lighting in all of its forms. Specifically, ray tracing works like human vision in reverse (in a manner of speaking), casting rays out from the viewer to objects and then bouncing from those objects to the rest of the world, ultimately determining the interactions between light sources and objects in a realistic manner. As a result, ray tracing has been the go-to method for high-quality rendering, particularly static images, movies, and even pre-baked game assets.


Ray Tracing Diagram (Henrik / CC BY-SA 4.0)
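To make the “vision in reverse” idea a bit more concrete, here’s a minimal C++ sketch of the core recursion. Every scene routine below is a stub of our own invention – the point is the shape of the algorithm, not a working renderer.

```cpp
// Minimal sketch of ray tracing "in reverse": rays start at the viewer, not
// at the lights. All scene routines are hypothetical stubs.
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { bool valid; Vec3 point, normal; };

Hit  intersectScene(const Ray&)           { return { false, {}, {} }; } // stub: nearest surface hit by the ray
Vec3 directLighting(const Hit&)           { return {}; }                // stub: shadow rays cast toward each light
Ray  reflectedRay(const Ray&, const Hit&) { return {}; }                // stub: bounce direction off the surface
Vec3 add(const Vec3& a, const Vec3& b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Cast a ray from the viewer into the scene. On a hit, gather the light
// reaching that point directly, then recurse: the surface is itself lit by
// whatever its reflection "sees", bounce after bounce.
Vec3 trace(const Ray& ray, int depth)
{
    Hit hit = intersectScene(ray);
    if (!hit.valid || depth <= 0)
        return {}; // the ray escaped the scene, or its bounce budget ran out
    return add(directLighting(hit), trace(reflectedRay(ray, hit), depth - 1));
}
```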

However, the computational costs of photorealistic ray tracing are incredible, owing not only to the work required to trace each individual ray, but also to the sheer number of them. At least one ray must be cast for every screen pixel, and each ray may be reflected, refracted, and recursively regenerated many times over – bouncing from object to object, refracting through some surfaces and diffusing along others – all to determine the light and color values that ultimately influence a single pixel.


An illustration of ray recursion in a scene
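Some quick back-of-the-envelope math shows how fast this adds up. The branching factor and recursion depth here are illustrative assumptions on our part, not figures from Microsoft:

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
    // Illustrative assumptions, not vendor figures:
    const uint64_t width = 1920, height = 1080; // one primary ray per pixel at 1080p
    const uint64_t branching = 2;               // say, one reflection + one shadow ray per hit
    const int      maxDepth  = 3;               // bounces before a ray is terminated

    // Each bounce can spawn `branching` new rays: a geometric series 1 + b + b^2 + ...
    uint64_t raysPerPixel = 0, level = 1;
    for (int d = 0; d <= maxDepth; ++d) {
        raysPerPixel += level;
        level *= branching;
    }

    const uint64_t raysPerFrame = width * height * raysPerPixel;
    printf("Rays per frame: %" PRIu64 "\n", raysPerFrame);
    printf("Rays per second at 60 fps: ~%.1f billion\n", raysPerFrame * 60 / 1e9);
    return 0;
}
```

Even with these modest assumptions, a single 1080p frame works out to roughly 31 million rays – nearly 2 billion rays per second at 60 fps – before counting the cost of intersecting each ray against the scene’s geometry.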

As a consequence, ray tracing has not been suitable for real-time rendering, limiting its use to “offline” scenarios where systems can take as much time as they need. Instead, real-time graphics has been built around rasterization, a beautiful, crass hack that fundamentally projects 3D space onto a 2D plane. Reducing much of the rendering process to a 2D image greatly simplifies the total workload, making real-time rendering practical. The downside, as one might expect, is lower quality: instead of accurate light simulations, pixel and compute shaders provide approximations of varying quality, and ultimately shaders can’t entirely make up for the lack of end-to-end 3D processing and simulation.
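For contrast, the heart of the rasterization “hack” fits in a few lines. This is the textbook perspective divide, with hypothetical types of our own:

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// The crux of rasterization: collapse a 3D point (in camera space) onto a 2D
// image plane. Everything downstream operates on flat triangles and fragments
// rather than on light transport through the scene.
Vec2 projectToImagePlane(const Vec3& p, float focalLength)
{
    return { focalLength * p.x / p.z, focalLength * p.y / p.z };
}
```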

While practical considerations mean that rasterization has been – and will continue to be – the dominant real-time rendering technique for many years to come, the holy grail of real-time graphics is still ray tracing, or at least the quality it can provide. As a result, there’s been an increasing amount of focus on merging ray tracing with rasterization in order to combine the strengths of both techniques, pairing rasterization’s efficiency and existing development pipeline with the accuracy of ray tracing.

While just how best to do that is going to be up to developers on a game-by-game basis, the most straightforward method is to rasterize a scene and then use ray tracing to light it, following that up with another round of pixel shaders to better integrate the two and add any final effects. This leverages ray tracing’s greatest strengths in lighting and shadowing, allowing for very accurate lighting solutions that properly simulate light reflections, diffusion, scattering, ambient occlusion, and shadows. Or to put this another way: faking realistic lighting in rasterization is getting to be so expensive that it may soon simply be easier to do it the right way to begin with.
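As a rough illustration of that hybrid approach, the outline below sketches the three stages as stub functions. None of these names come from DXR or any particular engine – it’s simply the flow described above:

```cpp
// Hypothetical outline of a hybrid frame; every type and function here is a
// stand-in for engine code, not part of the DXR API.
struct GBuffer {};
struct LightBuffer {};
struct Frame {};

GBuffer     rasterizeGeometry()                                    { return {}; } // 1. classic raster pass
LightBuffer raytraceLighting(const GBuffer&)                       { return {}; } // 2. rays for shadows, reflections, AO
Frame       compositeFinal(const GBuffer&, const LightBuffer&)     { return {}; } // 3. pixel shaders integrate + effects

Frame renderFrame()
{
    GBuffer     g     = rasterizeGeometry();  // project and shade the scene cheaply
    LightBuffer light = raytraceLighting(g);  // spend rays only where they pay off
    return compositeFinal(g, light);          // merge the two and apply final effects
}
```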

Enter DirectX Raytracing

DirectX Raytracing, then, is Microsoft laying the groundwork to make this practical by creating a ray tracing API that works alongside the company’s existing rasterization APIs. Technically speaking, GPUs are already generic enough that developers could implement a form of ray tracing purely through shaders today; however, doing so would miss out on the opportunity to tap into specialized GPU hardware units to help with the task, not to mention that the entire process would be non-standard. So, both to expose new hardware capabilities and to hand off some of the optimization work around this process to GPU vendors, this functionality is instead being implemented through new API commands for DirectX 12.

But as with Microsoft’s other DirectX APIs, it’s important to note that the company isn’t defining how the hardware should work, only that the hardware needs to support certain features. Past that, it’s up to the individual hardware vendors to create their own backends for executing DXR commands. As a result – and especially as this is so early – everyone from Microsoft to the hardware vendors is being intentionally vague about how hardware acceleration is going to work.

At the base level, DXR will have a full fallback layer for working on existing DirectX 12 hardware. As Microsoft’s announcement is aimed at software developers, they’re pitching the fallback layer as a way for developers to get started with DXR today. It’s not the fastest option, but it lets developers immediately try out the API and begin writing software to take advantage of it while everyone waits for newer hardware to become more prevalent. However, the fallback layer is not limited to just developers – it’s also a catch-all to ensure that all DirectX 12 hardware can support ray tracing – and talking with hardware developers, it sounds like some game studios may try to include DXR-driven effects as soon as late this year, if only as an early technical showcase to demonstrate what DXR can do.

When the fallback layer is invoked, DXR will be executed via DirectCompute compute shaders, which are already supported on all DX12 GPUs. On the whole, GPUs are not great at ray tracing, but they’re not half-bad either. As GPUs have become more flexible they’ve become easier to map to ray tracing, and there are already a number of professional solutions that can use GPU farms for the job. Faster still, of course, is mixing that with optimized hardware paths, and this is where hardware acceleration comes in.
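In practice, then, an engine would pick its path at startup. Below is a minimal sketch of that decision; it follows D3D12’s usual CheckFeatureSupport pattern, but the OPTIONS5/RaytracingTier names are our assumption of how the headers will look and may differ in the experimental SDK:

```cpp
#include <d3d12.h>

// Sketch: decide between a hardware DXR backend and the compute-shader
// fallback. Enum/struct names here are assumptions (see note above).
bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options, sizeof(options))))
        return false; // runtime predates DXR entirely
    return options.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```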

Microsoft isn’t saying just what hardware acceleration of DXR will involve, and the high-level nature of the API means that it’s rather easy for hardware vendors to mix hardware and software stages as necessary. This means it’s up to GPU vendors to provide the execution backends for DXR and to make DXR run as efficiently as possible on their various microarchitectures. When it comes to implementing those backends, in turn, there are some parts of the ray tracing process that can be done more efficiently in fixed-function hardware than in shaders, so Microsoft is giving GPU vendors the means to accelerate DXR with this hardware in order to further close the performance gap between ray tracing and rasterization.

DirectX Raytracing Planned Support
Vendor             Support
AMD                Indeterminate - Driver Due Soon
NVIDIA Volta       Hardware + Software (RTX)
NVIDIA Pre-Volta   Software

For today’s reveal, NVIDIA is simultaneously announcing that they will support hardware acceleration of DXR through their new RTX Technology. RTX combines previously unannounced Volta architecture ray tracing features with optimized software routines to provide a complete DXR backend, while pre-Volta cards will use the shader-based fallback layer. Meanwhile, AMD has also announced that they’re collaborating with Microsoft and will be releasing a driver in the near future that supports DXR acceleration. The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.

Though ultimately, the idea of hardware acceleration may be a (relatively) short-lived one. Since the introduction of DirectX 12, Microsoft’s long-term vision – and indeed the GPU industry’s overall vision – has been for GPUs to become increasingly general-purpose, with successive generations of GPUs moving farther and farther in this direction. As a result there is talk of GPUs doing away with fixed-function units entirely, and while this kind of thinking has admittedly burnt vendors before (Intel Larrabee), it’s not unfounded. Greater programmability will make it even easier to mix rasterization and ray tracing, and farther in the future still it could lay the groundwork for pure ray tracing in games.

Unsurprisingly then, the actual DXR commands for DX12 are very much designed for a highly programmable GPU. While I won’t get into programming minutiae better served by Microsoft’s dev blog, Microsoft’s eye is solidly on the future. DXR will not introduce any new execution engines in the DX12 model – the two primary engines remain the graphics (3D) and compute engines – and indeed Microsoft is treating DXR as a compute task, meaning it can be run on top of either engine. Meanwhile, DXR will introduce multiple new shader types to handle ray processing, including ray-generation, closest-hit, any-hit, and miss shaders. Finally, the 3D world itself will be described using what Microsoft terms an acceleration structure, a representation of the full 3D environment that has been optimized for GPU traversal.
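As a taste of what this looks like on the host side, here’s a sketch of kicking off a ray-traced dispatch. The structure and method names follow Microsoft’s published DXR materials as we understand them – treat them as provisional – and the shader table addresses are placeholders the application would fill in after building its pipeline state and acceleration structure:

```cpp
#include <d3d12.h>

// DXR is dispatched like a compute workload: a dispatch descriptor names the
// shader tables holding the new shader types (ray-generation, miss, and hit
// groups containing closest-hit/any-hit shaders) plus the launch dimensions.
void TraceScene(ID3D12GraphicsCommandList4* cmdList,
                D3D12_GPU_VIRTUAL_ADDRESS_RANGE rayGenRecord,          // placeholder
                D3D12_GPU_VIRTUAL_ADDRESS_RANGE_AND_STRIDE missTable,  // placeholder
                D3D12_GPU_VIRTUAL_ADDRESS_RANGE_AND_STRIDE hitGroups,  // placeholder
                UINT width, UINT height)
{
    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord = rayGenRecord; // entry point that casts the first rays
    desc.MissShaderTable           = missTable;    // runs when a ray hits nothing
    desc.HitGroupTable             = hitGroups;    // closest-hit / any-hit shaders
    desc.Width  = width;   // typically one ray-generation invocation per pixel
    desc.Height = height;
    desc.Depth  = 1;
    cmdList->DispatchRays(&desc);
}
```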

Eyes on the Future

Like the announcement of DirectX 12 itself back in 2014, today’s announcement of DirectX Raytracing is meant to set the stage for the future for Microsoft and its hardware and software partners. Interested developers can get started with DXR today by enabling Win10 FCU’s experimental mode (see the sketch below). Meanwhile, top-tier software developers like Epic Games, Futuremark, DICE, Unity, and Electronic Arts’ SEED group have already announced that they plan to integrate DXR support into their engines. And, as Microsoft promises, there are more groups yet to come.


Project PICA PICA from SEED, Electronic Arts
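For those wanting to experiment, here’s roughly what opting in looks like. D3D12EnableExperimentalFeatures is the documented entry point for Windows 10’s experimental D3D12 features (and requires Developer Mode); whether the early DXR SDK gates ray tracing behind its own GUID is an assumption on our part – we show the documented D3D12ExperimentalShaderModels GUID here, so check the SDK samples for the exact incantation:

```cpp
#include <d3d12.h>

// Opt into experimental D3D12 features; must be called before creating the
// device, and Windows 10 must be in Developer Mode. The GUID below is
// illustrative - the early DXR SDK may use its own prototype GUID.
bool EnableExperimentalFeatures()
{
    UUID feature = D3D12ExperimentalShaderModels;
    return SUCCEEDED(D3D12EnableExperimentalFeatures(1, &feature, nullptr, nullptr));
}
```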

Though even with the roughly one-year head start that Microsoft’s closest developers have received, my impression from all of this is that DXR is still a very long-term project – perhaps even more so than DirectX 12. While DX12 was a new API for existing hardware functions, DXR is closer to a traditional DirectX release in that it’s a new API (or rather, new DX12 commands) that goes best with new hardware. And as there’s essentially zero consumer hardware on the market right now that offers hardware DXR acceleration, DXR really is starting from the beginning.

The big question, I suppose, is just how useful the pure software fallback mode will be – whether there’s anything that can meaningfully be done on even today’s high-end video cards without fixed-function hardware for ray tracing. I have no doubt that developers will include some DXR-powered features to show off their wares early on, but making the best use of DXR feels like it will require hardware support as a baseline feature. And as we’ve seen with past feature launches like DX11, having a new API become the baseline will likely take quite a bit of time.

The other interesting aspect of this is that Microsoft isn’t announcing DXR for the Xbox One at this time. Windows and the Xbox One are practically joined at the hip when it comes to DX12, which makes me wonder what role the consoles will have to play. After all, what finally killed DX9 in most respects was the release of the 8th-generation consoles, where DX11/12 functionality was a baseline. We may see something similar happen with DXR, in which case we’re talking about a transition that’s 2+ years out.

However, we should hopefully get some more answers later this week at GDC. Microsoft is presenting a couple of different DXR-related sessions, the most important of which, DirectX: Evolving Microsoft's Graphics Platform, is being presented by Microsoft, DICE, and SEED. So stay tuned for more news from GDC.

Update:

Some of the game developers presenting about DXR this week have already posted trailers of the technology in action.

Source: Microsoft

Comments

  • Sivar - Monday, March 19, 2018 - link

    Raytracing is not fundamentally non-realtime. Hardware today is fast enough to do some of the raytrace work that was done with batch rendering in the 90's. There were even 4k demos (4K file size, not resolution) that did real-time raytracing on Pentium-1 level computers. The scenes weren't as complicated as, say, a battle in Warframe, but one I remember from the 90's had about a dozen liquid blobs floating around and merging with each other, as liquid in zero gravity tends to do. Granted, it was hand-tuned assembly language done by a graphics/coding genius, but it shows that realtime RT is possible.
  • HStewart - Monday, March 19, 2018 - link

    Yes, back then graphics – especially in games like Doom – were programmed directly to the hardware. I was going through my closets and found this old book on the technology.

    https://www.amazon.com/Zen-Graphics-Programming-Ul...
  • bji - Monday, March 19, 2018 - link

    Yes, that's the book commonly referred to as "Abrash". Seminal work.
  • Yojimbo - Tuesday, March 20, 2018 - link

    In the past, it was preferable for games to apply greater computation ability towards more complicated raster techniques rather than to raytracing. Now the industry seems to have decided that it makes sense to start to apply future improvements in computation ability to raytracing instead. This could mean that innovation in raster techniques is slowing down, or that demands from VR are causing a shift in thinking, or both.
  • AndrewJacksonZA - Tuesday, March 20, 2018 - link

    4K demos, huh?

    "Following people who write stuff for old machines and folk who write stuff for new machines means occasional confusion over who is doing what with 4k." - @RetroRemakes
    https://twitter.com/retroremakes/status/9685809015...
  • Kevin G - Monday, March 19, 2018 - link

    I wonder if Imagination's Caustic RT hardware is supported. They have had ray tracing accelerators for a while now.
  • Alexvrb - Monday, March 19, 2018 - link

    They were ahead of their time... again. Pity. I wish they had fought it out some more in discrete PC graphics. I owned a Kyro I and a Kyro II, and if they had released a Kyro III I might have ended up with one of those too.
  • StevoLincolnite - Monday, March 19, 2018 - link

    They were efficient as well. Shame they lacked support for things like T&L natively in hardware though.

    The Matrox Parhelia I had a lot of hope for at one point as well... And I thought S3 Chrome was going to finally bring a third competitor into the limelight.

    I guess competing against AMD and nVidia in the graphics space is a tall order, they have been refining their hardware and software stacks for decades now and have a cadence nailed down.
  • Dragonstongue - Monday, March 19, 2018 - link

    I wonder if MSFT is putting some clever BS "behind the scenes" to make sure that Intel and Nvidia "perform better" instead of being truly hardware/software agnostic (that is, basically the better the cpu or gpu can handle it the better it will, rather than putting crappy backhanded ways of making sure someone will always do it "better" while the "others" have to play by other rules, like, tessellation, where Nv pulled a hissy and made MSFT give them the "advantage" at the cost of AMD suffering a performance hit, even after AMD/Radeon did all the leg work and $$$$$$$ in supporting it for many years PRIOR)

    anyways, will be interesting to see, but, how many will actually use it, especially at its "best" I doubt many will, devs are "simple" folks, or at least those whose names are attached to the devs who want the product out the door asap even if incomplete (like EA and all their studios who want to make the best they can but are short changed on time or funding to make it awesome as it should be)
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Monday, March 19, 2018 - link


    No need to wonder....

    The past will tell you the future!

    It will be a locked down, DRM'd, Proprietary existence where only a rigged game determines who gets to win every single time

    The rest of you will get to lose forever and ever, Amen

    You may not be able to handle the truth......but you can at least TRY!
