Assassin’s Creed Shadows marks a significant technological evolution for Ubisoft’s flagship series, with a revitalised Anvil Engine that taps into some of the latest rendering technologies, alongside other innovative systems. We’ve already seen how transformative the new game’s real-time ray-traced global illumination (RTGI) system is, replacing an older, pre-computed or ‘baked’ GI solution that first debuted over ten years ago in DF favourite, Assassin’s Creed Unity. However, the new engine goes further – much further.
Before Shadows was delayed last year, we were offered the opportunity to spend time with the game’s rendering engineers to discuss how Anvil has evolved and finally, earlier this week, we had the chance to talk directly to the team. As you’ll see in the embedded video on this page and the tech interview below, we get to grips with the key enhancements in the new game. Ray tracing takes centre stage, with RTGI delivering a per-pixel lighting system that dynamically adjusts to the game’s expansive, seasonally shifting open world, delivering realistic effects like bounce light from moving sources and light filtering through materials such as Japanese shoji dividers. Additionally, ray-traced reflections enhance water and glossy surfaces with a subtle sheen.
Although it’s not mentioned in the interview, the Atmos system is covered in the video – and it’s great stuff, simulating weather and wind based on environmental factors like humidity and temperature. This dynamism affects the world procedurally: trees sway, hair moves, and particles like leaves or snow react to wind. Interactivity is further emphasised with a physics-driven destruction system, enabling players to break bamboo, cut through thin walls, or slice cloth, echoing the environmental interactivity of games from the mid-2000s, such as the epoch-making Crysis.
Visually, the micropolygon geometry system – similar to Unreal Engine’s Nanite – eliminates level-of-detail (LOD) popping for rigid objects like buildings and rocks. While not yet applied to vegetation or characters, future expansions are planned to enhance fidelity further. That’s just a broad overview of what we learned about the new game – be sure to check out the video below to see how all of this technology plays out. Meanwhile, many thanks to Engine Technical Architects Nicolas Lopez and Pierre Fortin, along with Project Lead Programmer Rendering Sébastien Daigneault, for spending (a great deal of) time with us in discussing the game and putting this coverage together. As always, the remarks below are lightly edited for clarity and brevity.
We’ve seen how ray-traced global illumination is a game-changer in AC Shadows – how does the RTGI function at a technical level?
Our hybrid RTGI system combines two steps: per-pixel ray tracing (screen space rays and world space rays) and DDGI-like probe cascades. We first trace per-pixel rays in screen space, trying to solve RT intersections without having to use costly hardware rays. If no hit is found in screen space, we continue and ray trace in world space. These are DXR rays, and they traverse the acceleration structure (BVH) until they hit or miss. All the hits are then relit in a deferred manner to produce the first bounce. The result is then summed with the output of our ray-traced probes, which act as an irradiance cache to bring subsequent bounces, and the result is finally denoised.
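The control flow of that hybrid per-pixel step can be sketched as a toy function. Everything here – the function names, the dictionary-based "buffers" and the hit-record fields – is a hypothetical stand-in for illustration, not Anvil’s actual API:

```python
def trace_screen_space(ray, depth_buffer):
    """Cheap march against on-screen depth; returns a hit record or None."""
    return depth_buffer.get(ray)  # toy stand-in for the screen-space march

def trace_world_space(ray, bvh):
    """Costly DXR-style ray that traverses the BVH until a hit or miss."""
    return bvh.get(ray)  # toy stand-in for hardware ray traversal

def shade_gi(ray, depth_buffer, bvh, probe_irradiance):
    # 1) Try to resolve the intersection in screen space first...
    hit = trace_screen_space(ray, depth_buffer)
    if hit is None:
        # 2) ...and only pay for a world-space hardware ray on a miss.
        hit = trace_world_space(ray, bvh)
    # 3) Relight the hit in a deferred manner -> first bounce
    #    (toy model: the hit record carries albedo and incident light).
    first_bounce = hit["albedo"] * hit["light"] if hit else 0.0
    # 4) Add subsequent bounces from the probe irradiance cache;
    #    the real pipeline denoises the summed result afterwards.
    return first_bounce + probe_irradiance
```

The key saving is that step 2 – the expensive hardware ray – only fires when the screen-space march fails to find an intersection.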
How are dynamic lights beyond the sun and the moon handled?
With direct lighting, we use tile z-binning to handle local lights, a technique presented at SIGGRAPH by Activision. Instead of partitioning the frustum into 3D clusters, this decomposes it into 2D tiles and 1D z-bins, which provided a 10 percent speed-up in lighting cost for AC Shadows. In RT though, we insert dynamic lights into an omnidirectional clustered lighting structure, which works like traditional clustered lighting but the clustered volume is mapped onto a uniform grid around the camera instead of onto frustum voxels. With this, we observed a 10x speed-up versus more naive implementations.
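The 1D z-bin half of that decomposition can be illustrated with a toy build step. The bin count, depth range and logarithmic split here are illustrative choices, not the shipping values:

```python
import math

NUM_ZBINS = 16
Z_NEAR, Z_FAR = 0.1, 1000.0  # illustrative depth range

def zbin_index(view_z):
    """Map a view-space depth to a 1D z-bin (logarithmic split here)."""
    t = math.log(view_z / Z_NEAR) / math.log(Z_FAR / Z_NEAR)
    return min(NUM_ZBINS - 1, max(0, int(t * NUM_ZBINS)))

def build_zbins(lights):
    """Each z-bin stores the (min, max) index range of lights overlapping it.
    `lights` is a depth-sorted list of (z_min, z_max) extents, as the
    technique requires."""
    bins = [(None, None)] * NUM_ZBINS
    for i, (z_min, z_max) in enumerate(lights):
        for b in range(zbin_index(z_min), zbin_index(z_max) + 1):
            lo, _ = bins[b]
            bins[b] = (i if lo is None else lo, i)
    return bins
```

At shade time, a pixel intersects its 2D tile’s light list with its z-bin’s index range – storing a compact range per bin instead of a full light list per 3D cluster is what makes the structure cheaper.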
How are transparencies shaded? What about multiple bounces? And how is specularity handled?
GI is applied to transparent objects using RT probes only when running with RTGI. The first bounce comes from per-pixel RT and is what we call the secondary ray. Subsequent bounces come from the RT probes that act as an irradiance cache. For specular reflections, most platforms will use a combination of screen space reflections, gbuffer local cube maps (baked, but relightable at runtime) and a dynamic cube map at the player position as a fallback. On PC and PS5 Pro though, specular RT is available to replace those three systems and improve the overall fidelity. Specular RT is very similar to diffuse RT in terms of design, although we had to iron out some BVH quality issues that were otherwise barely noticeable.
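The specular fallback chain described above amounts to a simple priority selection; this toy version uses invented flag names purely to show the ordering:

```python
def pick_specular_source(rt_specular_available, ssr_hit_found, local_cubemap_baked):
    """Choose where specular reflections come from, in the priority order
    described above. All names and flags are illustrative."""
    if rt_specular_available:        # PC and PS5 Pro: RT replaces the chain
        return "rt_specular"
    if ssr_hit_found:                # screen-space reflections first...
        return "ssr"
    if local_cubemap_baked:          # ...then relightable local cube maps...
        return "local_cubemap"
    return "dynamic_cubemap"         # ...then the cube map at the player
```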
What does the BVH structure (the geometry against which rays are traced) look like, what’s included and how is it updated?
We include only rigid (non-deformed) opaque geometry in the BVH. That covers alpha-tested geometry, such as vegetation, as well as dynamic moving objects that are not skinned. For tree leaves, we use a trick to fake alpha test with scaled opaque triangles. It sounds counterintuitive, but it works very well in practice and is a very good approximation for specular RT as well. With this we save the massive cost of DXR “any hits” while still producing renders close to the reference.
What GI solution is used for the open world, versus the hideout?
We use a baked GI open world on console modes that don’t include RTGI. For PCs that don’t support hardware RT, we developed a custom compute-shader-based software RT solution that allows for a fully dynamic hideout without compromising on quality. The scoped/sandboxed nature of the hideout means we can afford the higher GPU cost of this system here. We developed this after the initial delay of AC Shadows, as we took a step back and analysed where we could offer a more polished experience for players. The same tech is also used on Steam Deck.
Why does the game ship with two GI solutions rather than being RT-only, given the likes of Indiana Jones and Avatar: Frontiers of Pandora?
The simple reason is that we believe forcing RT on players today, notably on PC but also in performance modes on consoles, compels us to make qualitative sacrifices that we were simply not willing to make. We’re proud of the baked GI system that we’ve continuously improved from AC Unity – with time of day in AC Syndicate, sparse GI in AC Origins, multi-state GI in AC Valhalla and seasons in AC Shadows.
It’s a system that has a cheap run-time cost but produces great results, at the expense of build complexity. This allows us to use our limited GPU budget in 60fps scenarios on console on other features, like procedurally simulated trees and vegetation. We know the performance mode is important to players and it will be popular.
On the other hand, RTGI and RT reflections have undeniable advantages, and it was natural for us to develop this for Shadows and future titles. We believe that the game offers a real dilemma in terms of choosing between quality or performance mode. It would have been easier for us to develop a single GI solution, but we want to maximise the reach of the game, even including Steam Deck and portable PC hardware as well.
What’s the tech basis for the hair strands system; how are shading, shadowing and anti-aliasing handled?
Geometrically, hair strands are camera-facing triangle strips. If, due to perspective, a strand’s width would fall under one pixel, we decrease the strand’s opacity while keeping the same width. A character haircut has a variable number of strands, which decreases with the distance and position of the camera. The lighting is computed at fewer samples along the strand than the number of vertices, and the sample count also decreases with camera distance. For scalability, only a certain percentage of the strands are lit this way, while the rest interpolate from the previous results.
In terms of physics, we simulate a maximum of 32 points per strand for one to five percent of the total strands – what we call guide strands – and interpolate the rest using barycentric weights from the three nearest guides.
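The interpolation step reduces to a barycentric blend of guide-strand points. A minimal sketch, assuming each strand is simply a list of (x, y, z) tuples:

```python
def interpolate_strand(g0, g1, g2, weights):
    """Build a rendered strand from its three nearest guide strands using
    barycentric weights (w0 + w1 + w2 == 1). Guides are point lists."""
    w0, w1, w2 = weights
    return [
        tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(p0, p1, p2))
        for p0, p1, p2 in zip(g0, g1, g2)
    ]
```

Only the one-to-five percent of guide strands pay for simulation; every other strand is this cheap blend, which is where the scalability comes from.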
For hair shading, we use the Marschner model, which has been present in the engine since Ghost Recon Wildlands, and a dual scattering model. The implementation is based mostly on the original 2008 paper from Zinke, Yuksel et al., with some approximations taken from the 2010 Sadeghi and Tamstorf Disney paper.
For shadowing, we have a custom atlas for higher-quality, per-strand shadows, which implements a filtered deep opacity map. Also, for scalability, we have a distance absorption approach and a simple depth map approach. For anti-aliasing, we have “phone-wire” anti-aliasing for strands, and MSAA and bilinear filtering for the hair strand impostors. As the strands are rendered with alpha blending, we have a custom order-independent transparency algorithm which uses four depth layers and sample weight blending inside each layer, inspired by the 2013 Weighted Blended OIT paper by Morgan McGuire and Louis Bavoil.
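The weight blending inside each layer follows the spirit of McGuire and Bavoil’s technique. Here is a single-layer toy resolve for one pixel, with scalar colours for brevity – a sketch of the idea, not Anvil’s four-layer implementation:

```python
def weighted_blend(fragments):
    """Order-independent resolve for one pixel.
    fragments: list of (colour, alpha, weight) -- submission order is
    irrelevant, which is the whole point of the technique."""
    accum_colour = 0.0   # sum of weight * alpha * colour
    accum_alpha = 0.0    # sum of weight * alpha
    revealage = 1.0      # product of (1 - alpha): how much background shows
    for colour, alpha, weight in fragments:
        accum_colour += weight * alpha * colour
        accum_alpha += weight * alpha
        revealage *= (1.0 - alpha)
    average = accum_colour / max(accum_alpha, 1e-6)
    # The final composite would be: average * (1 - revealage) + bg * revealage
    return average, revealage
```

Because both accumulators are commutative sums (and revealage a commutative product), fragments need no depth sorting – exactly what alpha-blended strands would otherwise require.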
What anti-aliasing and/or image reconstruction technique is used on console?
We use our own TAA implementation that we’ve continuously refined – though we’ll add support for PSSR on PS5 Pro in a future update.
Why did you not choose to use FSR 2 or 3 on Xbox Series X and PS5, and why did you initially choose not to offer PSSR on PS5 Pro?
Simply speaking, the cost of FSR compared to our proprietary TAA implementation was the deciding factor: the saved GPU time lets us run at a higher pre-upscale resolution than we could with FSR.
In terms of PSSR, we analysed PSSR and TAA at the time of shipping AC Shadows and found better results with our TAA solution. However, PSSR is a new upscaler and, after an in-depth collaboration with Sony over the past few months, many issues have been addressed, mostly around moving vegetation and water – problems that other titles have faced. We’re now confident that the image quality with PSSR will always be better than with TAA.
Why limit cutscenes to 30fps?
There are several reasons. As cutscenes are more contemplative, we believe increased resolution is more beneficial than increased frame-rate. This allows us to activate costly GPU features no matter the performance mode selected, with hair strands and depth of field effects adding a lot of quality to cinematics and dialogues. We’re also pushing shadow resolution quality and dozens of other parameters. A stable 30fps also provides more realistic cloth movement and ensures physics-based systems respond more deterministically and predictably for a more polished experience. Finally, frame generation could bring unwanted artefacts to cutscenes that would be less noticeable in gameplay, so we don’t use frame generation here.
With PS5 Pro specifically, why are RT reflections present in quality mode but not in balanced mode?
RT reflections were developed after the initial delay to AC Shadows. Given the shift in our release window, we wanted to make sure we remained competitive on high-end PCs and PS5 Pro. We had an extremely motivated team that had learned a lot developing the RTGI solution, and felt confident in achieving the objective of shipping RT reflections with a 30fps performance target on Pro. For launch, our priority was quality mode specifically to ensure it didn’t prompt graphical artefacts or other issues. We’ve continued to optimise RT reflections and the game in general, and we’re happy to say that a future update will enable RT reflections on PS5 Pro’s balanced mode.
One of our biggest issues with PC games is stutter, principally from on-the-fly shader compilation. What’s your strategy to avoid this?
This is a challenge that all DX12 games face on PC. The general idea is to “pre-warm” PSOs before entering gameplay. For that we’re using our so-called PSODB (PSO database); in a nutshell, it’s a list of PSO descriptions. The hard part is figuring out what to put in the PSODB, and what to skip. As you might imagine, we have a huge number of materials and techniques, hence a huge number of shaders and, more importantly, shader permutations – not all of which are used in the game. Putting every shader permutation in a PSODB would lead to a very long shader precompilation step on first boot, which we want to avoid.
Instead, we’re doing two things: for those shaders that were created by programmers we can figure out the exact permutations that are useful, so we add them to PSODB in a somewhat manual way. But it’s not possible to do so for those shaders that are crafted by the artists with a shader graph. For that, we use statistics that we gather each time someone plays a development version of the game, then a special process running overnight on our build farm gathers those stats, refines them and updates the PSODB. Thus, the game self-corrects during the QC process.
Upon first running the game, and after each driver update, we build the PSO descriptions from the PSODB, and we do this compilation as soon as the graphics hardware is initialised, during the game startup sequence. This system was introduced in AC Valhalla, and we believe we were the first to do so at the time in the industry.
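The workflow described across these answers – record PSO descriptions during development playthroughs, deduplicate them overnight, then compile everything at startup – can be sketched with a toy database. The dict schema and all names are invented for illustration:

```python
import hashlib
import json

def pso_key(desc):
    """Stable key for a PSO description (hypothetical dict schema)."""
    return hashlib.sha1(json.dumps(desc, sort_keys=True).encode()).hexdigest()

class PsoDb:
    """Toy PSODB: the deduplicated set of PSO descriptions worth pre-warming."""
    def __init__(self):
        self.descs = {}

    def record(self, desc):
        # Gathered each time someone plays a development build; the overnight
        # build-farm pass merges these stats, so duplicates collapse here.
        self.descs[pso_key(desc)] = desc

    def prewarm(self, compile_fn):
        # Run at startup (and again after driver updates) before gameplay
        # begins, so no PSO has to be compiled mid-frame.
        return [compile_fn(d) for d in self.descs.values()]
```

The deduplication via a content-hashed key is what keeps the first-boot compilation step bounded, even with many playthroughs feeding the database.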