"A Trip Down the LoL Graphics Pipeline"

They describe what goes into rendering one frame in LoL, but they explain lots of really basic stuff, so I think it should be pretty accessible even if you know almost nothing about graphics programming. :)

https://engineering.riotgames.com/news/trip-down-lol-graphics-pipeline

Comments

  • edited
    @AngryMoose: Do you have any idea how the RGB in their shadow RT is used? I mean, he says it stores the distance from the light to the shadow-casting geometry. I just don't understand why they'd need that. I feel like if you know the location of your light and the size of its projection, you have all the information you need for looking up the shadow texture, and all you need is one channel?

    RGB: [image]

    A: [image]
  • Haven't read the link, but my immediate idea would be that it's used to attenuate shadow falloff / opacity based on distance. Or perhaps as a weird trick to add a simulated penumbra to the standard umbra, where surfaces closer to the light get sharper shadow edges and ones further away get blurrier edges, etc.

    Or maybe that RGB data is used in some sort of other lighting pass and isn't related to the shadow at all, and it's just convenient to pre-calculate it there?

    But yeah, standard shadow buffers are just a single F16 or whatever to store that caster depth, so irunno? :)

    I'll give it a read when I have a chance :) Looks like some great info to learn from!
    Thanked by 1 Elyaradine
  • @Elyaradine after researching a bit, it seems you can't render out the depth buffer to a texture with DX9 for whatever reason (I'm not super comfortable with DirectX so I don't really know why this limitation would be in place). If you're unfamiliar with the graphics pipeline (specifically with why the depth buffer is needed for shadow mapping) don't hesitate to ask :)
    Thanked by 1 Elyaradine
  • @AngryMoose: I thought it might be those at first, but when you look at the LoL shadows, they seem quite low tech; they only seem to get lit by 1 directional light (sun), so that attenuation and variable blurring don't seem to be happening.

    @IceCliff: Oh, that does make sense, thanks! Anything before your depth shouldn't sample your shadow map, and anything after should, right? That seems unnecessary though, because given the kind of game League is, I'm struggling to imagine a case where two parts of the environment might both sample the same part of the shadow map for double shadowing. (Unless they just do this anyway "just in case" or for future-proofing.)
  • @Elyaradine, well you do get trees and things like that in League still. So if I then render my character and find that the shadow map stores a higher value at those coordinates than the character's y, it means the character should be darkened, because there's something above it. But when I render my tree it won't be dark, because the stored depth value is the tree's own y.
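
    In case a concrete toy version helps: here's a rough sketch of that compare in Python, assuming the sun is straight overhead so the "shadow map" just stores the highest caster height per texel. All names and numbers here are made up for illustration, not anything from the article.

    ```python
    # Rough sketch of the compare described above, assuming a top-down
    # directional light, so "depth from the light" is effectively world height (y).
    # Everything here is hypothetical, purely to illustrate the test.

    def bake_shadow_map(casters, resolution):
        """Store, per texel, the highest caster y (i.e. the point nearest the light)."""
        shadow_map = [[float('-inf')] * resolution for _ in range(resolution)]
        for (tx, tz, y) in casters:                   # texel coords + caster height
            shadow_map[tz][tx] = max(shadow_map[tz][tx], y)
        return shadow_map

    def is_in_shadow(shadow_map, tx, tz, receiver_y, bias=0.01):
        """A pixel is shadowed if something above it was written into the map."""
        return shadow_map[tz][tx] > receiver_y + bias

    # Tree top at y=5 over texel (3, 3): the ground (y=0) under it is shadowed,
    # but the tree's own sample (y=5) is not, since the stored value is its own height.
    casters = [(3, 3, 5.0)]
    sm = bake_shadow_map(casters, resolution=8)
    print(is_in_shadow(sm, 3, 3, receiver_y=0.0))     # True  -> ground under the tree
    print(is_in_shadow(sm, 3, 3, receiver_y=5.0))     # False -> the tree itself
    ```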
  • What interested me about this post was that they're using OpenGL 1.5 for Mac, which was already legacy when the game came out. I'm having trouble understanding why they would do this. Possibly Apple had bad support for OpenGL.
  • edited
    But trees' shadows are baked. Only characters and towers cast shadows, and not on each other (as far as I've noticed, though I haven't checked for that specifically).
  • But even so, those baked textures will also surely have depth values? By using the depth value from the dynamic shadow map you can still perform the same calculations :)
  • No, they don't necessarily need to have depth values.

    In the vast majority of games, baked lighting is stored in a lightmap. The way it's encoded differs per platform, but typically you're using your RGB for light colour (and GI colour), and your A, if it's used (which League doesn't need for its art direction), for scaling your light intensity to achieve HDR-like values. You look these textures up with a second channel of UVs.
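
    If it helps to see that concretely, the lookup tends to boil down to something like the sketch below. This is hypothetical and simplified (nearest-neighbour sampling, a made-up intensity scale in A); it isn't anything League- or engine-specific.

    ```python
    # Hypothetical sketch of a baked lightmap lookup: RGB is the light/GI colour,
    # A (when used) scales intensity for HDR-ish values, and the texture is
    # addressed with a second UV set, independent of the material's own UVs.

    def sample_lightmap(lightmap, uv2, max_intensity=8.0):
        """lightmap: 2D grid of (r, g, b, a) texels in [0, 1]; uv2 in [0, 1]^2."""
        h, w = len(lightmap), len(lightmap[0])
        x = min(int(uv2[0] * w), w - 1)               # nearest-neighbour for brevity
        y = min(int(uv2[1] * h), h - 1)
        r, g, b, a = lightmap[y][x]
        scale = a * max_intensity                     # decode the HDR-ish multiplier
        return (r * scale, g * scale, b * scale)

    def apply_baked_lighting(albedo, lightmap, uv2):
        light = sample_lightmap(lightmap, uv2)
        return tuple(c * l for c, l in zip(albedo, light))

    # 1x1 "lightmap": warm light at 2x intensity (a = 2 / 8 = 0.25).
    lm = [[(1.0, 0.9, 0.7, 0.25)]]
    lit = apply_baked_lighting((0.5, 0.5, 0.5), lm, uv2=(0.5, 0.5))
    print(tuple(round(c, 2) for c in lit))            # (1.0, 0.9, 0.7)
    ```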
    Thanked by 1 AngryMoose
  • Well, I'm just taking a swing at why they would possibly use them :) Can't really think of another reason besides comparing depths, so possibly they have bird-like characters or something. I've only been doing engine development for a little over half a year though, so there's definitely something I could be missing.
  • @Elyaradine: Still haven't read the article, but will later today! Thinking more and doing a bit of Googling though, it seems likely that they do this for pre-DX10 compatibility. So they could be using the A as a potential variable shadow color (to allow lighter shadows per caster if desired, even if it's not apparent in the above image). Then they store the DISTANCE in RGB instead of the DEPTH (which is going to give you precision issues if you don't use FP RTs) to get around the following issue with floating-point RTs in DX9:
    SteveStreeting.com said:
    You see, the problem is that DirectX 9 can only clear a viewport to a 32-bit number. When clearing a floating-point surface, it has to map this simple integer range onto a floating point range, and it effectively does it by dividing each channel clear colour by 255. This means it can’t clear floating point textures to any number higher than 1.0! When clearing the frame buffer for a shadow texture that stores depths, you need to initialise it to the highest depth value possible so that rendered objects will update it to be ‘closer’. If you’re storing raw unscaled depth, that value needs to be the light’s attenuation range or some other far scene distance. You simply can’t do that in Dx9, so what you find is that your texture contains all 1.0’s in initialised areas. You might think this isn’t so bad, since provided at least one thing is rendered at any particular point, the floating point buffer will be right. That’s true, except that if you have any single-sided geometry (terrain or a ground plane), and you use the default ‘render back faces to shadow texture’ option (highly recommended to make biasing much simpler), you can have significant problems.
    This means that they would then use that linear distance (which is easier to clear to) instead of a pre-calculated linear or logarithmic depth value when they're doing their main camera depth comparison check. There are other ways to handle this (i.e. scaled or clipped depth calculations), but those could produce precision issues as I've mentioned above. Storing distance and then doing the depth calculation on the fly in the shader provides better shadow precision/quality :)
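
    If that theory holds, the encode/decode could look roughly like the following. This is purely my guess at the scheme: the normalisation by the light's far range, the replicated channels, and every name here are mine, not anything from the article.

    ```python
    # Guess at how distance-in-RGB could dodge the DX9 clear limit: clear the RT
    # to (1, 1, 1) = "max distance", write each caster's light-space distance
    # normalised by the light's far range, then rebuild the comparison value in
    # the main pass. The encoding and names are hypothetical.

    LIGHT_FAR_RANGE = 100.0   # furthest distance the shadow needs to cover

    def encode_caster(distance_to_light):
        """Shadow pass: write normalised distance; a cleared texel stays at 1.0."""
        d = min(distance_to_light / LIGHT_FAR_RANGE, 1.0)
        return (d, d, d)      # fits a plain RGBA8 target, no FP clear required

    def is_shadowed(shadow_texel_rgb, receiver_distance_to_light, bias=0.5):
        """Main pass: redo the light-space distance and compare on the fly."""
        stored = shadow_texel_rgb[0] * LIGHT_FAR_RANGE
        return receiver_distance_to_light > stored + bias

    cleared = (1.0, 1.0, 1.0)                  # nothing rendered here -> never shadowed
    print(is_shadowed(cleared, 40.0))          # False
    caster = encode_caster(30.0)               # something 30 units from the light
    print(is_shadowed(caster, 40.0))           # True: receiver is behind the caster
    print(is_shadowed(caster, 29.0))           # False: receiver is in front of it
    ```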

    I could definitely be wrong here, and @Chippit probably has better insight than me to be honest, but that's my New And Improved Gut Feel Theory Given Minimal Research And Still Having Not Read The Original Article(tm).
    IceCliff said:
    @Elyaradine after researching a bit, it seems you can't render out the depth buffer to a texture with DX9 for whatever reason (I'm not super comfortable with DirectX so I don't really know why this limitation would be in place). If you're unfamiliar with the graphics pipeline (specifically with why the depth buffer is needed for shadow mapping) don't hesitate to ask :)
    That alone doesn't create the need to store distance values in the RGB, though. Most games *don't* just render to the depth buffer and use that as a texture, regardless of platform (the reason pre-DX10 and other APIs don't allow this is that the depth buffer is stored in a swizzled format for optimal read/write on the GPU, and it's costly to decode that back into a standard linear texture... some vendors allow this with extensions, but it isn't standard, and it results in slower depth buffer performance anyway).

    Pretty much all games render their shadow maps in a separate pass from the light source, where all objects are drawn with a custom lightweight shader that only calculates "depth" (or some other variation of it) and writes it out to a separate FP16 (or whatever format you want) render target. Then when the main camera is rendered, it does that same light-space depth calculation per pixel, compares it to the shadow buffer, and boom, you know if that pixel is in shadow or not.
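
    Condensed into a toy sketch, that flow is basically the below (an orthographic, straight-down light to keep the maths trivial; all names and numbers are illustrative, not how League specifically does it):

    ```python
    # Toy version of the two passes described above. "Light-space depth" here is
    # just the distance along the light direction (directional light, orthographic
    # projection onto a texel grid).

    LIGHT_DIR = (0.0, -1.0, 0.0)        # straight down, normalised

    def light_space_depth(p):
        """Depth along the light direction: larger = further from the light."""
        return p[0] * LIGHT_DIR[0] + p[1] * LIGHT_DIR[1] + p[2] * LIGHT_DIR[2]

    def to_texel(p, size):
        """Project onto the light's 'film plane' (x/z here) and snap to a texel."""
        return (int(p[0]) % size, int(p[2]) % size)

    def shadow_pass(caster_points, size=16):
        """Pass 1: depth-only render of the casters into the shadow buffer."""
        buf = [[float('inf')] * size for _ in range(size)]
        for p in caster_points:
            tx, tz = to_texel(p, size)
            buf[tz][tx] = min(buf[tz][tx], light_space_depth(p))  # keep the nearest
        return buf

    def main_pass_shadow_test(buf, pixel_world_pos, size=16, bias=0.05):
        """Pass 2: recompute the same light-space depth per pixel and compare."""
        tx, tz = to_texel(pixel_world_pos, size)
        return light_space_depth(pixel_world_pos) > buf[tz][tx] + bias

    casters = [(4.0, 6.0, 4.0)]                          # something 6 units up at (4, 4)
    buf = shadow_pass(casters)
    print(main_pass_shadow_test(buf, (4.0, 0.0, 4.0)))   # True: ground below is shadowed
    print(main_pass_shadow_test(buf, (4.0, 6.0, 4.0)))   # False: the caster itself
    ```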
    IceCliff said:
    But even so, those baked textures will also surely have depth values? By using the depth value from the dynamic shadow map you can still perform the same calculations :)
    As @Elyaradine mentioned, this wouldn't come into play because baked textures do not (that I have ever, ever seen) track depth. They are merely 2D textures that store color (standard or HDR) and are then mapped to the world via a 2nd set of UVs, and blended accordingly. There is no need for them to have depth info, as they are statically baked and do not change dynamically.

    Thanked by 1 IceCliff
  • @AngryMoose Oh, would you suggest storing distance out to a texture instead for performance reasons? I'm currently writing my depth values out to a texture and haven't felt any performance hit, but of course lower-end computers could possibly be feeling the ill effects of this?
  • edited
    No, calculating and storing depth in the shadow pass(es) will be more performant, as you won't need to re-calculate the depth for the shadow comparison. Distance (XYZ in RGB) could be a potential way to handle the issue I mentioned above with DX9 floating-point textures and non-normalized depth values.
  • @Elyaradine: Quick follow-up after finally reading the article! They also mention that they do a BLUR pass on the shadow (i.e. the alpha channel) for soft shadows, which is also something you can't do if you're just storing depth... you can't *blur depth* :) So storing a shadow mask (like we've done before with 3E and BS :)) allows you to run post passes on it, like your 5x5 Gaussian kernel, etc., to make softer shadows.

    So this technique, storing not just depths but a mask (which can then be blurred) plus a distance that gets translated to a light-space depth when the shadow is sampled and applied, sounds like the most likely reason they do what they do with a standard RGBA32 "shadow" buffer, vs. a more "standard" FP16 depth-based shadow buffer technique :)

    "Standard" shadow mapping with a shadow buffer achieves blurring generally with multiple tapped PCF filter samples (the more, the blurrier), and can be suuuuuper costly on lower (ie. mobile) devices (which is why, say, Unity "soft" shadows are poop on mobile :)).
    Thanked by 2 mattbenic, IceCliff
  • @AngryMoose: Great, thanks! In terms of my original question though -- I feel like you could do this all without actually even storing the depth in the first place, given the kind of game League is. (i.e. It's a fairly flat play area that you view from almost top-down, and there's only one static, directional light. There are parts of the terrain that cast shadows on other parts of the terrain, but that's all baked.) Is there something I'm missing?
  • edited
    If the shadows are only on "the ground", you could probably get away with it. If there's any sort of self-shadowing, you need depth though. So if the characters cast onto themselves or onto other casters (basically, if any caster can also be a receiver), you need that depth info AFAIK.
    Thanked by 1 Elyaradine