"A Trip Down the LoL Graphics Pipeline"
They describe what goes into rendering one frame in LoL, but they explain lots of really basic stuff, so I think it should be pretty accessible even if you know almost nothing about graphics programming. :)
https://engineering.riotgames.com/news/trip-down-lol-graphics-pipeline
Comments
Or maybe that RGB data is used in some sort of other lighting pass and is not related to the shadow at all, but it's just convenient to precalculate it there?
But yeah, standard shadow buffers are just a single F16 or whatever to store that caster depth, so I dunno? :)
I'll give it a read when I have a chance :) Looks like some great info to learn from!
@IceCliff: Oh, that does make sense, thanks! Anything before your depth shouldn't sample your shadow map, and anything after should, right? That seems unnecessary though, because given the kind of game League is, I'm struggling to imagine a case where two parts of the environment might both sample the same part of the shadow map for double shadowing. (Unless they just do this anyway "just in case" or for future-proofing.)
In the vast majority of games, baked lighting is stored in a lightmap. The way it's encoded differs per platform, but typically you're using your RGB for light colour (and GI colour), and your A, if it's used (which League doesn't need for its art direction), for scaling your light intensity to achieve HDR-like values. You look these textures up with a second UV channel.
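To make that concrete, here's a rough C-style sketch of the decode step, assuming an RGBM-like encoding; the Color struct and the MAX_RANGE constant are illustrative assumptions of mine, not any particular engine's format:

```c
typedef struct { float r, g, b, a; } Color;

/* Apply a lightmap texel (fetched with the model's *second* UV channel)
 * to the surface albedo. */
Color apply_lightmap(Color lm, Color albedo)
{
    /* Assumed RGBM-style decode: A scales RGB up to an HDR-like range.
     * An engine that skips the A channel (as League can, per its art
     * direction) would just multiply by lm's RGB directly. */
    const float MAX_RANGE = 8.0f;   /* assumed encoding range */
    float scale = lm.a * MAX_RANGE;

    Color out = {
        albedo.r * lm.r * scale,
        albedo.g * lm.g * scale,
        albedo.b * lm.b * scale,
        albedo.a
    };
    return out;
}
```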
I could definitely be wrong here, and @Chippit probably has better insight than me, to be honest, but that's my New And Improved Gut Feel Theory Given Minimal Research And Still Having Not Read The Original Article(tm).
That alone doesn't create the need to store distance values in the RGB. Most games *don't* just render to the depth buffer and use that as a texture, regardless of platform (the reason pre-DX10 and various other APIs don't allow this is that the depth buffer is stored in a swizzled format for optimal read/write on the GPU, and it's costly to decode that back into a standard linear texture... some vendors allow it with extensions, but it isn't standard, and it results in slower depth buffer performance anyway).
Pretty much all games project their shadow maps as a separate pass from the light source where all objects are rendered with a custom lightweight shader that only calculates "depth" (or some other variation of it) and writes it out to a separate FP16 (or whatever format you want) render target. Then when the main camera is rendered, it does that same light-space-depth calculation per-pixel, and then compares it to the shadow buffer, and boom, you know if that pixel is in shadow or not.
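Just to make the two-pass idea concrete, here's a rough C sketch of the comparison step; the bias value, the plain-float shadow map, and the nearest-neighbour lookup are simplifications of mine, not anyone's actual implementation:

```c
#include <math.h>

/* Nearest-neighbour lookup into a w x h float shadow map, clamped to
 * the texture edge (FP16 in practice; plain float for simplicity). */
static float sample_shadow_map(const float *map, int w, int h,
                               float u, float v)
{
    int x = (int)(fminf(fmaxf(u, 0.0f), 1.0f) * (float)(w - 1));
    int y = (int)(fminf(fmaxf(v, 0.0f), 1.0f) * (float)(h - 1));
    return map[y * w + x];
}

/* Returns 0 if the pixel is in shadow, 1 if lit. u/v/pixel_depth come
 * from re-projecting the pixel into light space, i.e. the same
 * light-space depth calculation the shadow-casting pass performed. */
float shadow_factor(const float *map, int w, int h,
                    float u, float v, float pixel_depth)
{
    const float bias = 0.002f; /* assumed value; tuned to avoid acne */
    float caster_depth = sample_shadow_map(map, w, h, u, v);
    return (pixel_depth - bias > caster_depth) ? 0.0f : 1.0f;
}
```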
As @Elyaradine mentioned, this wouldn't come into play, because baked textures do not (that I have ever, ever seen) track depth. They are merely 2D textures that store color (standard or HDR) and are then mapped to the world via a 2nd set of UVs, and blended accordingly. There is no need for them to have depth info, as they are statically baked and do not change dynamically.
So this technique of storing not just depths but a mask (which can then be blurred) plus a distance, which is translated into a light-space depth when the shadow is sampled per-pixel, sounds like the most likely reason they went with a standard RGBA32 "shadow" buffer vs. a more "standard" FP16 depth-based shadow buffer technique :)
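Purely to illustrate the guess, here's what that could look like; the channel layout, the 8-bit packing across G/B/A, and the distance range are completely made up on my part, not Riot's actual scheme:

```c
typedef struct { float r, g, b, a; } Texel;  /* channels in [0,1] */

/* Speculative: R holds a pre-blurred shadow coverage mask, and a caster
 * distance is packed across G/B/A at 8 bits per channel, decoded back
 * to a light-space depth when the shadow is applied. Returns a lighting
 * factor in [0,1]. */
float shadow_from_mask_and_distance(Texel t, float pixel_light_depth)
{
    float mask = t.r;                /* assumed: pre-blurred mask */
    const float MAX_DIST = 100.0f;   /* assumed distance encoding range */

    float caster_depth =
        (t.g + t.b / 255.0f + t.a / 65025.0f) * MAX_DIST;

    /* Only darken what lies behind the caster; the pre-blurred mask
     * gives soft edges without multi-tap filtering at sample time. */
    return (pixel_light_depth > caster_depth) ? 1.0f - mask : 1.0f;
}
```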
"Standard" shadow mapping with a shadow buffer achieves blurring generally with multiple tapped PCF filter samples (the more, the blurrier), and can be suuuuuper costly on lower (ie. mobile) devices (which is why, say, Unity "soft" shadows are poop on mobile :)).