A [[Shadow]] [[Bitmap Image|Map]] is a [[Bitmap Image|Texture]] that encodes the coverage of shadows from a [[Electromagnetic Radiation|Light]] source.
The general idea of shadow mapping is to render the scene from the perspective of a [[Electromagnetic Radiation|Light]] into an image, so that when we render for the camera, any [[Fragment]] that is not visible from the [[Electromagnetic Radiation|Light]] must be in shadow (determined using [[Depth Buffer|Depth]] information).
>[!info] Pros & Cons
>*Pros*:
>- Works with any [[Geometry]]
>- Simple to implement
>- Fairly efficient for real time
>
>*Cons*:
>- Memory requirement, one shadow map is needed per [[Electromagnetic Radiation|Light]]
>- $z$-buffer accuracy can cause issues ([[Z Fighting]])
>- [[Perspective Aliasing]]
>- No soft shadows
## Procedure
The usage of shadow maps follows a two-pass procedure.
### Generation
To generate shadows for a given light $L$, we create a shadow map by rendering the scene from that light's location. The depth information from this render is what constitutes the shadow map.
>[!note]
>When we render for the light, we only render objects that can act as occluders, and we do not use any fancy fragment shader, as all we care about is the depth information.
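The first pass can be sketched as a depth-only render: occluder samples are projected into the light's image, and only the nearest depth per [[Texel]] is kept. A minimal sketch in Python (the names and the point-splatting simplification are illustrative assumptions, not from the source):

```python
# Sketch of the first pass: write only depth, seen from the light,
# into a small "shadow map" grid. Occluder samples are assumed to be
# already transformed into light space, with (u, v) in [0, 1) texture
# coordinates and z the light-space depth.

SIZE = 4  # shadow-map resolution (illustrative)

def render_shadow_map(occluder_points, size=SIZE):
    """Keep the *nearest* depth per texel, as a depth buffer would."""
    depth = [[float("inf")] * size for _ in range(size)]
    for u, v, z in occluder_points:
        x, y = int(u * size), int(v * size)
        if 0 <= x < size and 0 <= y < size and z < depth[y][x]:
            depth[y][x] = z  # depth test: nearer occluder wins
    return depth

# Two occluders project to the same texel; only the nearer depth is kept.
shadow_map = render_shadow_map([(0.1, 0.1, 0.5), (0.1, 0.1, 0.3)])
```

A real implementation rasterises triangles into a hardware depth buffer; the keep-the-minimum rule is the essential part.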
To construct a light projection matrix:
- Surround the light source with a cube in [[View Space|Camera Space]] with range $[-1,1]$
- For each [[Face]], construct light-view and light-projection matrices $V_{L}$ and $P_{L}$ as [[View Frustum|projection planes]].
The full chain of transformations is:
- [[Object Space]]
- [[World Space]], via $M_{O}$
- Light [[View Space]], via $V_{L}$
- Light [[Clip Space|Projection Space]], via $P_{L}$
- Shadow Space, via $B$
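The chain above can be sketched as a matrix composition. This sketch assumes a column-vector convention (matrices applied right-to-left), and assumes $B$ is the usual bias matrix remapping NDC $[-1,1]$ to texture range $[0,1]$; the placeholder light matrices are illustrative:

```python
# Sketch of composing the shadow transform. Names V_L, P_L, B follow
# the text; B as a [-1,1] -> [0,1] remap is an assumption about what
# "Shadow Space B" denotes here.

def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    """Apply a 4x4 matrix to a column 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Bias matrix: scale by 1/2, then translate by 1/2 in x, y and z.
B = [[0.5, 0.0, 0.0, 0.5],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.0, 1.0]]

V_L = I  # placeholder light-view matrix (identity for illustration)
P_L = I  # placeholder light-projection matrix

shadow = matmul(B, matmul(P_L, V_L))   # full light-side transform
corner = apply(shadow, [-1.0, -1.0, -1.0, 1.0])  # NDC corner -> [0,1]^3
```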
### Look-Up
Render the scene from the camera's position.
For each [[Pixel]], look up the corresponding location in the shadow map and compare depths ($z_{p}$ is the pixel's depth as seen from the light, $z_{s}$ is the depth stored in the shadow map).
- If $z_{p} < z_{s}$ then the pixel is lit, and we can use the [[Lighting Model]]
- If $z_{p} \geq z_{s}$ then the pixel is not lit, so we only apply [[Ambient Light]].
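The comparison above can be sketched directly (the function name and the string results are illustrative assumptions):

```python
# Sketch of the per-pixel shadow test: lit pixels get the full lighting
# model, occluded pixels only the ambient term.

def shade(z_p, z_s):
    """z_p: the pixel's depth as seen from the light;
    z_s: the depth stored in the shadow map at the same texel."""
    if z_p < z_s:
        return "lit"      # visible from the light: full lighting model
    return "ambient"      # occluded: ambient term only

result_near = shade(0.4, 0.6)  # nearer than the stored occluder depth
result_far = shade(0.8, 0.6)   # behind the stored occluder depth
```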
We convert the fragment's position in [[Normalised Device Coordinates|NDC Space]] into [[Shadow Space]] by:
- Reversing the [[Object to Device Transform]] $M_{c}^{-1}M_{p}^{-1}\Delta_{v}^{-1}$
- Applying the shadow transformation $B P_{L} V_{L}$ (written right-to-left, consistent with the inverse chain above)
In practice, the shadow space [[Texture Coordinates]] are computed and saved per vertex during the first pass. These coordinates are then [[Hyperbolic Interpolation|interpolated]] across triangles during the [[Rasterization]] stage in the second pass.
## Improvements
### Z-Fighting / Depth Inaccuracy
Because of the limited precision of [[Floating Point Number|floats]], the depth stored at some location in a shadow map may differ from the actual depth. Because of this, there can be cases where pixels are misidentified as being unlit, which causes $z$-fighting.
*Possible solutions*:
- Increase depth-map accuracy (e.g. [[Floating Point Number]] to [[Double Floating Point Number]])
- Restrict the [[Near Clipping Plane|Near]] and [[Far Clipping Plane]] for the light-view frustum
- Render *only* occluder back faces in light space
- When rendering the map, move the geometry farther from the light by a small offset (called a [[Depth Bias]])
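The effect of a depth bias can be sketched numerically: with limited precision, the stored depth of a surface can be slightly nearer than the depth recomputed for the same surface in the second pass, so the surface wrongly shadows itself; a small bias in the comparison absorbs the error. All numbers below are illustrative assumptions:

```python
# Sketch of why a small depth bias avoids self-shadowing artefacts.

def is_lit(z_pixel, z_stored, bias=0.0):
    """Shadow test with an optional depth bias added to the stored depth."""
    return z_pixel < z_stored + bias

z_true = 0.500000     # surface depth as recomputed in the second pass
z_stored = 0.499999   # slightly-off depth written to the shadow map

# Without bias the surface fails its own depth test (self-shadowing)...
acne = not is_lit(z_true, z_stored)
# ...a small bias restores the expected result.
fixed = is_lit(z_true, z_stored, bias=0.0005)
```

The bias must stay small: too large an offset visibly detaches shadows from their casters.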
### Perspective Inaccuracy
Receivers at different distances from the [[Human Eye|Eye]] may suffer from [[Perspective Aliasing]], where the number of pixels mapped to a shadow [[Texel]] differs significantly based on the distance to the eye [[Point|point]].
A possible solution is [[Cascaded Shadow Maps]], where we use different levels of detail (LODs) for objects: objects nearer to the eye require a higher shadow-map resolution than more distant objects. This is done by splitting the [[View Frustum]] into multiple frusta with different resolutions.
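One common way to choose the split distances (an assumption here, borrowed from the parallel-split scheme rather than stated in the source) is a logarithmic partition of the $[n, f]$ depth range, so that nearer cascades cover smaller depth ranges and therefore get higher effective shadow-map resolution:

```python
# Sketch of splitting the view frustum depth range [near, far] into
# cascades using the logarithmic split z_i = n * (f/n)**(i/N).

def cascade_splits(near, far, count):
    """Return count+1 split depths from near to far (inclusive)."""
    return [near * (far / near) ** (i / count) for i in range(count + 1)]

splits = cascade_splits(near=1.0, far=100.0, count=4)
# splits[0] is the near plane, splits[-1] the far plane; each cascade
# [splits[i], splits[i+1]] is wider than the one before it.
```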