>[!note]
>This is a planning document for #digipen/gam300
## References
- [jam2go Datamosh Particles](https://www.youtube.com/shorts/ug1CjZtvagg)
## Implementation
The implementation is going to be through an [[Unreal Engine]] [[../Unreal/SceneViewExtension|SceneViewExtension]] as a postprocessing effect that will happen *before* [[Tonemapping]], somewhere around where [[Motion Blur]] would be applied.
The effect will make use of Unreal's [[Motion Vectors]] texture, where each [[../Computer Science/Fragment|Fragment]] stores the screen-space velocity of that pixel.
Using this, there will be some canvas $C$ that persists across frames, alongside the input to our postprocessing each frame, $S$. When datamoshing is first applied, we copy the current contents $S \to C$, and in subsequent frames we offset each pixel in $C$ by the corresponding motion vector [[Field]] $V$ in the [[Fragment Shader|Fragment]] (or compute [[../Computer Science/Shader|Shader]]).
$\huge C_{x,y}^{n+1} = C^n_{ (x,y) + V_{x,y} } $
And when active, the output scene texture would just match the canvas.
$\huge S^n_{x,y} = C^n_{x,y} $
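The per-frame canvas update above can be sketched on the CPU (a minimal Python sketch, with nested lists standing in for the canvas $C$ and the velocity texture $V$; the real effect would run in a fragment or compute shader, and the edge clamping here is just one illustrative choice):

```python
def warp_canvas(canvas, velocity):
    """One datamosh step: C[x,y] <- C[(x,y) + V[x,y]], clamped at the edges."""
    h, w = len(canvas), len(canvas[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vx, vy = velocity[y][x]
            # Sample the previous canvas at the offset position, clamping
            # to the texture bounds like a clamped sampler would.
            sx = min(max(x + vx, 0), w - 1)
            sy = min(max(y + vy, 0), h - 1)
            out[y][x] = canvas[sy][sx]
    return out

# A uniform 1-pixel rightward velocity samples each pixel's right
# neighbour, smearing the canvas contents leftward over the frame.
canvas = [[0, 1, 2],
          [3, 4, 5],
          [6, 7, 8]]
velocity = [[(1, 0)] * 3 for _ in range(3)]
moved = warp_canvas(canvas, velocity)
```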
For stylization, however, I was thinking of blurring the velocity texture / sampling it with a [[Convolution|Kernel]] that could give more interesting results, with the result being the [[Vector Field]] $U$.
>[!example] Gaussian Blur (may give a smoother field)
>$
>\begin{align}
>U_{x,y}&=\begin{bmatrix}
>G(-1, -1)& G(0, -1)& G(1,-1)\\
>G(-1, 0)& G(0, 0)& G(1,0)\\
>G(-1, 1)& G(0, 1)& G(1,1)
>\end{bmatrix} * V_{x,y}\\
>G(i, j) &= \frac{1}{2\pi \sigma^2} e^{ -\frac{i^2+j^2}{2\sigma^2} }
>\end{align}
>$
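The kernel above can be sketched in pure Python (names are illustrative; the sampled weights are renormalised to sum to 1 so the blur doesn't scale the field, which is a choice the shader version would also need to make):

```python
import math

def gaussian_kernel3(sigma=1.0):
    """3x3 kernel of G(i, j) samples, renormalised to sum to 1."""
    g = lambda i, j: math.exp(-(i * i + j * j) / (2 * sigma * sigma))
    k = [[g(i, j) for i in (-1, 0, 1)] for j in (-1, 0, 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def blur_field(field, kernel):
    """Convolve a 2D field of (vx, vy) vectors with a 3x3 kernel (clamped edges)."""
    h, w = len(field), len(field[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            ux = uy = 0.0
            for j in (-1, 0, 1):
                for i in (-1, 0, 1):
                    sx = min(max(x + i, 0), w - 1)
                    sy = min(max(y + j, 0), h - 1)
                    wgt = kernel[j + 1][i + 1]
                    ux += wgt * field[sy][sx][0]
                    uy += wgt * field[sy][sx][1]
            row.append((ux, uy))
        out.append(row)
    return out

kernel = gaussian_kernel3(sigma=1.0)
# Blurring a constant velocity field leaves it unchanged, since the
# renormalised weights sum to 1.
field = [[(2.0, 0.0)] * 3 for _ in range(3)]
smoothed = blur_field(field, kernel)
```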
#### Toggle
When datamoshing is toggled on, we copy the current scene texture into the canvas ($S \to C$); while it stays active, the canvas is warped and output each frame, and when toggled off, the scene texture $S$ passes through unchanged.
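One way to express that toggle is a small piece of frame-loop state (a hypothetical sketch; `DatamoshPass` and the injected `warp` function are illustrative names, not Unreal API):

```python
class DatamoshPass:
    """Illustrative frame-loop state for the datamosh toggle."""

    def __init__(self, warp):
        self.warp = warp      # function that offsets the canvas by the velocity field
        self.canvas = None
        self.active = False

    def set_active(self, on):
        # On a fresh activation, drop the old canvas so the next frame
        # re-copies S -> C instead of moshing stale contents.
        if on and not self.active:
            self.canvas = None
        self.active = on

    def render(self, scene):
        if not self.active:
            return scene                                # inactive: output S unchanged
        if self.canvas is None:
            self.canvas = [row[:] for row in scene]     # first active frame: copy S -> C
        else:
            self.canvas = self.warp(self.canvas)        # later frames: offset C by V
        return self.canvas                              # active: output matches C

# Usage with a fake "warp" that just brightens every pixel, to make the
# state transitions visible.
pass_ = DatamoshPass(lambda c: [[v + 1 for v in row] for row in c])
frame = [[0, 0], [0, 0]]
out_inactive = pass_.render(frame)
pass_.set_active(True)
out_first = pass_.render(frame)
out_second = pass_.render(frame)
```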
## Uses
- Dead body ragdolls
Jam2Go (who also has a very good video about datamoshing that inspired me) has a short showing off the effect applied to particles:
[Full Video Here](https://www.youtube.com/shorts/ug1CjZtvagg)