A Lagrange [[Polynomial]] is an [[Interpolation]] method: given $n$ [[Point|points]], it constructs the unique degree $n-1$ [[Polynomial]] passing through them.
$\huge \begin{align}
L_{i}(x) &=
\prod_{j=1,j \neq i}^{n}
\frac{x-x_{j}}{x_{i}-x_{j}} \\
P(x) &= \sum_{i=1}^{n} y_{i}L_{i}(x)
\end{align}
$
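The two formulas above can be translated directly into code. A minimal sketch (a direct transcription of the formulas, not a numerically optimized implementation; the function name is illustrative):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the degree-(n-1) Lagrange interpolant through
    the points (xs[i], ys[i]) at the location x."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        L_i = 1.0
        for j in range(n):
            if j != i:
                L_i *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * L_i  # P(x) = sum_i y_i * L_i(x)
    return total

# The unique quadratic through (1,1), (2,4), (3,9) is x^2:
print(lagrange_interpolate([1.0, 2.0, 3.0], [1.0, 4.0, 9.0], 4.0))  # 16.0
```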
### Derivation
Given a [[Set|set]] of $n$ points $\set{(x_{1},y_{1}), \dots, (x_{n},y_{n})}$ with distinct $x_{i}$, let $P(x)$ be the degree $n-1$ [[Polynomial]] such that $P(x_{i})=y_{i}$.
$\huge P(x) = \sum_{j=0}^{n-1} a_{j}x^{j} $
Substituting each constraint point into $P$ gives a [[System of Linear Equations]] in the coefficients $a_{j}$.
$\huge \begin{cases}
a_{n-1}x_{1}^{n-1} + a_{n-2}x_{1}^{n-2} +\cdots + a_{0} &= y_{1} \\
a_{n-1}x_{2}^{n-1} + a_{n-2}x_{2}^{n-2} +\cdots + a_{0} &= y_{2} \\
&\vdots\\
a_{n-1}x_{n}^{n-1} + a_{n-2}x_{n}^{n-2} + \cdots + a_{0} &= y_{n}
\end{cases} $
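This system can be solved directly. A sketch that builds the coefficient (Vandermonde) matrix and solves it by Gaussian elimination (the function name is illustrative; note this matrix becomes badly conditioned as $n$ grows, which motivates the basis construction that follows):

```python
def vandermonde_solve(xs, ys):
    """Solve for coefficients a_0, ..., a_{n-1} of P(x) = sum_j a_j x^j
    passing through the points (xs[i], ys[i])."""
    n = len(xs)
    # Augmented matrix: row i is [x_i^0, x_i^1, ..., x_i^{n-1} | y_i]
    A = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    # Back substitution
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (A[i][n] - s) / A[i][i]
    return coeffs  # [a_0, a_1, ..., a_{n-1}]

# Points on y = x^2, so the coefficients come out as [0, 0, 1]:
print(vandermonde_solve([1.0, 2.0, 3.0], [1.0, 4.0, 9.0]))
```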
We can think of the [[Set|set]] of all [[Polynomial|polynomials]] as a [[Vector Space]].
We can now construct, for each constraint point $x_{i}$, a degree $n-1$ [[Polynomial]]; for example, for $i=1$:
$\huge \begin{align}
L_{1}(x) = \frac{(x-x_{2})(x-x_{3})\cdots(x-x_{n})}{
(x_{1}-x_{2})(x_{1}-x_{3}) \cdots(x_{1}-x_{n})
}
\end{align}$
$\huge
L_{i}(x) =
\prod_{j=1,j \neq i}^{n}
\frac{x-x_{j}}{x_{i}-x_{j}} $
The defining property of $L_{i}(x)$ is that it equals $0$ at every constraint point $x_{j}$ *except* $x_{i}$, where it equals $1$:
$\huge \begin{align}
L_{i}(x_{i}) &= 1 \\
L_{i}(x_{j}) &= 0 \,\,\pa{j\neq i} \\
\end{align}
$
This gives us $n$ [[Vector|Vectors]] in this [[Vector Space]] $L_{1},L_{2},\dots,L_{n}$.
We can construct the final interpolated [[Polynomial]] with these [[Vector|Vectors]] as a [[Basis]] through this [[Linear Combination]].
$\huge P(x) = \sum_{i=1}^{n} y_{i}L_{i}(x) $
This works as interpolation because evaluating $P$ at a constraint point $x_{i}$ makes every $L_{j}(x_{i})=0$ except $L_{i}(x_{i})=1$; since each term in the summation is scaled by its $y_{j}$, only the term $y_{i}L_{i}(x_{i})=y_{i}$ survives, therefore:
$\huge P(x_{i}) = y_{i} $
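A quick numeric check of the $L_{i}(x_{j})$ property (a sketch; the node values are arbitrary): evaluating every basis polynomial at every constraint point yields the identity matrix, which is exactly why $P(x_{i})=y_{i}$.

```python
def lagrange_basis(xs, i, x):
    """Evaluate the i-th Lagrange basis polynomial L_i at x."""
    L = 1.0
    for j in range(len(xs)):
        if j != i:
            L *= (x - xs[j]) / (xs[i] - xs[j])
    return L

xs = [0.0, 1.0, 2.0, 3.0]
# Row i holds [L_i(x_0), L_i(x_1), L_i(x_2), L_i(x_3)]
table = [[round(lagrange_basis(xs, i, xj)) for xj in xs] for i in range(len(xs))]
print(table)  # [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```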
>[!note]
>This interpolation method suffers from [[Runge's Phenomenon]]: with many equally spaced points, $P(x)$ oscillates wildly near the edges of the interval, and it is even less reliable beyond the minimum and maximum of the $x_{i}$ (extrapolation).
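A minimal sketch of the phenomenon, interpolating Runge's classic example $f(x)=1/(1+25x^{2})$ at equally spaced points (the exact error values depend on the evaluation grid; the point is only that the worst-case error grows as $n$ increases):

```python
def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange_interpolate(xs, ys, x):
    # Same P(x) = sum_i y_i L_i(x) formula as above.
    total = 0.0
    for i in range(len(xs)):
        L_i = 1.0
        for j in range(len(xs)):
            if j != i:
                L_i *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * L_i
    return total

grid = [-1.0 + 2.0 * k / 400 for k in range(401)]  # fine grid on [-1, 1]
for n in (5, 11, 21):
    xs = [-1.0 + 2.0 * k / (n - 1) for k in range(n)]
    ys = [runge(x) for x in xs]
    worst = max(abs(lagrange_interpolate(xs, ys, x) - runge(x)) for x in grid)
    print(n, worst)  # worst-case error grows with n for equispaced nodes
```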
### [[Numerical Analysis Error Bounds|Error Bounds]]
Let $f\in \mathcal C^{n}[a,b]$ ($f$ is $n$ times continuously [[Derivative|differentiable]] over $[a,b]$) be our original function.
As motivation, recall the [[Truncated Taylor Series]] of $f$ about a point $p\in[a,b]$, valid [[Universal Quantifier|for any]] $x\in[a,b]$:
$\huge \begin{align}
f(x) = f(p) + f'(p)(x-p) + \frac{1}{2}f''(\xi)(x-p)^{2}
\end{align}$
For some $\xi\in[a,b]$
Let our constraint points $x_{1},x_{2},\dots,x_{n}\in[a,b]$.
Let $P(x)$ be the interpolating degree $n-1$ [[Polynomial]]:
$\huge \forall i:\; P(x_{i})=f(x_{i}) $
[[Universal Quantifier|For any]] $x\in[a,b]$:
$\huge
f(x) = P(x) + \frac{1}{n!} \mathcal D^{n}\set{f}(\xi)
\prod_{i=1}^{n}(x-x_{i})
$
For some $\xi\in[a,b]$.
Note that this shows our polynomial has $0$ error at the points $x_{1},x_{2},\dots,x_{n}$: for $x=x_{i}$, the product on the right contains the factor $(x_{i}-x_{i})=0$.
The [[Numerical Analysis Error Bounds|Error Bound]] is itself a function of $x$ (for a given $P$ and $f$) and can be rearranged to:
$\huge
\epsilon\set{P,f}(x) = \frac{1}{n!} \deriv{^{n}f}{x^{n}}(\xi)
\prod_{i=1}^{n}(x-x_{i})
$
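As a sanity check of this bound (a sketch; $f=\sin$ is chosen because every derivative is bounded by $1$ in absolute value, so the $\xi$-dependent factor can be replaced by $1$):

```python
import math

def lagrange_interpolate(xs, ys, x):
    # Same P(x) = sum_i y_i L_i(x) formula as above.
    total = 0.0
    for i in range(len(xs)):
        L_i = 1.0
        for j in range(len(xs)):
            if j != i:
                L_i *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * L_i
    return total

xs = [0.0, 1.0, 2.0]                 # n = 3 constraint points
ys = [math.sin(x) for x in xs]
x = 1.5
actual_error = abs(math.sin(x) - lagrange_interpolate(xs, ys, x))
# |f'''| = |cos| <= 1 on [0, 2], so the bound is (1/3!) * |prod_i (x - x_i)|
bound = (1.0 / math.factorial(3)) * abs((x - xs[0]) * (x - xs[1]) * (x - xs[2]))
print(actual_error <= bound)  # True
```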