The [[Jacobi Method]] is an [[Iterative Method]] for solving a [[System of Linear Equations]]. Consider a [[System of Linear Equations]] represented in matrix form,
$ \huge A\vec{x} =\vec{ b} $
where $A$ is an $n\times n$ [[Matrix]]. Let us notate the value of the $k$-th iterate as $\mathbf{\vec x}^{(k)}$. We start from an arbitrary initial guess $\mathbf{\vec x}^{(0)}$. Each following iterate $k+1$ is given by the following:
$ \huge \begin{align} \mathbf{x}_{i}^{(k+1)} &= \frac{1}{a_{ii}} \pa{ b_{i} - \sum_{j \neq i}a_{ij}x_{j}^{(k)} } \end{align} $
where $i=1,2,\dots, n$. Each next iterate $\mathbf{\vec x}^{(k+1)}$ is an [[Affine Transformations|Affine Transformation]] of the previous one, $\mathbf{\vec x}^{(k)}$. This means the update can be rearranged as the following:
$ \huge \begin{align} \mathbf{\vec{ x} }^{(k+1)} &= T \mathbf{\vec{x}}^{(k)} + \vec{c} \\ T_{ij} & = \begin{cases} - \frac{a_{ij}}{a_{ii}} & \text{if } i \neq j \\ 0 & \text{if } i = j \end{cases} \\ c_{i} &= \frac{b_{i}}{a_{ii}} \end{align} $

### Matrix Form

In matrix form we can instead represent $T$ and $\vec c$ as the following,
$ \huge \begin{align} T &= -D^{-1} (L+U) \\ \vec c &= D^{-1} \vec b \end{align} $
where $D$ is only the [[Diagonal Matrix|diagonal]] of $A$, $L$ is the strictly [[Lower Triangular Matrix|lower triangular]] part of $A$, and $U$ is the strictly [[Upper Triangular Matrix|upper triangular]] part of $A$, so that $A = L + D + U$. Splitting $A \vec x = \vec b$ this way and evaluating only the diagonal term at the new iterate gives
$\huge \begin{align} L \mathbf{\vec x}^{(k)} + D\mathbf{\vec x}^{(k+1)} + U\mathbf{\vec x}^{(k)} &= \vec b \end{align} $
$ \huge \begin{align} D \mathbf{\vec x}^{(k+1)} &= -L \mathbf{\vec x}^{(k)} - U \mathbf{\vec x}^{(k)} + \vec b \\ \mathbf{\vec x}^{(k+1)} &= - D^{-1}(L+U) \mathbf{\vec x}^{(k)} + D^{-1} \vec b \end{align} $

## Convergence

For the solution $\vec x$, let $\vec y^{(k)}= \mathbf{\vec x^{(k)}}-\vec x$. This makes $\vec y^{(k)}$ the [[Numerical Analysis Error Bounds|Error]] vector of our method at iteration $k$; the scalar error is the [[Norm]] of $\vec y^{(k)}$.
$\huge \epsilon^{(k)} = \norm{ \vec y^{(k)} } $
If this method converges, then $\vec y^{(k)}$ converges to the [[Zero Vector]].
$ \huge \lim_{ k \to \infty } \vec y^{(k)} = \vec 0 \implies \lim_{ k \to \infty } {\epsilon}^{(k)} = 0 $
Since the exact solution is a fixed point of the update, $\vec x = T \vec x + \vec c$, subtracting it from the iteration gives
$ \huge \begin{align} \vec y^{(k+1)} &= T \vec{y} ^{(k)} \\ \vec y^{(k)} &= T \vec y^{(k-1)} = T^{2}\vec y^{(k-2)} = \cdots = T^{k} \vec y^{(0)} \end{align} $
$\huge \begin{align} \epsilon^{(k)} = \norm{ T^{k}\vec y^{(0)} } &\leq \norm{ T^{k} } \norm{ \vec y^{(0)} } \\ & \leq \norm{ T } ^{k} \norm{ \mathbf{\vec x}^{(0)} - \vec x } \\ & \leq \norm{ T } ^{k} \epsilon^{(0)} \end{align} $
This means that if $||T||<1$, this method must [[Convergent Series|converge]]. Note that this is **not** an [[Biconditional|if and only if]]. If $A$ is strictly [[Diagonally Dominant]], then $||T||_{\infty} < 1$.
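To make the iteration concrete, here is a minimal NumPy sketch of the componentwise update above. The example system, the tolerance, and the `jacobi` helper name are assumptions made for this demonstration, not part of the method itself; the stopping test uses the step size $\norm{ \vec x^{(k+1)} - \vec x^{(k)} }$, which the error bound in the next section justifies.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Iterate x^(k+1) = D^{-1} (b - (L+U) x^(k)) until successive
    iterates differ by less than tol."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    d = np.diag(A)            # the diagonal D, stored as a vector
    R = A - np.diag(d)        # the off-diagonal part L + U
    for k in range(max_iter):
        x_new = (b - R @ x) / d   # the affine map T x + c in one step
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# A strictly diagonally dominant system (exact solution [1, 1, 1]),
# so ||T|| < 1 and the iteration is guaranteed to converge.
A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([13.0, 11.0, 12.0])

x, iters = jacobi(A, b)
print(x, iters, np.allclose(A @ x, b))
```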
## [[Numerical Analysis Error Bounds|Error Bound]]

If $\norm{ T }<1$, then $\norm{ (I-T)^{-1} } \leq \frac{1}{1-||T||}$. Another useful fact is that the norm of the [[Inverse Matrices|inverse]] of a matrix is at least the reciprocal of the norm of the matrix, $||A^{-1}|| \geq \frac{1}{||A||}$. The first fact follows from:
$ \large \begin{align} (I-T)(I-T)^{-1} &= I \\ I(I-T)^{-1} - T(I-T)^{-1} &= I \\ (I-T)^{-1} &= I + T(I-T)^{-1} \\ \norm{ (I-T)^{-1} } &= \norm{ I + T(I-T)^{-1} } \\ &\leq \norm{ I } + \norm{ T(I-T)^{-1} } \\ &\leq 1 + \norm{ T } \norm{ (I-T)^{-1} } \\ (1-\norm{ T } ) \norm{ (I-T)^{-1} } & \leq 1 \end{align} $
Dividing by $1-\norm{ T } > 0$ gives the bound. Now consider the step between consecutive iterates, $\Delta \vec x^{(k-1)}$. Assuming $T$ is invertible, $\vec y^{(k-1)} = T^{-1} \vec y^{(k)}$, so
$ \large \begin{align} \Delta \vec x ^{(k-1)} &= \vec x^{(k)} - \vec x^{(k-1)} \\ &=(\vec x^{(k)}-\vec x) - (\vec x^{(k-1)}-\vec x) \\ &=\vec y^{(k)} - \vec y^{(k-1)} \\ &= I\vec y^{(k)} - T^{-1} \vec y^{(k)} \\ &= (I- T^{-1}) \vec y^{(k)} \\ &= T^{-1}(T-I)\vec y^{(k)} \\ (T-I)^{-1}T \Delta \vec x^{(k-1)} &= \vec y^{(k)} \\ \norm{ (T-I)^{-1} T \Delta \vec x^{(k-1)} } &= \norm{ \vec y^{(k)} } \\ \norm{ (T-I)^{-1} } \norm{ T } \norm{ \Delta \vec x^{(k-1)} } &\geq \epsilon^{(k)} \end{align} $
Since $\norm{ (T-I)^{-1} } = \norm{ (I-T)^{-1} } \leq \frac{1}{1-\norm{ T } }$, this leaves us with our [[Numerical Analysis Error Bounds|Error Bound]],
$ \huge \epsilon^{(k)} \leq \frac{\norm{ T } }{1- \norm{ T } } \norm{ \Delta \vec x^{(k-1)} } $

### Diagonalizability

Given a matrix $T$ and a vector $\vec y$ expanded in a basis of the [[Eigenvector|Eigenvectors]] of $T$, where $\vec \lambda_{i}$ denotes the eigenvector with eigenvalue $\lambda_{i}$ and the eigenvalues are ordered so that $|\lambda_{1}| \geq |\lambda_{2}| \geq \cdots \geq |\lambda_{n}|$:
$ \huge \begin{align} \vec y &= \sum_{i} c_{i} \vec \lambda_{i} \\ T^{k} \vec y &= \sum_{i} c_{i} \lambda_{i}^{k} \vec \lambda _{i} \\ &= \lambda_{1}^{k} \pa{ c_{1} \vec \lambda_{1} + \pa{ \frac{\lambda_{2}}{\lambda_{1}} }^{k} c_{2} \vec \lambda_{2} + \cdots + \pa{ \frac{\lambda_{n}}{\lambda_{1}} }^{k} c_{n} \vec \lambda_{n} } \end{align} $
If $|\lambda_{2}| < |\lambda_{1}|$, then $\lim_{ k \to \infty } \pa{ \frac{\lambda_{2}}{\lambda_{1}} }^{k} = 0$, so for large $k$ only the dominant term survives.
$\huge \begin{align} T^{k} \vec y &\approx \lambda_{1}^{k}\pa{ c_{1} \vec \lambda_{1} + \pa{\frac{\lambda_{2}}{\lambda_{1}}}^{k} c_{2} \vec \lambda_{2} } \\ \epsilon^{(k)} &= \norm{ T^{k} \vec y } \approx \norm{ \lambda_{1}^{k} c_{1} \vec \lambda_{1} } = |\lambda_{1} |^{k} \norm{ c_{1} \vec \lambda_{1} } \end{align} $
So each iteration shrinks the error by roughly a factor of $|\lambda_{1}|$, the [[Spectral Radius]] of $T$.
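As a sanity check, here is a small NumPy experiment, using the same assumed example system as the sketch above, that verifies both results numerically: the measured error $\epsilon^{(k)}$ stays below the bound $\frac{\norm{ T }}{1-\norm{ T }} \norm{ \Delta \vec x^{(k-1)} }$, and the per-iteration error ratio settles toward the dominant eigenvalue magnitude $|\lambda_{1}|$.

```python
import numpy as np

# Same assumed example system as above (exact solution [1, 1, 1]).
A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([13.0, 11.0, 12.0])

d = np.diag(A)
T = -(A - np.diag(d)) / d[:, None]   # T = -D^{-1}(L + U): row i scaled by 1/a_ii
c = b / d                            # c = D^{-1} b

x_true = np.linalg.solve(A, b)       # reference solution for measuring eps^(k)
norm_T = np.linalg.norm(T, np.inf)   # infinity norm of T; < 1 here

x = np.zeros_like(b)
for k in range(1, 16):
    x_new = T @ x + c
    eps = np.linalg.norm(x_new - x_true, np.inf)                     # eps^(k)
    bound = norm_T / (1 - norm_T) * np.linalg.norm(x_new - x, np.inf)
    ratio = eps / np.linalg.norm(x - x_true, np.inf)                 # -> |lambda_1|
    print(f"k={k:2d}  eps={eps:.2e}  bound={bound:.2e}  ratio={ratio:.3f}")
    x = x_new

# The observed ratio approaches the spectral radius rho(T) = |lambda_1|.
print("rho(T) =", np.max(np.abs(np.linalg.eigvals(T))))
```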