Gaussian elimination

 2020-01-24 
This is the second post in a series of posts about matrix methods.
We want to solve \(\mathbf{A}\mathbf{x}=\mathbf{b}\), where \(\mathbf{A}\) is a (known) matrix, \(\mathbf{b}\) is a (known) vector, and \(\mathbf{x}\) is an unknown vector.
This matrix system can be thought of as a way of representing simultaneous equations. For example, the following matrix problem and system of simultaneous equations are equivalent.
\begin{align*} \begin{pmatrix}2&2\\3&2\end{pmatrix}\mathbf{x}&=\begin{pmatrix}3\\4\end{pmatrix} &&\quad&& \begin{array}{r} 2x+2y=3\\ 3x+2y=4 \end{array} \end{align*}
The simultaneous equations here would usually be solved by adding or subtracting the equations together. In this example, subtracting the first equation from the second gives \(x=1\). From there, it is not hard to find that \(y=\frac12\).
One approach to solving \(\mathbf{A}\mathbf{x}=\mathbf{b}\) is to find the inverse matrix \(\mathbf{A}^{-1}\), and use \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\). In this post, we use Gaussian elimination—a method that closely resembles the simultaneous equation method—to find \(\mathbf{A}^{-1}\).
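For the small system above, this can be checked numerically. Here's a minimal sketch in Python, assuming NumPy is available (an assumption; nothing in this post depends on any particular library):

```python
import numpy as np

# The example system from above: 2x + 2y = 3 and 3x + 2y = 4.
A = np.array([[2.0, 2.0],
              [3.0, 2.0]])
b = np.array([3.0, 4.0])

# Solve via the inverse, as described in the text.
x = np.linalg.inv(A) @ b
print(x)  # [1.  0.5]
```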

Gaussian elimination

As an example, we will use Gaussian elimination to find the inverse of the matrix
$$\begin{pmatrix} 1&-2&4\\ -2&3&-2\\ -2&2&2 \end{pmatrix}.$$
First, write the matrix with an identity matrix next to it.
$$\left(\begin{array}{ccc|ccc} 1&-2&4&1&0&0\\ -2&3&-2&0&1&0\\ -2&2&2&0&0&1 \end{array}\right)$$
Our aim is then to use row operations to change the matrix on the left of the vertical line into the identity matrix, as the matrix on the right will then be the inverse. We are allowed to use two row operations: we can multiply a row by a scalar; or we can add a multiple of a row to another row. These operations closely resemble the steps used to solve simultaneous equations.
We will turn the matrix to the left of the vertical line into the identity in a systematic manner: our first aim is to get the first column to read 1, 0, 0. We already have the 1; to get the 0s, add 2 times the first row to both the second and third rows.
$$\left(\begin{array}{ccc|ccc} 1&-2&4&1&0&0\\ 0&-1&6&2&1&0\\ 0&-2&10&2&0&1 \end{array}\right)$$
Our next aim is to get the second column to read 0, 1, 0. To get the 1, we multiply the second row by -1.
$$\left(\begin{array}{ccc|ccc} 1&-2&4&1&0&0\\ 0&1&-6&-2&-1&0\\ 0&-2&10&2&0&1 \end{array}\right)$$
To get the 0s, we add 2 times the second row to both the first and third rows.
$$\left(\begin{array}{ccc|ccc} 1&0&-8&-3&-2&0\\ 0&1&-6&-2&-1&0\\ 0&0&-2&-2&-2&1 \end{array}\right)$$
Our final aim is to get the third column to read 0, 0, 1. To get the 1, we multiply the third row by -½.
$$\left(\begin{array}{ccc|ccc} 1&0&-8&-3&-2&0\\ 0&1&-6&-2&-1&0\\ 0&0&1&1&1&-\tfrac{1}{2} \end{array}\right)$$
To get the 0s, we add 8 and 6 times the third row to the first and second rows (respectively).
$$\left(\begin{array}{ccc|ccc} 1&0&0&5&6&-4\\ 0&1&0&4&5&-3\\ 0&0&1&1&1&-\tfrac{1}{2} \end{array}\right)$$
We have the identity on the left of the vertical bar, so we can conclude that
$$\begin{pmatrix} 1&-2&4\\ -2&3&-2\\ -2&2&2 \end{pmatrix}^{-1} = \begin{pmatrix} 5&6&-4\\ 4&5&-3\\ 1&1&-\tfrac{1}{2} \end{pmatrix}.$$
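The steps above translate directly into code. Here is a minimal sketch of the whole procedure in Python (an illustration, not code from this site): `invert` is a hypothetical helper that assumes every pivot it meets is non-zero, as happens in this example; a robust implementation would also swap rows to avoid zero pivots.

```python
def invert(matrix):
    """Invert a square matrix by row-reducing [matrix | identity].

    Assumes every pivot encountered is non-zero; a robust version
    would swap rows to find a non-zero pivot when needed.
    """
    n = len(matrix)
    # Write the matrix with an identity matrix next to it.
    aug = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(matrix)]
    for i in range(n):
        # Multiply the ith row by a scalar so the pivot becomes 1.
        pivot = aug[i][i]
        aug[i] = [entry / pivot for entry in aug[i]]
        # Add multiples of the ith row to the other rows to zero
        # the rest of column i.
        for j in range(n):
            if j != i:
                factor = aug[j][i]
                aug[j] = [a - factor * b for a, b in zip(aug[j], aug[i])]
    # The matrix to the right of the bar is now the inverse.
    return [row[n:] for row in aug]

print(invert([[1, -2, 4], [-2, 3, -2], [-2, 2, 2]]))
# [[5.0, 6.0, -4.0], [4.0, 5.0, -3.0], [1.0, 1.0, -0.5]]
```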

How many operations?

This method can be used on matrices of any size. We can imagine doing this with an \(n\times n\) matrix and count how many operations the method requires, as this will give us an idea of how long it would take for very large matrices. Here, we count each use of \(+\), \(-\), \(\times\) and \(\div\) as a (floating point) operation (often called a flop).
Let's think about what needs to be done to get the \(i\)th column of the matrix equal to 0, ..., 0, 1, 0, ..., 0.
First, we need to divide everything in the \(i\)th row by the value in the \(i\)th row and \(i\)th column. The first \(i-1\) entries of the row will already be 0s though, so there is no need to divide these. This leaves \(n-(i-1)\) entries that need to be divided, so this step takes \(n-(i-1)\), or \(n+1-i\), operations. (For simplicity, we only count operations on entries to the left of the vertical bar; including the right-hand columns would change the constants below but not the conclusion.)
Next, for each other row (let's call this the \(j\)th row), we add or subtract a multiple of the \(i\)th row from the \(j\)th row. (Again the first \(i-1\) entries can be ignored as they are 0.) Multiplying the \(i\)th row takes \(n+1-i\) operations, then adding/subtracting takes another \(n+1-i\) operations. This needs to be done for \(n-1\) rows, so takes a total of \(2(n-1)(n+1-i)\) operations.
After these two steps, we have finished with the \(i\)th column, in a total of \((2n-1)(n+1-i)\) operations.
We have to do this for each \(i\) from 1 to \(n\), so the total number of operations to complete Gaussian elimination is
$$(2n-1)(n+1-1) + (2n-1)(n+1-2) + \dots + (2n-1)(n+1-n).$$
The terms \(n+1-i\) run through the values \(n\), \(n-1\), ..., 1, so this sum is \((2n-1)\times\tfrac12n(n+1)\). This simplifies to $$\tfrac12n(2n-1)(n+1)$$ or $$n^3+\tfrac12n^2-\tfrac12n.$$
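As a sanity check of this formula, the operations can be tallied in code. Below is a short sketch in Python; `count_operations` is a hypothetical helper that counts exactly as the argument above does (so, like the argument, it ignores the entries to the right of the bar).

```python
def count_operations(n):
    """Tally the flops used by Gaussian elimination on an n-by-n matrix,
    counting exactly as in the argument above."""
    total = 0
    for i in range(1, n + 1):
        total += n + 1 - i                  # dividing the ith row by its pivot
        total += 2 * (n - 1) * (n + 1 - i)  # updating the other n - 1 rows
    return total

# The tally agrees with n^3 + n^2/2 - n/2 for every n tried.
for n in [2, 5, 10, 100]:
    assert count_operations(n) == n**3 + (n**2 - n) // 2
    print(n, count_operations(n))
```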
The highest power of \(n\) in this count is \(n^3\), so we say that this algorithm is an order \(n^3\) algorithm, often written \(\mathcal{O}(n^3)\). We focus on the highest power of \(n\) because, if \(n\) is very large, \(n^3\) will be by far the largest term in the expression, so it gives us an idea of how fast or slow this algorithm will be for large matrices.
\(n^3\) is not a bad start—it's far better than \(n^4\), \(n^5\), or \(2^n\)—but there are methods out there that will take less than \(n^3\) operations. We'll see some of these later in this series.
Previous post in series: Matrix multiplication
Next post in series: Inverting a matrix

© Matthew Scroggs 2012–2020