mscroggs.co.uk

Blog

 2020-01-31 
This is the first post in a series of posts about my PhD thesis.
Yesterday, I handed in the final version of my PhD thesis. This is the first in a series of blog posts in which I will attempt to explain what my thesis says with minimal mathematical terminology and notation.
The aim of these posts is to give a more general audience some idea of what my research is about. If you are looking to build on my work, I recommend that you read my thesis, or my papers based on parts of it, for the full mathematical detail. You may find these posts helpful for developing a more intuitive understanding of some of the results in them.
In this post, we start at the beginning and look at what is contained in chapter 1 of my thesis.

Introduction

In general, my work looks at using discretisation to approximately solve partial differential equations (PDEs). We primarily focus on three PDEs:

If \(u\) represents the temperature in a region, the region contains no heat sources, and the heat distribution in the region is not changing, then \(u\) is a solution of Laplace's equation. Because of this application, Laplace's equation is sometimes called the steady-state heat equation.
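In standard notation, writing \(\nabla^2\) for the Laplacian, Laplace's equation is
$$\nabla^2u=0.$$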

The Helmholtz equation models acoustic waves travelling through a medium. The unknown \(u\) represents the amplitude of the wave at each point. The equation features the wavenumber \(k\), which gives the number of waves per unit distance.
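In the same notation, the Helmholtz equation is
$$\nabla^2u+k^2u=0.$$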

Maxwell's equations describe the behaviour of electromagnetic waves. The unknown \(e\) in Maxwell's equations is a vector, whereas the unknown \(u\) in the Laplace and Helmholtz equations is a scalar. This adds some difficulty when working with Maxwell's equations.
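In one common time-harmonic form (conventions vary between books), Maxwell's equations for wave problems can be written as
$$\nabla\times\nabla\times e-k^2e=0,$$
where \(k\) is again the wavenumber.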

Discretisation

In many situations, no method for finding the exact solution to these equations is known. It is therefore very important to be able to get accurate approximations of the solutions of these equations. My PhD thesis focusses on how these can be approximately solved using discretisation.
Each of the PDEs that we want to solve describes a function that has a value at every point in a 3D space: there are an (uncountably) infinite number of points, so these equations effectively have an infinite number of unknown values that we want to know. The aim of discretisation is to reduce the number of unknowns to a finite number in such a way that solving the finite problem gives a good approximation of the true solution. The finite problem can then be solved using your favourite matrix method.
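To make this concrete, here is a small Python sketch (an illustration written for this summary, not something taken from my thesis) that discretises the 1D version of the steady-state heat equation using finite differences, keeping track of the temperature at only nine points, and solves the resulting matrix problem:

import numpy as np

# Discretise u''(x) = 0 on [0, 1] with u(0) = 0 and u(1) = 1, using n
# interior points and the approximation
# u''(x) ≈ (u[i-1] - 2u[i] + u[i+1]) / h².
n = 9
h = 1 / (n + 1)
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    A[i, i] = -2 / h**2
    if i > 0:
        A[i, i - 1] = 1 / h**2
    if i < n - 1:
        A[i, i + 1] = 1 / h**2
b[-1] = -1 / h**2  # the boundary condition u(1) = 1, moved to the right-hand side

u = np.linalg.solve(A, b)  # your favourite matrix method
print(u)  # approximately 0.1, 0.2, ..., 0.9: the exact solution is u(x) = x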
For example, we could split a 2D shape into lots of small triangles, or split a 3D shape into lots of small tetrahedra. We could then try to approximate our solution with a constant inside each triangle or tetrahedron. Or we could approximate by taking a value at each vertex of the triangles or tetrahedra, and a linear function inside each shape. Or we could use quadratic functions or higher-degree polynomials to approximate the solution. This is the approach taken by a method called the finite element method.
My thesis, however, focusses on the boundary element method, a method that is closely related to the finite element method. Imagine that we want to solve a PDE problem inside a sphere. For some PDEs (such as the three above), it is possible to represent the solution in terms of some functions on the surface of the sphere. We can then approximate the solution to the problem by approximating these functions on the surface: we then only have to discretise the surface and not the whole of the inside of the sphere. This method will also work for other 3D shapes, and isn't limited to spheres.
One big advantage of this method is that it can also be used if we want to solve a problem in the (infinite) region outside an object, as even if the region is infinite, the surface of the object will be finite. The finite element method has difficulties here, as splitting the infinite region into tetrahedra would require an infinite number of tetrahedra, which defeats the point of discretisation (although this kind of problem can be solved using finite elements if you split into tetrahedra far enough outwards, then use some tricks where your tetrahedra stop). The finite element method, on the other hand, works for a much wider range of PDEs, so in many situations it is the only one of these two methods that can be used.
We focus on the boundary element method because, when it can be used, it is a very powerful method.

Sobolev spaces

Much of the first chapter of my thesis looks at the definitions of Sobolev function spaces: these spaces describe the types of functions that we will look for to solve our PDEs. It is important to frame the problems using Sobolev spaces, as properties of these spaces are used heavily when proving results about our approximations.
The definitions of these spaces are too technical to include here, but the spaces can be thought of as a way of requiring certain things of the solutions to our problems. We start by demanding that the functions be square integrable: they can be squared then integrated to give a finite answer. Other spaces are then defined by demanding that a function can be differentiated to give a square integrable derivative, or that it can be differentiated two or more times.
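In symbols, a function \(f\) is square integrable on a region \(\Omega\) if
$$\int_\Omega|f(x)|^2\,\mathrm{d}x<\infty,$$
and the space of all such functions is written \(L^2(\Omega)\).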

Boundary element method

We now look at how the boundary element method works in more detail. The "boundary" in the name refers to the surface of the object or region that we are solving problems in.
In order to be able to use the boundary element method, we need to know the Green's function of the PDE in question. (This is what limits the number of problems that the boundary element method can be used to solve.) The Green's function is a solution at every point in 3D (or 2D if you prefer) except for one point, where it takes an infinite value, but a very special type of infinite value. The Green's function gives the field due to a unit point source at the chosen point.
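For example, the Green's function of Laplace's equation in 3D is, with one common choice of scaling,
$$G(x,y)=\frac{1}{4\pi|x-y|}.$$
This blows up as \(y\) approaches the source point \(x\), but in a controlled way.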
If a region contains no sources, then the solution inside that region can be represented by adding up point sources at every point on the boundary; in other words, by integrating the Green's function times a weighting function, as the limit of adding up these contributions at lots of points is an integral. The solution could instead be represented by doing the same thing with the derivative of the Green's function.
We define two potential operators—the single layer potential \(\mathcal{V}\) and the double layer potential \(\mathcal{K}\)—to be the two operators representing the two boundary integrals: these operators can be applied to weight functions on the boundary to give solutions to the PDE in the region. Using these operators, we write a representation formula that tells us how to compute the solution inside the region once we know the weight functions on the boundary.
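For the interior Laplace problem, for example, the representation formula takes the form (with one common sign convention)
$$u=\mathcal{V}\lambda-\mathcal{K}\mu,$$
where \(\mu\) is the value of the solution on the boundary and \(\lambda\) is its normal derivative there.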
By looking at the values of these potential operators on the surface, and the derivatives of these values (and integrating these over the boundary), we can define four boundary operators: the single layer operator \(\mathsf{V}\), the double layer operator \(\mathsf{K}\), the adjoint double layer operator \(\mathsf{K}'\), and the hypersingular operator \(\mathsf{W}\). We use these operators to write the boundary integral equation that we derive from the representation formula. It is this boundary integral equation that we will apply discretisation to in order to find an approximation of the weight functions; we can then use the representation formula to find the solution to the PDE.
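The boundary element method is implemented by libraries such as Bempp. As a rough sketch of what using it looks like, the following Python snippet assembles the single layer operator \(\mathsf{V}\) for Laplace's equation on the surface of a sphere. (The function names follow one version of Bempp's Python interface, so treat the details as indicative rather than exact.)

import bempp.api

# Mesh the surface of a sphere with triangles
grid = bempp.api.shapes.regular_sphere(3)

# Functions that are constant on each triangle ("DP" = discontinuous
# polynomial, degree 0)
space = bempp.api.function_space(grid, "DP", 0)

# The single layer boundary operator V for Laplace's equation; the
# arguments are the domain, range and dual-to-range spaces
V = bempp.api.operators.boundary.laplace.single_layer(space, space, space)

# Discretising the operator produces a matrix with one row and one
# column for each triangle of the mesh
matrix = V.weak_form()
print(matrix.shape)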

The Calderón projector

Using the four boundary operators, we can define the Calderón projector, \(\mathsf{C}\):
$$\mathsf{C}=\begin{bmatrix} \tfrac12-\mathsf{K}&\mathsf{V}\\ \mathsf{W}&\tfrac12+\mathsf{K}' \end{bmatrix}$$
This is the Calderón projector for problems in a finite interior region. For exterior problems, the exterior Calderón projector is used. This is given by
$$\mathsf{C}=\begin{bmatrix} \tfrac12+\mathsf{K}&-\mathsf{V}\\ -\mathsf{W}&\tfrac12-\mathsf{K}' \end{bmatrix}.$$
We treat the Calderón projector like a matrix, even though its entries are operators, not numbers. The Calderón projector has some very useful properties that are commonly used when using the boundary element method. These include the fact that \(\mathsf{C}\) is a projection (applying it twice gives the same result as applying it once, i.e., \(\mathsf{C}^2=\mathsf{C}\)), and the fact that applying \(\mathsf{C}\) to any pair of boundary functions gives the boundary traces of a solution of the PDE.
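Writing out the entries of \(\mathsf{C}^2\) and comparing them with \(\mathsf{C}\) shows that the projection property is equivalent to the classical Calderón identities (a standard calculation, sketched here for the interior operator):
$$\mathsf{V}\mathsf{W}=\tfrac14\mathsf{Id}-\mathsf{K}^2,\qquad \mathsf{W}\mathsf{V}=\tfrac14\mathsf{Id}-(\mathsf{K}')^2,\qquad \mathsf{V}\mathsf{K}'=\mathsf{K}\mathsf{V},\qquad \mathsf{W}\mathsf{K}=\mathsf{K}'\mathsf{W}.$$
For example, the top-left entry of \(\mathsf{C}^2\) is \((\tfrac12-\mathsf{K})^2+\mathsf{V}\mathsf{W}\), which equals the corresponding entry \(\tfrac12-\mathsf{K}\) of \(\mathsf{C}\) exactly when the first identity holds.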

These properties conclude chapter 1 of my thesis. Why not take a break and fill the following diagram with hot liquid before reading on?
The results of applying the Calderón projector to two functions. By the properties above, these are (the boundary traces of) a solution of the PDE.
 2020-01-24 
This is the second post in a series of posts about matrix methods.
We want to solve \(\mathbf{A}\mathbf{x}=\mathbf{b}\), where \(\mathbf{A}\) is a (known) matrix, \(\mathbf{b}\) is a (known) vector, and \(\mathbf{x}\) is an unknown vector.
This matrix system can be thought of as a way of representing simultaneous equations. For example, the following matrix problem and system of simultaneous equations are equivalent.
\begin{align*} \begin{pmatrix}2&2\\3&2\end{pmatrix}\mathbf{x}&=\begin{pmatrix}3\\4\end{pmatrix} &&\quad&& \begin{array}{r} 2x+2y=3\\ 3x+2y=4 \end{array} \end{align*}
The simultaneous equations here would usually be solved by adding or subtracting the equations together. In this example, subtracting the first equation from the second gives \(x=1\). From there, it is not hard to find that \(y=\frac12\).
One approach to solving \(\mathbf{A}\mathbf{x}=\mathbf{b}\) is to find the inverse matrix \(\mathbf{A}^{-1}\), and use \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\). In this post, we use Gaussian elimination—a method that closely resembles the simultaneous equation method—to find \(\mathbf{A}^{-1}\).

Gaussian elimination

As an example, we will use Gaussian elimination to find the inverse of the matrix
$$\begin{pmatrix} 1&-2&4\\ -2&3&-2\\ -2&2&2 \end{pmatrix}.$$
First, write the matrix with an identity matrix next to it.
$$\left(\begin{array}{ccc|ccc} 1&-2&4&1&0&0\\ -2&3&-2&0&1&0\\ -2&2&2&0&0&1 \end{array}\right)$$
Our aim is then to use row operations to change the matrix on the left of the vertical line into the identity matrix, as the matrix on the right will then be the inverse. We are allowed to use two row operations: we can multiply a row by a scalar; or we can add a multiple of a row to another row. These operations closely resemble the steps used to solve simultaneous equations.
We will get the matrix to the left of the vertical line to be the identity in a systematic manner: our first aim is to get the first column to read 1, 0, 0. We already have the 1; to get the 0s, add 2 times the first row to both the second and third rows.
$$\left(\begin{array}{ccc|ccc} 1&-2&4&1&0&0\\ 0&-1&6&2&1&0\\ 0&-2&10&2&0&1 \end{array}\right)$$
Our next aim is to get the second column to read 0, 1, 0. To get the 1, we multiply the second row by -1.
$$\left(\begin{array}{ccc|ccc} 1&-2&4&1&0&0\\ 0&1&-6&-2&-1&0\\ 0&-2&10&2&0&1 \end{array}\right)$$
To get the 0s, we add 2 times the second row to both the first and third rows.
$$\left(\begin{array}{ccc|ccc} 1&0&-8&-3&-2&0\\ 0&1&-6&-2&-1&0\\ 0&0&-2&-2&-2&1 \end{array}\right)$$
Our final aim is to get the third column to read 0, 0, 1. To get the 1, we multiply the third row by -½.
$$\left(\begin{array}{ccc|ccc} 1&0&-8&-3&-2&0\\ 0&1&-6&-2&-1&0\\ 0&0&1&1&1&-\tfrac{1}{2} \end{array}\right)$$
To get the 0s, we add 8 and 6 times the third row to the first and second rows (respectively).
$$\left(\begin{array}{ccc|ccc} 1&0&0&5&6&-4\\ 0&1&0&4&5&-3\\ 0&0&1&1&1&-\tfrac{1}{2} \end{array}\right)$$
We have the identity on the left of the vertical bar, so we can conclude that
$$\begin{pmatrix} 1&-2&4\\ -2&3&-2\\ -2&2&2 \end{pmatrix}^{-1} = \begin{pmatrix} 5&6&-4\\ 4&5&-3\\ 1&1&-\tfrac{1}{2} \end{pmatrix}.$$
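The same procedure is straightforward to carry out in code. Here is a minimal Python sketch; like the example above, it assumes that no pivot entry is ever zero (a robust implementation would swap rows to avoid this):

import numpy as np

def inverse_by_gaussian_elimination(matrix):
    """Row reduce [matrix | identity] to [identity | inverse]."""
    n = len(matrix)
    # Write the matrix with an identity matrix next to it
    augmented = np.hstack([np.array(matrix, dtype=float), np.eye(n)])
    for i in range(n):
        # Multiply row i by a scalar so that the pivot entry becomes 1
        augmented[i] /= augmented[i, i]
        # Add multiples of row i to the other rows to zero out column i
        for j in range(n):
            if j != i:
                augmented[j] -= augmented[j, i] * augmented[i]
    # The right-hand block is now the inverse
    return augmented[:, n:]

A = [[1, -2, 4], [-2, 3, -2], [-2, 2, 2]]
print(inverse_by_gaussian_elimination(A))
# [[ 5.   6.  -4. ]
#  [ 4.   5.  -3. ]
#  [ 1.   1.  -0.5]]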

How many operations?

This method can be used on matrices of any size. We can imagine doing this with an \(n\times n\) matrix and look at how many operations the method will require, as this will give us an idea of how long this method would take for very large matrices. Here, we count each use of \(+\), \(-\), \(\times\) and \(\div\) as a (floating point) operation (often called a flop).
Let's think about what needs to be done to get the \(i\)th column of the matrix equal to 0, ..., 0, 1, 0, ..., 0.
First, we need to divide everything in the \(i\)th row by the value in the \(i\)th row and \(i\)th column. The first \(i-1\) entries in the row will already be 0s though, so there is no need to divide these. This leaves \(n-(i-1)\) entries that need to be divided, so this step takes \(n-(i-1)\), or \(n+1-i\), operations.
Next, for each other row (let's call this the \(j\)th row), we add or subtract a multiple of the \(i\)th row from the \(j\)th row. (Again the first \(i-1\) entries can be ignored as they are 0.) Multiplying the \(i\)th row takes \(n+1-i\) operations, then adding/subtracting takes another \(n+1-i\) operations. This needs to be done for \(n-1\) rows, so takes a total of \(2(n-1)(n+1-i)\) operations.
After these two steps, we have finished with the \(i\)th column, in a total of \((2n-1)(n+1-i)\) operations.
We have to do this for each \(i\) from 1 to \(n\), so the total number of operations to complete Gaussian elimination is
$$ (2n-1)(n+1-1) + (2n-1)(n+1-2) +...+ (2n-1)(n+1-n) $$
This simplifies to $$\tfrac12n(2n-1)(n+1)$$ or $$n^3+\tfrac12n^2-\tfrac12n.$$
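This simplification can be sanity checked with a couple of lines of Python:

# Compare the summed operation count with the simplified formula
for n in range(1, 20):
    total = sum((2 * n - 1) * (n + 1 - i) for i in range(1, n + 1))
    assert total == n**3 + (n**2 - n) // 2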
The highest power of \(n\) is \(n^3\), so we say that this algorithm is an order \(n^3\) algorithm, often written \(\mathcal{O}(n^3)\). We focus on the highest power of \(n\) as if \(n\) is very large, \(n^3\) will be by far the largest number in the expression, so gives us an idea of how fast/slow this algorithm will be for large matrices.
\(n^3\) is not a bad start—it's far better than \(n^4\), \(n^5\), or \(2^n\)—but there are methods out there that will take less than \(n^3\) operations. We'll see some of these later in this series.
 2020-01-23 
This is the first post in a series of posts about matrix methods.
When you first learn about matrices, you learn that in order to multiply two matrices, you use this strange-looking method involving the rows of the left matrix and the columns of the right.
It isn't immediately clear why this should be the way to multiply matrices. In this blog post, we look at why this is the definition of matrix multiplication.

Simultaneous equations

Matrices can be thought of as representing a system of simultaneous equations. For example, solving the matrix problem
$$ \begin{bmatrix}2&5&2\\1&0&-2\\3&1&1\end{bmatrix} \begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}14\\-16\\-4\end{pmatrix} $$
is equivalent to solving the following simultaneous equations.
\begin{align*} 2x+5y+2z&=14\\ 1x+0y-2z&=-16\\ 3x+1y+1z&=-4 \end{align*}

Two matrices

Now, let \(\mathbf{A}\) and \(\mathbf{C}\) be two 3×3 matrices, let \(\mathbf{b}\) be a vector with three elements, and let \(\mathbf{x}\) be a vector of three unknowns. We consider the equation
$$\mathbf{A}\mathbf{C}\mathbf{x}=\mathbf{b}.$$
In order to understand what this equation means, we let \(\mathbf{y}=\mathbf{C}\mathbf{x}\) and think about solving the two simultaneous matrix equations,
\begin{align*} \mathbf{A}\mathbf{y}&=\mathbf{b}\\ \mathbf{C}\mathbf{x}&=\mathbf{y}. \end{align*}
We can write the entries of \(\mathbf{A}\), \(\mathbf{C}\), \(\mathbf{x}\), \(\mathbf{y}\) and \(\mathbf{b}\) as
\begin{align*} \mathbf{A}&=\begin{bmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{bmatrix} & \mathbf{C}&=\begin{bmatrix} c_{11}&c_{12}&c_{13}\\ c_{21}&c_{22}&c_{23}\\ c_{31}&c_{32}&c_{33} \end{bmatrix} \end{align*} \begin{align*} \mathbf{x}&=\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} & \mathbf{y}&=\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} & \mathbf{b}&=\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix} \end{align*}
We can then write out the simultaneous equations that \(\mathbf{A}\mathbf{y}=\mathbf{b}\) and \(\mathbf{C}\mathbf{x}=\mathbf{y}\) represent:
\begin{align} a_{11}y_1+a_{12}y_2+a_{13}y_3&=b_1& c_{11}x_1+c_{12}x_2+c_{13}x_3&=y_1\\ a_{21}y_1+a_{22}y_2+a_{23}y_3&=b_2& c_{21}x_1+c_{22}x_2+c_{23}x_3&=y_2\\ a_{31}y_1+a_{32}y_2+a_{33}y_3&=b_3& c_{31}x_1+c_{32}x_2+c_{33}x_3&=y_3\\ \end{align}
Substituting the equations on the right into those on the left gives:
\begin{align} a_{11}(c_{11}x_1+c_{12}x_2+c_{13}x_3)+a_{12}(c_{21}x_1+c_{22}x_2+c_{23}x_3)+a_{13}(c_{31}x_1+c_{32}x_2+c_{33}x_3)&=b_1\\ a_{21}(c_{11}x_1+c_{12}x_2+c_{13}x_3)+a_{22}(c_{21}x_1+c_{22}x_2+c_{23}x_3)+a_{23}(c_{31}x_1+c_{32}x_2+c_{33}x_3)&=b_2\\ a_{31}(c_{11}x_1+c_{12}x_2+c_{13}x_3)+a_{32}(c_{21}x_1+c_{22}x_2+c_{23}x_3)+a_{33}(c_{31}x_1+c_{32}x_2+c_{33}x_3)&=b_3\\ \end{align}
Gathering the terms containing \(x_1\), \(x_2\) and \(x_3\) leads to:
\begin{align} (a_{11}c_{11}+a_{12}c_{21}+a_{13}c_{31})x_1 +(a_{11}c_{12}+a_{12}c_{22}+a_{13}c_{32})x_2 +(a_{11}c_{13}+a_{12}c_{23}+a_{13}c_{33})x_3&=b_1\\ (a_{21}c_{11}+a_{22}c_{21}+a_{23}c_{31})x_1 +(a_{21}c_{12}+a_{22}c_{22}+a_{23}c_{32})x_2 +(a_{21}c_{13}+a_{22}c_{23}+a_{23}c_{33})x_3&=b_2\\ (a_{31}c_{11}+a_{32}c_{21}+a_{33}c_{31})x_1 +(a_{31}c_{12}+a_{32}c_{22}+a_{33}c_{32})x_2 +(a_{31}c_{13}+a_{32}c_{23}+a_{33}c_{33})x_3&=b_3 \end{align}
We can write this as a matrix:
$$ \begin{bmatrix} a_{11}c_{11}+a_{12}c_{21}+a_{13}c_{31}& a_{11}c_{12}+a_{12}c_{22}+a_{13}c_{32}& a_{11}c_{13}+a_{12}c_{23}+a_{13}c_{33}\\ a_{21}c_{11}+a_{22}c_{21}+a_{23}c_{31}& a_{21}c_{12}+a_{22}c_{22}+a_{23}c_{32}& a_{21}c_{13}+a_{22}c_{23}+a_{23}c_{33}\\ a_{31}c_{11}+a_{32}c_{21}+a_{33}c_{31}& a_{31}c_{12}+a_{32}c_{22}+a_{33}c_{32}& a_{31}c_{13}+a_{32}c_{23}+a_{33}c_{33} \end{bmatrix} \mathbf{x}=\mathbf{b} $$
This equation is equivalent to \(\mathbf{A}\mathbf{C}\mathbf{x}=\mathbf{b}\), so the matrix above is equal to \(\mathbf{A}\mathbf{C}\). But this matrix is what you get if you follow the row-and-column matrix multiplication method, and so we can see why this definition makes sense.
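This equivalence is easy to check numerically. The following Python snippet picks random \(\mathbf{A}\), \(\mathbf{C}\) and \(\mathbf{x}\), and confirms that applying \(\mathbf{C}\) then \(\mathbf{A}\) gives the same result as applying the product \(\mathbf{A}\mathbf{C}\):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
C = rng.random((3, 3))
x = rng.random(3)

y = C @ x                # C x = y
b_two_steps = A @ y      # A y = b
b_product = (A @ C) @ x  # (A C) x = b

print(np.allclose(b_two_steps, b_product))  # True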
 2020-01-02 
It's 2020, and the Advent calendar has disappeared, so it's time to reveal the answers and announce the winners. But first, some good news: with your help, Santa and his two reindeer were found and Christmas was saved!
Now that the competition is over, the questions and all the answers can be found here. Before announcing the winners, I'm going to go through some of my favourite puzzles from the calendar, reveal the solutions, and share a couple of notes and Easter eggs.

Highlights

My first highlight is this puzzle from 10 December that seems difficult to get started on, but plotting the two quadratics reveals a very useful piece of information that can be used.

10 December

For all values of \(x\), the function \(f(x)=ax+b\) satisfies
$$8x-8-x^2\leqslant f(x)\leqslant x^2.$$
What is \(f(65)\)?
Edit: The left-hand quadratic originally said \(8-8x-x^2\). This was a typo and has now been corrected.



My next highlight is the puzzle from 12 December, which was election day in the UK. Although the puzzle isn't that difficult or interesting to calculate, the answer is surprising enough to make this one of my favourites.

12 December

For a general election, the Advent isles are split into 650 constituencies. In each constituency, exactly 99 people vote: everyone votes for one of the two main parties: the Rum party or the Land party. The party that receives the most votes in each constituency gets an MAP (Member of Advent Parliament) elected to parliament to represent that constituency.
In this year's election, exactly half of the 64350 total voters voted for the Rum party. What is the largest number of MAPs that the Rum party could have?



My next highlight is the puzzle from 13 December. If you enjoyed this one, then you can find a puzzle based on a similar idea on the puzzles pages of issue 10 of Chalkdust.

13 December

Each clue in this crossnumber (except 5A) gives a property of that answer that is true of no other answer. For example: 7A is a multiple of 13; but 1A, 3A, 5A, 1D, 2D, 4D, and 6D are all not multiples of 13. No number starts with 0.



My final highlight is the puzzle from 16 December. I always include a few of these, as they can be designed to give any answer so are useful for making the final logic puzzle work. But I was particularly happy with this one.

16 December

Arrange the digits 1-9 in a 3×3 square so that: the median number in the first row is 6; the median number in the second row is 3; the mean of the numbers in the third row is 4; the mean of the numbers in the second column is 7; and the range of the numbers in the third column is 2. The 3-digit number in the first column is today's number.


Notes and Easter eggs

A few of you may have noticed the relevance of the colours assigned to each island: Rum (Red Rum), Land (Greenland), Moon (Blue Moon), and County (Orange County).
Once you've entered 24 answers, the calendar checks these and tells you how many are correct. I logged the answers that were sent for checking and have looked at these to see which puzzles were the most and least commonly incorrect. The bar chart below shows the total number of incorrect attempts at each question.
You can see that the most difficult puzzles were those on 4 and 15 December; and the easiest puzzles were on 3, 11, 17, and 24 December.
This year, the final logic puzzle revealed the positions of Santa, Rudolph and Blitzen, then you had to find them on this map. The map has 6 levels, with 81 positions on each level, so the total size of the map is \(81^6=282\,429\,536\,481\) squares. This is a lot; one Advent solver even wondered how large a cross stitch of the whole thing would be. I obviously didn't draw 282 billion tiles: the whole map was generated using the following tiles, which were reused across the map.
The map tiles.
I also snuck a small Easter egg into the map. Below the Advent calendar, this example was shown:
The ASC coordinates of this pair of flowers are 12.52.12.13.84.55.
If you actually visited this position on the map, you found Wally.
Wally.
At least one Advent solver appears to have found Wally, as they left this cryptic comment under the name Dr Matrix (an excellent Martin Gardner reference).

The solution

The solutions to all the individual puzzles can be found here.
Using the daily clues, it was possible to work out that Santa was at 36.11.19.79.79.43, Blitzen was at 23.12.23.11.23.11, and Rudolph was at 16.64.16.16.16.64.
Santa (left), Blitzen (centre) and Rudolph (right).

The winners

And finally (and maybe most importantly), on to the winners: 126 people found Santa and both reindeer and entered the competition. Their (very) approximate locations are shown on this map.
From the correct answers, the following 10 winners were selected:
1. Valentin Valciu
2. Tom Anderson
3. Alex Bolton
4. Kevin Fray
5. Jack Kiuttu
6. Ben Baker
7. Joe Gage
8. Michael Thomas
9. Martin Holtham
10. Beth Jensen
Congratulations! Your prizes will be on their way shortly.
The prizes this year include 2019 Advent calendar T-shirts. If you didn't win one, but would like one of these, you can buy one at mscroggs.co.uk/tshirt until 7 January (when I will be sending them for printing).
The design on the T-shirt.
Additionally, well done to Adam Abrams, Adam NH, Adam Vellender, Alan Buck, Alex Ayres, Alex Burlton, Alexander Ignatenkov, Andrew Ennaco, Andrew Tindall, Artie Smith, Ashley Jarvis, Austin Antoniou, Becky Russell, Ben Jones, Ben Reiniger, Brennan Dolson, Brian Carnes, Carl Westerlund, Carmen Guenther, Clare Wallace, Colin Beveridge, Connie, Corbin Groothuis, Cory Peters, Dan Colestock, Dan DiMillo, Dan Whitman, David, David Ault, David Fox, Diane Keimel, Duncan Ramage, Emilie Heidenreich, Emily Troyer, Eric, Eric Kolbusz, Erik Eklund, Evan Louis Robinson, Frances Haigh, Franklin Ta, Fred Verheul, Félix Breton, Gabriella Pinter, Gautam Webb, Gert-Jan de Vries, Hart Andrin, Heerpal Sahota, Herschel Pecker, Jacob Juillerat, Jan, Jean-Noël Monette, Jen Shackley, Jeremiah Southwick, Jessica Marsh, Johan Asplund, John Warwicker, Jon Foster, Jon Palin, Jonathan Chaffer, Jonathan Winfield, Jose, Kai, Karen Climis, Karen Kendel, Katja Labermeyer, Laura, Lewis Dyer, Louis de Mendonca, M Oostrom, Magnus Eklund, Mahmood Hikmet, Marian Clegg, Mark Stambaugh, Martin Harris, Martine Vijn Nome, Matt Hutton, Matthew Askins, Matthew Schulz, Maximilian Pfister, Melissa Lucas, Mels, Michael DeLyser, Michael Gustin, Michael Horst, Michael Prescott, Mihai Zsisku, Mike, Mikko, Moreno Gennari, Nadine Chaurand, Naomi Bowler, Nathan C, Pat Ashforth, Paul Livesey, Pranshu Gaba, Raymond Arndorfer, Riccardo Lani, Rosie Paterson, Rupinder Matharu, Russ Collins, S A Paget, SShaikh, Sam Butler, Sam Hartburn, Scott, Seth Cohen, Shivanshi Adlakha, Simon Schneider, Stephen Cappella, Stephen Dainty, Steve Blay, Thomas Tu, Tom Anderson, Tony Mann, Yasha, and Yuliya Nesterova, who all also submitted the correct answer but were too unlucky to win prizes this time.
See you all next December, when the Advent calendar will return.

Comments

It's interesting that the three puzzles with the most incorrect attempts can all be looked up on OEIS.

Day 15 - https://oeis.org/A001055 - "number of ways of factoring n with all factors greater than 1"

Day 4 - https://oeis.org/A001045 - "number of ways to tile a 2 × (n-1) rectangle with 1 × 2 dominoes and 2 × 2 squares"

Day 2 - https://oeis.org/A002623 - "number of nondegenerate triangles that can be made from rods of length 1,2,3,4,...,n"
Adam Abrams
Never mind, I found them, they were your example ASC. Very clever!
Michael T
Where did the coordinates for Wally come from? Are they meaningful, or are they just some random coordinates thrown together?
Michael T
 2019-12-27 
In tonight's Royal Institution Christmas lecture, Hannah Fry and Matt Parker demonstrated how machine learning works using MENACE.
The copy of MENACE that appeared in the lecture was built and trained by me. During the training, I logged all the moves made by MENACE and the humans playing against them, and using this data I have created some visualisations of the machine's learning.
First up, here's a visualisation of the likelihood of MENACE choosing different moves as they play games. The thickness of each arrow represents the number of beads in the box corresponding to that move, so thicker arrows represent more likely moves.
The likelihood that MENACE will play each move.
There's an awful lot of arrows in this diagram, so it's clearer if we just visualise a few boxes. This animation shows how the number of beads in the first box changes over time.
The beads in the first box.
You can see that MENACE learnt that they should always play in the centre first, ending up with a large number of green beads and almost none of the other colours. The following animations show the number of beads changing in some other boxes.
MENACE learns that the top left is a good move.
MENACE learns that the middle right is a good move.
MENACE is very likely to draw from this position so learns that almost all the possible moves are good moves.
The numbers in these boxes change less often, as they are not used in every game: they are only used when the game reaches the positions shown on the boxes.
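For readers who haven't met MENACE before, here is a rough Python sketch of the learning rule being visualised. The starting bead counts and the win/draw/loss incentives here are the commonly quoted ones (win +3, draw +1, loss −1) and may differ from the numbers used in the lecture copy:

import random

# One box: beads for each move available in the corresponding position
box = {"centre": 4, "corner": 4, "edge": 4}

def choose_move(box):
    """Pick a move at random, weighted by the number of beads."""
    moves = list(box)
    return random.choices(moves, weights=[box[m] for m in moves])[0]

def update(box, move, result):
    """Reinforce good moves and punish bad ones after each game."""
    if result == "win":
        box[move] += 3                     # add three beads of that colour
    elif result == "draw":
        box[move] += 1                     # add one bead
    else:
        box[move] = max(box[move] - 1, 0)  # remove a bead after a loss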
We can visualise MENACE's learning progress by plotting how the number of beads in the first box changes over time.
The number of beads in MENACE's first box.
Alternatively, we could plot how the number of wins, losses and draws changes over time, or view this as an animated bar chart.
The number of games MENACE wins, loses and draws.
The number of games MENACE has won, lost and drawn.
If you have any ideas for other interesting ways to present this data, let me know in the comments below.


Comments

@(anonymous): Have you been refreshing the page? Every time you refresh it resets MENACE to before it has learnt anything.

It takes around 80 games for MENACE to learn against the perfect AI. So it could be you've not left it playing for long enough? (Try turning the speed up to watch MENACE get better.)
Matthew
I have played around menace a bit and frankly it doesnt seem to be learning i occasionally play with it and it draws but againt the perfect ai you dont see as many draws, the perfect ai wins alot more
(anonymous)
@Colin: You can set MENACE playing against MENACE2 (MENACE that plays second) on the interactive MENACE. MENACE2's starting numbers of beads and incentives may need some tweaking to give it a chance though; I've been meaning to look into this in more detail at some point...
Matthew
Idle pondering (and something you may have covered elsewhere): what's the evolution as MENACE plays against itself? (Assuming MENACE can play both sides.)
Colin

© Matthew Scroggs 2012–2024