 2019-12-27 
In tonight's Royal Institution Christmas lecture, Hannah Fry and Matt Parker demonstrated how machine learning works using MENACE.
The copy of MENACE that appeared in the lecture was built and trained by me. During the training, I logged all the moves made by MENACE and the humans playing against them, and using this data I have created some visualisations of the machine's learning.
First up, here's a visualisation of the likelihood of MENACE choosing different moves as they play games. The thickness of each arrow represents the number of beads in the box corresponding to that move, so thicker arrows represent more likely moves.
The likelihood that MENACE will play each move.
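If you'd like to draw a similar diagram yourself, the arrow thicknesses come straight from the bead counts: each move's probability is its number of beads divided by the total number of beads in that box. Here's a minimal Python sketch of that calculation, using made-up bead numbers rather than the real data.

```python
# A minimal sketch: convert a box's bead counts into the move
# probabilities that set the arrow thicknesses.
# The bead numbers below are illustrative, not real data.
beads = {"top left": 4, "top middle": 2, "centre": 17, "bottom right": 1}

total = sum(beads.values())
for move, count in beads.items():
    print(f"{move}: {count} beads, probability {count / total:.2f}")
```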
There's an awful lot of arrows in this diagram, so it's clearer if we just visualise a few boxes. This animation shows how the number of beads in the first box changes over time.
The beads in the first box.
You can see that MENACE learnt that they should always play in the centre first, and ends up with a large number of green beads and almost none of the other colours. The following animations show the number of beads changing in some other boxes.
MENACE learns that the top left is a good move.
MENACE learns that the middle right is a good move.
MENACE is very likely to draw from this position so learns that almost all the possible moves are good moves.
The numbers of beads in these boxes change less often, as they are not used in every game: they are only used when the game reaches the positions shown on the boxes.
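The learning itself comes from adjusting these bead counts at the end of each game: beads are added to the boxes MENACE used when they win or draw, and removed when they lose. Here's a rough sketch of that update, assuming the usual incentives of +3 beads for a win, +1 for a draw and -1 for a loss; the exact numbers in this copy of MENACE may differ.

```python
# A sketch of MENACE's reinforcement step, assuming the usual
# incentives of +3 beads for a win, +1 for a draw and -1 for a loss;
# the exact numbers used in this copy of MENACE may differ.
REWARD = {"win": 3, "draw": 1, "lose": -1}

def update_boxes(boxes, moves_played, result):
    """boxes maps each board position to a dict of bead counts per move;
    moves_played lists the (position, move) pairs MENACE used in one game."""
    for position, move in moves_played:
        new_count = boxes[position][move] + REWARD[result]
        boxes[position][move] = max(new_count, 0)  # a box can't hold negative beads
```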
We can visualise MENACE's learning progress by plotting how the number of beads in the first box changes over time.
The number of beads in MENACE's first box.
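If you want to recreate this plot from your own games, something along these lines would do it. The log format used here (a list of bead counts recorded after each game) is an assumption for illustration, not the format I actually logged.

```python
# A sketch of plotting bead counts over time with matplotlib.
# The history list is a hypothetical log (one dict of bead counts
# per game) with illustrative numbers only.
import matplotlib.pyplot as plt

history = [
    {"centre": 4, "corner": 4, "edge": 4},
    {"centre": 7, "corner": 3, "edge": 4},
    {"centre": 10, "corner": 3, "edge": 3},
]

games = range(1, len(history) + 1)
for move in history[0]:
    plt.plot(games, [counts[move] for counts in history], label=move)

plt.xlabel("Games played")
plt.ylabel("Beads in MENACE's first box")
plt.legend()
plt.show()
```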
Alternatively, we could plot how the number of wins, losses and draws changes over time, or view this as an animated bar chart.
The number of games MENACE wins, loses and draws.
The number of games MENACE has won, lost and drawn.
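The cumulative results can be plotted in much the same way; the results list below is a made-up stand-in for the real game log.

```python
# A sketch of a cumulative wins/draws/losses plot, using a made-up
# list of game results in place of the real log.
from itertools import accumulate
import matplotlib.pyplot as plt

results = ["lose", "draw", "lose", "win", "draw", "win", "win"]

games = range(1, len(results) + 1)
for outcome in ("win", "draw", "lose"):
    plt.plot(games, list(accumulate(int(r == outcome) for r in results)), label=outcome)

plt.xlabel("Games played")
plt.ylabel("Cumulative number of games")
plt.legend()
plt.show()
```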
If you have any ideas for other interesting ways to present this data, let me know in the comments below.


Comments

Comments in green were written by me. Comments in blue were not written by me.
@(anonymous): Have you been refreshing the page? Every time you refresh it resets MENACE to before it has learnt anything.

It takes around 80 games for MENACE to learn against the perfect AI. So it could be you've not left it playing for long enough? (Try turning the speed up to watch MENACE get better.)
Matthew
I have played around with MENACE a bit and frankly it doesn't seem to be learning. I occasionally play with it and it draws, but against the perfect AI you don't see as many draws; the perfect AI wins a lot more.
(anonymous)
@Colin: You can set MENACE playing against MENACE2 (MENACE that plays second) on the interactive MENACE. MENACE2's starting numbers of beads and incentives may need some tweaking to give it a chance though; I've been meaning to look into this in more detail at some point...
Matthew
Idle pondering (and something you may have covered elsewhere): what's the evolution as MENACE plays against itself? (Assuming MENACE can play both sides.)
Colin