Visualising MENACE's learning

 2019-12-27 
In tonight's Royal Institution Christmas lecture, Hannah Fry and Matt Parker demonstrated how machine learning works using MENACE.
The copy of MENACE that appeared in the lecture was built and trained by me. During the training, I logged all the moves made by MENACE and by the humans playing against them, and using this data I have created some visualisations of the machine's learning.
First up, here's a visualisation of the likelihood of MENACE choosing different moves as they play games. The thickness of each arrow represents the number of beads in the box corresponding to that move, so thicker arrows represent more likely moves.
The likelihood that MENACE will play each move.
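The thickness of an arrow is just a bead count, so MENACE choosing a move amounts to drawing a bead at random from the box for the current position. Here's a rough sketch of that idea in Python; the board numbering and the bead counts are made up for illustration rather than taken from the real machine.

import random

def choose_move(box):
    """Pick a move from a box: each move's chance is proportional to its bead count."""
    moves = list(box.keys())
    beads = [box[move] for move in moves]
    return random.choices(moves, weights=beads, k=1)[0]

# A made-up first box part-way through training; square 4 is the centre.
first_box = {0: 3, 1: 1, 2: 3, 3: 1, 4: 12, 5: 1, 6: 3, 7: 1, 8: 3}
print(choose_move(first_box))  # most often prints 4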
There's an awful lot of arrows in this diagram, so it's clearer if we just visualise a few boxes. This animation shows how the number of beads in the first box changes over time.
The beads in the first box.
You can see that MENACE learnt that they should always play in the centre first, and ends up with a large number of green beads and almost none of the other colours. The following animations show the number of beads changing in some other boxes.
MENACE learns that the top left is a good move.
MENACE learns that the middle right is a good move.
MENACE is very likely to draw from this position so learns that almost all the possible moves are good moves.
The numbers of beads in these boxes change less often, as they are not used in every game: they are only used when the game reaches the positions shown on the boxes.
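The bead counts change because of the reinforcement step at the end of each game: each bead MENACE played is rewarded or taken away depending on the result. The sketch below uses the classic MENACE incentives of +3 beads for a win, +1 for a draw and -1 for a loss; these values are an assumption and may not exactly match the ones used in the lecture.

def update_boxes(boxes, moves_played, result):
    """boxes: maps a board position to its box (a dict of move -> bead count).
    moves_played: list of (position, move) pairs MENACE played this game.
    result: "win", "draw" or "loss" from MENACE's point of view."""
    reward = {"win": 3, "draw": 1, "loss": -1}[result]
    for position, move in moves_played:
        # Bead counts never go below zero; a real MENACE resigns if a box empties.
        boxes[position][move] = max(boxes[position][move] + reward, 0)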
We can visualise MENACE's learning progress by plotting how the number of beads in the first box changes over time.
The number of beads in MENACE's first box.
Alternatively, we could plot how the number of wins, losses and draws changes over time, or view this as an animated bar chart.
The number of games MENACE wins, loses and draws.
The number of games MENACE has won, lost and drawn.
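If you want to recreate the wins/losses/draws plot from your own games against MENACE, the data behind it is just a running tally after each game. Something like this sketch would do it; the list of results is invented purely as an example.

import matplotlib.pyplot as plt

def cumulative_counts(results):
    """Turn a list of results ("win"/"draw"/"loss") into running totals."""
    totals = {"win": 0, "draw": 0, "loss": 0}
    history = {outcome: [] for outcome in totals}
    for result in results:
        totals[result] += 1
        for outcome in history:
            history[outcome].append(totals[outcome])
    return history

results = ["loss", "loss", "draw", "win", "draw", "win", "win"]  # invented example
for outcome, counts in cumulative_counts(results).items():
    plt.plot(range(1, len(results) + 1), counts, label=outcome)
plt.xlabel("games played")
plt.ylabel("number of games")
plt.legend()
plt.show()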
If you have any ideas for other interesting ways to present this data, let me know in the comments below.

Similar posts

Building MENACEs for other games
MENACE at Manchester Science Festival
MENACE
MENACE in fiction

Comments

Comments in green were written by me. Comments in blue were not written by me.
@(anonymous): Have you been refreshing the page? Every time you refresh it resets MENACE to before it has learnt anything.

It takes around 80 games for MENACE to learn against the perfect AI. So it could be you've not left it playing for long enough? (Try turning the speed up to watch MENACE get better.)
Matthew
I have played around with MENACE a bit and frankly it doesn't seem to be learning. I occasionally play with it and it draws, but against the perfect AI you don't see as many draws; the perfect AI wins a lot more.
(anonymous)
@Colin: You can set MENACE playing against MENACE2 (MENACE that plays second) on the interactive MENACE. MENACE2's starting numbers of beads and incentives may need some tweaking to give it a chance though; I've been meaning to look into this in more detail at some point...
Matthew
Idle pondering (and something you may have covered elsewhere): what's the evolution as MENACE plays against itself? (Assuming MENACE can play both sides.)
Colin