
 2019-12-27 
In tonight's Royal Institution Christmas lecture, Hannah Fry and Matt Parker demonstrated how machine learning works using MENACE.
The copy of MENACE that appeared in the lecture was built and trained by me. During the training, I logged all the moves made by MENACE and the humans playing against them, and using this data I have created some visualisations of the machine's learning.
First up, here's a visualisation of the likelihood of MENACE choosing different moves as they play games. The thickness of each arrow represents the number of beads in the box corresponding to that move, so thicker arrows represent more likely moves.
The likelihood that MENACE will play each move.
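To make the arrow-thickness idea concrete, here is a minimal sketch of how a move could be drawn from a matchbox: each box maps the moves available in one position to a bead count, and a move is picked with probability proportional to its beads. The function name and the starting bead counts below are illustrative assumptions, not the code used to run or train the lecture copy of MENACE.

```python
import random

def choose_move(box):
    """box: dict mapping a move to its current number of beads."""
    moves = list(box.keys())
    weights = [box[m] for m in moves]
    # A move is picked with probability proportional to its bead count.
    return random.choices(moves, weights=weights, k=1)[0]

# Hypothetical first box: under symmetry the opening moves reduce to
# corner, edge and centre, each starting with the same number of beads.
first_box = {"corner": 4, "edge": 4, "centre": 4}
print(choose_move(first_box))
```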
There's an awful lot of arrows in this diagram, so it's clearer if we just visualise a few boxes. This animation shows how the number of beads in the first box changes over time.
The beads in the first box.
You can see that MENACE learnt that they should always play in the centre first, and ends up with a large number of green beads and almost none of the other colours. The following animations show the number of beads changing in some other boxes.
MENACE learns that the top left is a good move.
MENACE learns that the middle right is a good move.
MENACE is very likely to draw from this position so learns that almost all the possible moves are good moves.
The numbers in these boxes change less often, as they are not used in every game: they are only used when the game reaches the positions shown on the boxes.
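The bead counts in these animations change after each game according to MENACE's reward scheme. Here is a minimal sketch of that update step, assuming the classic rewards (three beads added for a win, one for a draw, one removed for a loss); the function name and the rule of never emptying a box are my assumptions, not necessarily what the lecture copy did.

```python
def update_boxes(moves_played, result):
    """moves_played: list of (box, move) pairs used in one game, where each
    box is a dict mapping moves to bead counts.
    result: 'win', 'draw' or 'loss' from MENACE's point of view."""
    # Classic MENACE rewards (assumed): win +3, draw +1, loss -1.
    reward = {"win": 3, "draw": 1, "loss": -1}[result]
    for box, move in moves_played:
        # Keep at least one bead so the move can still be chosen (an
        # assumption; the original MENACE resigns if a box runs empty).
        box[move] = max(1, box[move] + reward)
```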
We can visualise MENACE's learning progress by plotting how the number of beads in the first box changes over time.
The number of beads in MENACE's first box.
Alternatively, we could plot how the number of wins, losses and draws changes over time, or view this as an animated bar chart.
The number of games MENACE wins, loses and draws.
The number of games MENACE has won, lost and drawn.
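As a rough illustration of how such a plot could be produced from the game log, here is a short matplotlib sketch; the `results` list is made-up example data standing in for the real log from the lecture.

```python
import matplotlib.pyplot as plt

# Made-up example results standing in for the logged games.
results = ["draw", "loss", "win", "draw", "win", "win", "draw", "loss"]

# Build cumulative counts of wins, draws and losses after each game.
counts = {"win": [], "draw": [], "loss": []}
totals = {"win": 0, "draw": 0, "loss": 0}
for r in results:
    totals[r] += 1
    for outcome in counts:
        counts[outcome].append(totals[outcome])

for outcome, series in counts.items():
    plt.plot(range(1, len(results) + 1), series, label=outcome)
plt.xlabel("Games played")
plt.ylabel("Cumulative count")
plt.legend()
plt.show()
```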
If you have any ideas for other interesting ways to present this data, let me know in the comments below.
Comments

Comments in green were written by me. Comments in blue were not written by me.
@(anonymous): Have you been refreshing the page? Every time you refresh it resets MENACE to before it has learnt anything.

It takes around 80 games for MENACE to learn against the perfect AI. So it could be you've not left it playing for long enough? (Try turning the speed up to watch MENACE get better.)
Matthew
I have played around with MENACE a bit and frankly it doesn't seem to be learning. I occasionally play with it and it draws, but against the perfect AI you don't see as many draws; the perfect AI wins a lot more.
(anonymous)
@Colin: You can set MENACE playing against MENACE2 (MENACE that plays second) on the interactive MENACE. MENACE2's starting numbers of beads and incentives may need some tweaking to give it a chance though; I've been meaning to look into this in more detail at some point...
Matthew
Idle pondering (and something you may have covered elsewhere): what's the evolution as MENACE plays against itself? (Assuming MENACE can play both sides.)
Colin