Sniff is a "Scratch-like" programming language that's designed to help Scratchers move gently from Scratch to more conventional languages. They can start writing programs, without having to learn a new language because Sniff is based on Scratch. They learn a little more about variables, compiling, syntax errors (!), and they can have fun controlling real hardware while they're doing it.

Tuesday, 7 April 2015

Arduino Dice

In the previous post I outlined the hardware of the multifunction shield. The plan now is to build a few simple projects based around it, so let's start with something really simple. We have a button, and we have a screen, so let's generate some random numbers and make an electronic dice.

make button1 digital input A1
make segLatch digital output 4
make segClock digital output 7
make segData digital output 8

make segVal number
when shiftOut
.repeat 8
..if segVal > 127
...set segData to on
...set segVal to segVal-128
..else
...set segData to off
..set segVal to segVal*2
..set segClock to on
..set segClock to off
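The shiftOut script is classic MSB-first bit-banging: test whether the top bit of segVal is set, drive the data pin to match, strip that bit off, then double segVal so the next bit moves to the top. Here's the same bit-extraction logic as a Python sketch you can poke at on a desktop machine (the returned list stands in for the states of the data pin):

```python
def shift_out_bits(seg_val):
    """Return the 8 bits clocked out MSB-first, mirroring the Sniff loop."""
    bits = []
    for _ in range(8):
        if seg_val > 127:          # is the top bit set?
            bits.append(1)         # segData on
            seg_val -= 128         # strip the top bit
        else:
            bits.append(0)         # segData off
        seg_val *= 2               # move the next bit into the top position
    return bits

# 192 = 0b11000000: two high bits set, six clear
print(shift_out_bits(192))  # [1, 1, 0, 0, 0, 0, 0, 0]
```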

make digit number
make digitVal number
make decimalP boolean
when showDigit
.set segVal to 255
.if digitVal=0
..set segVal to 192
.if digitVal=1
..set segVal to 249
.if digitVal=2
..set segVal to 164
.if digitVal=3
..set segVal to 176
.if digitVal=4
..set segVal to 153
.if digitVal=5
..set segVal to 146
.if digitVal=6
..set segVal to 130
.if digitVal=7
..set segVal to 248
.if digitVal=8
..set segVal to 128
.if digitVal=9
..set segVal to 144
.if decimalP
..change segVal by -128
.set segLatch to off
.broadcast shiftOut and wait
.if digit=1
..set segVal to 241
.if digit=2
..set segVal to 242
.if digit=3
..set segVal to 244
.if digit=4
..set segVal to 248
.broadcast shiftOut and wait
.set segLatch to on
.wait 2 millisecs

Everything above is standard code which sets up the 7 segment display on the multifunction shield. showDigit lets us display a value on any of the four digits, with an optional decimal point.
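The magic numbers in showDigit are segment patterns: one bit per segment, active low, so a 0 bit lights a segment. Here's a quick Python sketch to decode them (the bit-to-segment order is the usual one for this shield's display, so treat the mapping as an assumption):

```python
SEGMENTS = "abcdefg."  # bit 0 = segment a ... bit 6 = g, bit 7 = decimal point

def lit_segments(pattern):
    """Return the segments a 0-bit (active low) switches on."""
    return [SEGMENTS[i] for i in range(8) if not (pattern >> i) & 1]

# 192 = 0b11000000: a "0" needs segments a-f lit, with g and the point off
print(lit_segments(192))  # ['a', 'b', 'c', 'd', 'e', 'f']
```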

Normally we'd also include code to display the number held in the string message across the digits of the display, but as we just want to show a single digit, we can simplify things (and make it look more fun) by displaying the same value on all four digits:

when start
.set decimalP to no
.forever
..repeat 4 using digit
...broadcast showDigit and wait

Now to generate a random number every time we press the button:

when start
.forever
..wait until not button1
..set digitVal to pick random 1 to 6
..wait 0.1 secs
..wait until button1
..wait 0.1 secs

We wait until the button is pressed (logic low), then set digitVal to a random number from 1 to 6.

After the button is pressed and we've updated digitVal, we wait 0.1 seconds, then wait for the button to go high again. This is to "debounce" the switch - when you press a switch it can bounce a little, flipping high and low for a fraction of a second before settling firmly. Adding a little delay stops this causing trouble.

Now try it:


Never gamble with a computer! There's a serious problem. Every time you run the code you get exactly the same results. Well, it's a computer after all - programming depends on computers doing the same thing over and over again. When you ask a computer to generate a random number, it runs some code to "make something up", but every time that code is run it does the same thing, so we always end up with the same sequence!
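You can see this on any desktop machine: start Python's pseudo-random generator from the same seed twice, and the "random" rolls come out identical:

```python
import random

# Two generators started from the same seed...
r1 = random.Random(42)
r2 = random.Random(42)

rolls1 = [r1.randint(1, 6) for _ in range(10)]
rolls2 = [r2.randint(1, 6) for _ in range(10)]

print(rolls1)
print(rolls1 == rolls2)  # True: same seed, same "random" sequence
```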

In fact it's really hard to generate random numbers with a computer. Why do you think the lottery uses balls with numbers written on them, rather than something "smarter"? There are actually companies that will sell you random numbers (they let you have them free for non-commercial use!).

We need to add some genuinely random element to the system - like a user. Here's an alternative version of the code:

when start
.forever if not button1
..set digitVal to pick random 1 to 6
..wait 0.01 secs

We pick a new random number every 100th of a second until the user lets go of the button. As the numbers are changing too fast for us to see, and we can't accurately time the moment we remove our finger from the button, this is pretty random.

But how could we test it?

If you were just interested in a little light hardware hacking, you should probably stop reading now... I'm about to go back into a whole load of maths!!! Sorry... bad habit.

Well we could start by throwing a sequence of numbers and seeing what it looks like:


I generated 24 numbers, which means we'd expect to see 4 of each number:


Not even close! I got hardly any 2's or 4's. What's the chance of rolling a dice 24 times and only getting one 4, on the very last go? Well, the chance of not getting a 4 on the first roll is 5/6, on two rolls it's 5/6*5/6, and so on, which means the chance of 23 non-4's followed by a 4 is (5/6)^23 * 1/6, or about 0.25% - not very likely.
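That 0.25% figure is easy to check:

```python
# 23 rolls that each avoid a 4 (probability 5/6), then a 4 on the 24th (1/6)
p = (5 / 6) ** 23 * (1 / 6)
print(round(p * 100, 2), "%")  # about 0.25 %
```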

But not having many fours is just one of the many things we might notice. There are 6 values on the dice. Any of them coming up only once would be equally suspicious. Or not coming up at all. Or coming up as the first and last number, or coming up in a massive streak in the middle - there are lots of "interesting" patterns we could spot. In fact perhaps the most "interesting" thing that could happen is for nothing interesting to happen. If I didn't see at least something suspicious, I'd be very suspicious!

This is sometimes called the Texas sharpshooter fallacy - after a guy who fires a load of bullets into the side of a barn, then walks over and paints a target around the largest cluster. We need to say what we're looking for prior to collecting the data. Here we've seen something interesting in the data, and calculated the chance of it happening - except of course the real chance of it happening is 1. IT ALREADY HAPPENED. We started with a sequence with only one 4. That's what made us look for only one 4, so of course we found only one 4!!!!

So how do we tell if that sequence is dodgy? To do it properly we need to start with an assumption (called the null hypothesis), and then look for evidence that it's not true! In this case our null hypothesis is that the dice is fair, and any number is equally likely on any given throw. Then we throw the dice, and do a Chi Squared test.

OK - the formula looks scary: chi squared is the sum, over every side, of (Oi-Ei)^2/Ei. But let's break it down. Ei is the number of times we Expect a number to appear - we threw 24 times, and have 6 sides, so Ei=4. Oi is the number of times it actually happened (Observed). So for each side we calculate (O-4)^2/4, and then add them all up. Here's the calculation for the previous run:

1:6: (6-4)^2/4=1
2:1: (1-4)^2/4=2.25
3:7: (7-4)^2/4=2.25
4:1: (1-4)^2/4=2.25
5:4: (4-4)^2/4=0
6:5: (5-4)^2/4=0.25

Adding these gives us a chi squared value of 8. We then compare this to a value from a table. The only catch is that there are several versions of the table, depending on the "degrees of freedom". Degrees of freedom is just a fancy way of saying how many different things can change, and in this case our dice has 6 sides, so we have six values that can change... BUT the last side doesn't count! Why? Because if I tell you I threw 24 times, and how many times I got the values 1-5, you already know the final count, so the number of degrees of freedom is actually only 5.
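The whole test fits in a few lines of Python, using the counts from the 24 throws above:

```python
observed = [6, 1, 7, 1, 4, 5]   # how often each face 1..6 came up
expected = sum(observed) / 6    # 24 throws over 6 faces = 4 each

chi_squared = sum((o - expected) ** 2 / expected for o in observed)
print(chi_squared)  # 8.0

# One fewer than the number of faces: the last count is fixed by the rest
degrees_of_freedom = len(observed) - 1
```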

[Table of chi squared critical values: probability less than the critical value]

Here's the table we need to look things up in. If the number we get is bigger than the one in the table, then our "null hypothesis" is unlikely. If we get a value greater than 11, then there's only a 5% chance of getting that result if the dice is fair, and we "reject the null hypothesis".

We got a value less than 9.2, so it's fair, right? No!!!!! We haven't shown it's fair - just that we haven't got enough evidence to show that it's NOT fair. Even worse - even if it is fair, we'd still think it was unfair 10% of the time!

So let's try to collect more evidence.

make counts list of numbers

make trials number

make rolled number
make tmp number

make expected number
make chi number

when rollFairDice
.delete all of counts
.repeat 6
..add 0 to counts
.repeat trials
..set rolled to pick random 1 to 6
..set tmp to item rolled of counts
..change tmp by 1
..replace item rolled of counts with tmp

when start
.set trials to 10*6
.say join "Trials:"[trials]
.broadcast rollFairDice and wait
.set expected to trials/6
.say join "Expected:"[expected]
.set chi to 0
.repeat 6 using rolled
..say join join [rolled] ":" [item rolled of counts]
..set tmp to item rolled of counts
..set tmp to tmp-expected
..set tmp to tmp * tmp
..set tmp to tmp/expected
..change chi by tmp
.say join "chi:"[chi]

This code "rolls the dice" 60 times (so we'd expect a count of around 10 for each face), and on my Mac produces:


600 trials

6000 trials

We can see that the chi squared value changes, but not in any obvious way - different numbers of trials can produce some bigger values. Remember that, just by chance, we'll get a value of 15 or more 1 time in 100, no matter how many trials we run!
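The Sniff experiment translates almost line for line into Python, if you'd like to rerun it quickly on a desktop machine (seeded here so the run is repeatable; the helper function name is mine, not part of the Sniff code):

```python
import random

def chi_squared_for_dice(trials, roll, sides=6):
    """Tally `trials` calls of `roll()` and return (counts, chi squared)."""
    counts = [0] * sides
    for _ in range(trials):
        counts[roll() - 1] += 1
    expected = trials / sides
    chi = sum((c - expected) ** 2 / expected for c in counts)
    return counts, chi

rng = random.Random(1)
counts, chi = chi_squared_for_dice(600, lambda: rng.randint(1, 6))
print(counts, round(chi, 2))
```

Try different trial counts and seeds: for a fair dice, chi squared wanders around its degrees of freedom (5) rather than growing.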

What lots of trials will do is make that number grow really big, by accumulating evidence - if there is anything fishy going on. Here we're looking for evidence of impropriety when in fact there is none, which is pretty dull... so let's start cheating:

when rollBiasDice
.delete all of counts
.repeat 6
..add 0 to counts
.repeat trials
..set rolled to pick random 1 to 7
..if rolled=7
...set rolled to 6
..set tmp to item rolled of counts
..change tmp by 1
..replace item rolled of counts with tmp

Now 6 will come up twice as often as any other number. Let's roll 24 numbers like we did before:



It really doesn't look that different! A chi squared of 3.5 means the test turned up nothing suspicious, so we're all good, right? NO! We just haven't got enough evidence yet. We KNOW we're cheating, and 6 has come up more times than we'd expect, but the nature of probability is that this could easily happen by chance (in fact 3 has come up more than 6, and that is purely chance).
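If you want to experiment without the hardware, the cheat is easy to replicate in Python - and it shows why 24 rolls isn't enough: you need a lot of throws before the 2:1 bias stands out in the raw counts (the 7000-roll figure below is just picked to make it obvious):

```python
import random

rng = random.Random(7)  # seeded so the run is repeatable

def biased_roll():
    """Faces 1-5 with probability 1/7 each; 6 with probability 2/7."""
    r = rng.randint(1, 7)
    return 6 if r == 7 else r

counts = [0] * 6
for _ in range(7000):
    counts[biased_roll() - 1] += 1

print(counts)  # face 6 ends up with roughly twice the count of any other
```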

Let's do 120 rolls:

Here we're starting to see 6 pull clearly ahead, and the chi squared value reflects that, but statistically we're not there yet. If we're running a casino, then we're on the lookout for lots of 6's, but that's not what we're testing here - an unusually high or low count of any number would be enough to reject the null hypothesis that the dice is fair, so there are many ways we could get a result this "interesting". Maybe we should have used the null hypothesis "6 appears with probability 1/6", but that would have meant different numbers for the test, and might have produced a higher chi squared. Remember the sharpshooter: we're not allowed to change the test after we've seen the results!

But look what happens when we put the number of trials up to 240:

Chi squared keeps going up because now we really are collecting more evidence - but looking at the table, this (or something "equally interesting") could still happen more than 1 time in 100.

Going to 480 trials finally kicks us over the edge:

I think the takeaway here is that you can't prove the null hypothesis - only collect evidence to reject it. When we ran our tests with a fair dice, nothing much happened. Increasing the trials didn't change anything - if all is fair, then chi squared will be a bit bigger or smaller at random. We keep testing, but we never accumulate evidence, because there's none to find. However, when we did have something dodgy going on, increasing the trials gave us more and more evidence.
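There's also a back-of-envelope way to see why the biased dice needed hundreds of rolls: for an unfair dice the expected chi squared grows roughly linearly with the number of throws, while for a fair one it stays flat at the degrees of freedom. This is a standard noncentrality calculation; the numbers below assume the 2/7 bias we built in:

```python
p_fair = 1 / 6
p_biased = [1 / 7] * 5 + [2 / 7]  # the cheat: 6 comes up twice as often

# Evidence added per roll: sum over the faces of (p - 1/6)^2 / (1/6)
rate = sum((p - p_fair) ** 2 / p_fair for p in p_biased)

for n in (24, 120, 240, 480):
    # Roughly degrees of freedom (5) plus n * rate
    print(n, round(5 + n * rate, 1))  # 24→7.4, 120→17.2, 240→29.5, 480→54.0
```

So at 24 rolls the expected value (about 7.4) sits comfortably inside the range a fair dice produces, while by 480 rolls (about 54) it dwarfs every critical value in the table - though individual runs scatter widely around these averages, which is why 240 rolls was still borderline.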

There's a guy who takes this pretty seriously: he built a machine to roll dice, and then read off the values with a camera and an image recognition system!

This really was supposed to be a fun/simple hardware build, and I guess it got sidetracked, but I think it was worth it. You really should simulate a biased dice (making it more or less biased), and see how long the bias takes to detect. Running stuff like this in code helps you get a much better feel for the maths.

Also, while Chi Squared is A level maths, the actual implementation is pretty easy. Implementing this sort of code is something you could do at KS4, and it would make a lot of sense tied to some kind of practical work.
