Sniff is a "Scratch-like" programming language that's designed to help Scratchers move gently from Scratch to more conventional languages. They can start writing programs, without having to learn a new language because Sniff is based on Scratch. They learn a little more about variables, compiling, syntax errors (!), and they can have fun controlling real hardware while they're doing it.

Monday, 8 December 2014

A simpler Christmas tree!

While the BU Xmas tree was a lot of fun, it was a shame we didn't have the time to implement a neoPixel solution. If you're planning something on a more domestic scale then it's really easy to make Christmas tree lights that will amaze your friends and family!

I started with these: a string of 50 12mm neoPixels (which I got from Embedded Adventures, but they're available elsewhere). They're called 12mm because the housing is 12mm in diameter, but the LED part is actually about 8mm. Anyway, they're perfect for Xmas lights!

To wire them up, you first need to work out which end of the string is the input - they're not labelled, but you can see a chip inside the housing. The input is the same side as the chip, while the output is on the bottom.

Once you've found the input end you're faced with three wires connected to a plug, and two bare ends. The bare ends are the power, and at least in the ones I have, earth is blue and 5V is red. Apart from the red wire, the other clue to confirm this is that the bare earth wire should match the earth wire that's connected to the plug (i.e. I've got two blue wires). You should wire these up to a decent 5V power source - USB or the Arduino's regulator isn't a great idea, as they're simply not powerful enough.

To wire the plug up you can buy suitable cables but I just used three breadboard jumper cables. The earth is the important one. This leaves data and clock. If you get these wrong, then nothing bad happens - it just doesn't work. Plug them into pins 2 and 3 of an arduino and you're good to go.

Data and clock you say? I thought neoPixels ran from a single pin? Well yes, ws2811 neoPixels do, but these larger strings tend to use ws2801 chips which have a separate clock. The good news is that makes them MUCH easier to code for, so the Sniff driver for them should be more portable.

We've covered neoPixels before, and the only change is that now we need to create the device as:

make neoPixel ws2801 device 2

which places data and clock on pins 2 and 3 (n, n+1). To use the same code with ws2811s just make:

make neoPixel ws2811 device 2

Now to make the ultimate xmas tree lights:

make neoPixel ws2801 device 2
make neoColor list of number

make ledA number
make ledOffset number
make ledCount number

when start
.set ledCount to 50
.forever
..set ledA to 1
..delete all of neoColor
..repeat ledCount
...add ((ledA/ledCount*(sin of timer)+timer*0.1)*360) mod 360 to neoColor
...change ledA by 1
..tell neoPixel to "show"

All we have to do is fill in the neoColor list with the colour we want for each LED. It uses a hue value from 0 to 360 to represent the colour.

The colour changes in two ways: as we move along the string ledA increments, so we get colour changes along the string. However we use sin of timer to control how much the colour changes along the string. Over 360 seconds it will go from being constant along the string to changing a lot, and then back again.

We also add timer*0.1 so that the colours march along the string. Finally we scale by 360, and then mod 360 so that the answer is in the correct range.
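As a sketch of the maths (in Python rather than Sniff for illustration, with led_count and timer standing in for the Sniff variables; Sniff's trig works in degrees, hence the radians conversion):

```python
import math

def led_hues(led_count, timer):
    """Compute a hue (0-360) for each LED, mirroring the Sniff formula:
    ((ledA/ledCount * sin(timer) + timer*0.1) * 360) mod 360."""
    hues = []
    for led in range(1, led_count + 1):
        # sin of timer controls how much the hue varies along the string;
        # timer*0.1 makes the pattern march along over time.
        h = ((led / led_count) * math.sin(math.radians(timer)) + timer * 0.1) * 360
        hues.append(h % 360)
    return hues

hues = led_hues(50, timer=10)
# every entry is a valid hue in the range 0-360
```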

I've just let them free run, but you could easily add IR remote control, or use a PIR so the lights change when someone goes near the tree!

Release 13 - for Xmas projects!

Release 13 is now available for download from the downloads page!

It includes some minor tweaks and fixes. It also includes source for the BU Xmas tree project.

It also includes support for WS2801 type neoPixels which use data and clock pins. There are some great devices out there that use this type that are perfect for xmas decorations. If you're using ws2811 type, then you'll now need to create them using:

make neoPixel ws2811 device 2

The code should then work as before.

Monday, 1 December 2014

Merry Xmas

About a week ago I got an email from the University marketing and PR department. This is odd because I don't know anyone in marketing and PR, but someone had suggested they contact me about a potential project... Could we build a Christmas tree that responds to tweets, so that students could message it and somehow vote on its colour? TRY AND STOP ME!

There are a few similar projects out there, including a few scary ones that switch 240V with an Arduino using relays... Estates are NOT going to let me do that, and in any case there's no way I'm building my own mains switching equipment! So of course my thoughts immediately went to neoPixels. The 2812 strips might work OK, but if I wrote the code to drive the older 2801 then there are strings of pixels that are basically Xmas tree lights. The largest ones available are 45mm diameter, so they would be perfect for even a pretty big tree.

My only worries at this stage were setting up a power supply for a lot of these, and getting them ordered and working in time... at this point I had no more information, but I was pretty sure they would want their Xmas tree some time in December, and it was now late November. I had to run as much background preparation as possible, so I was ready if this thing actually happened.

For the controller I decided to use an Arduino Yun - the Linux side would be able to pull the messages from twitter, and then the AVR side would control the lights.

AVR Side (take 1)

To communicate between the two sides of the Yun we need to use the Bridge device. This can do a lot of neat things; one of them is creating a shared dictionary. This holds a set of names, each with a value. Both the AVR and Linux sides can read and modify the values. On the AVR we can write:

make bridge device

make bridgeKeystore device
make bridgeKey string
make bridgeValue string

...set bridgeKey to "green"
...tell bridgeKeystore to "get"
...set greenBrightness to value of bridgeValue

While on the Linux side we can write to these through the web server's REST API (a URL that controls something): http://arduino.local/data/put/green/42

I hooked all this together and got something that animated a line of neoPixels in any number of different colours, with the number of each colour being controlled by values put in the bridge keystore.
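A minimal sketch of driving that REST API from a script (Python for illustration; the key name and the arduino.local hostname follow the example above):

```python
def put_url(host, key, value):
    """Build the Yun bridge keystore 'put' URL, e.g.
    http://arduino.local/data/put/green/42"""
    return "http://%s/data/put/%s/%s" % (host, key, value)

url = put_url("arduino.local", "green", 42)
# On the Yun you'd then fetch this URL (with curl, or Python's
# urllib.request.urlopen) and the value 42 would be stored under
# the key "green" for the AVR side to read.
```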

Linux Side

That just left pulling the tweets down and counting them... It turns out that while this is easy in principle - Twitter provide an API, so you just fetch the results from a URL - there's a gotcha... you need to authenticate first! There are tools and libraries to help, but they all tend to have dependencies, so to get one working you need a bunch of other things working, which need other things... Fine on a desktop machine, but more of a problem on the Yun, where the available packages are limited. Then I found this excellent script. It pulls down a list of tweets (which is easy), but more importantly it authenticates, using openssl to do the work. That means it runs as-is on the Yun (you need to first run: opkg install openssl).

I made a few mods so that it would pull down mentions, so people could tweet @bournemouthuni, and we'd find it.

The next problem is that the results are in JSON. We need to pull out the actual messages from that. Once again there are a bunch of tools and libraries, but we need a lightweight one that runs on the Yun. "jq" seems pretty excellent and I used it for testing, but to run on the Yun I used "python -mjson.tool". That pretty-prints the JSON so we can pull out the results.

So I've now got a working prototype, which picks up tweets and drives neoPixels... time for a meeting with marketing!

Scaling things up...

At this point Nat (the marketing guy) casually dropped the slightly significant information that the university wasn't using their usual 3m tree, but was expecting delivery of a 6m tree... in the next few days!

Now in principle this doesn't change the basic tech, but from a practical point of view I could order 100 or so large neoPixels, keep them powered, and climb a ladder to install them on a 3m tree. Assuming we could get hold of 1000 jumbo neoPixels (double the height = 8x the volume) within a week, and power them, there's still the slight issue of getting them 6m in the air.

Nat had prepared for this, and a team of professional Xmas light installers (yes, apparently that's a career path that nobody told you about in high school!) would be handling the lights themselves. Given the short timescale, we'd need to rely on them to source and install LED lights, leaving us to control them.

AVR Side take 2

Fortunately I had a plan B all along. A few frantic calls to the Xmas tree guy (who in turn called their electrical guy) confirmed that they could provide three strings of lights, each in a different colour, and that these lights are mains powered and dimmable.

The Linux side code stays the same, and drives the keystore... but instead of driving neoPixels we're going to control the whole thing with a DMX dimmer pack. This means we can safely switch mains voltages and easily produce nice cross-fade effects. Best of all, Sniff supports DMX. We just need to assign a channel to each string (1, 2, 3) and then calculate the brightness value for each channel.

We could just set the brightness based on the votes, but that would look dull. Far better to flash them on and off. If we were just switching them I'd use very slow pulse width modulation so that over say 10 seconds, a colour with 60% of the vote would turn on for 6 seconds, while one with only 10% would only turn on for 1.

However we want them to fade in and out... To do that I'd use 0.5*(sin(t)+1) to produce something that goes from dark to light smoothly. If I want it darker I'd square it (numbers less than 1 get smaller, so the signal gets more pointy), or if I want it brighter I'd take the square root (all values get closer to 1).

But how do we control it to represent a share of the vote? Roughly speaking, we want the integral of f(t) to be the share of the vote, where f(t) is of the form (0.5*(sin(t)+1))^p. The question is how p relates to the integral. Integrals are hard, so let's just write a program to do it:

make p number
make x number
make total number
make baseVal number

when start
.set p to 0
.repeat 10
..set total to 0
..set x to 1
..repeat 360
...set baseVal to ((sin of x)+1)*0.5
...if baseVal > 0
....change total by e^ of (p*ln of baseVal)
...change x by 1
..say join join [p] "," [total]
..change p by 1

Throwing this into a spreadsheet, and plotting a graph or two, reveals that the integral is inversely related to the power (pretty obviously - higher powers mean a spikier graph, with less area). A bit more tweaking shows that plotting the power against 1/(integral^2) looks like this:

That's not linear, but it's pretty close! Roughly speaking the power we need is proportional to 1/(area*area) - equivalently, the area (power/brightness - let's not sweat that detail) falls off as 1/sqrt(p). The remaining curvature is far less than the non-linearity we're going to see in the lighting controller, and the lights themselves.
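As a quick sanity check (a Python sketch, not part of the original project), we can estimate the same integral numerically and compare it with 1/sqrt(p); for this curve the mean of (0.5*(1+sin t))^p over a full cycle is well approximated by 1/sqrt(pi*p):

```python
import math

def mean_brightness(p, steps=3600):
    """Numerically average (0.5*(1+sin t))^p over one full cycle."""
    total = 0.0
    for i in range(steps):
        t = 2 * math.pi * i / steps
        total += (0.5 * (1 + math.sin(t))) ** p
    return total / steps

# p=1 gives exactly 0.5; larger p gives spikier, dimmer curves
for p in (1, 2, 4, 9):
    area = mean_brightness(p)
    # compare with the 1/sqrt(pi*p) approximation
    print(p, round(area, 4), round(1 / math.sqrt(math.pi * p), 4))
```

Inverting that relationship is exactly what the control code does below: power = 1/(share*share), so a bigger share of the vote gives a lower power and hence a fatter, brighter pulse.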

Here's the test DMX code which flashes channels 1, 2 & 3 at slightly different speeds, but with a brightness ratio of about 0.2, 0.3, 0.5:
make dmx device
make dmxData list of number

make val number
make power number

when pow
.if power=0
..set val to 1
.else
..set val to e^ of (power * ln of val)

make share number
make freq number
make period number

when start
.add 0 to dmxData
.add 0 to dmxData
.add 0 to dmxData
.add 255 to dmxData
.set period to 8
.set freq to 360/period
.broadcast runDMX
.forever
..set share to 0.2*1.5
..set power to 1/(share*share)
..set val to 0.5*(sin of (timer * freq )+1)
..broadcast pow and wait
..replace item 1 of dmxData with val*255
..set share to 0.3*1.5
..set power to 1/(share*share)
..set val to 0.5*(sin of (timer * freq*1.1)+1)
..broadcast pow and wait
..replace item 2 of dmxData with val*255
..set share to 0.5*1.5
..set power to 1/(share*share)
..set val to 0.5*(sin of (timer * freq*0.9)+1)
..broadcast pow and wait
..replace item 3 of dmxData with val*255
..wait 0.02 secs

when runDMX
.forever
..tell dmx to "tick"

We actually set the share to be slightly bigger than it should be, as that allows us to increase the amount of light slightly: we want the lights on a bit more than they statistically should be - we don't want a dark Xmas tree! To run the DMX we just calculate the values for each channel and put them in the dmxData list. The dmx "tick" runs in the background and keeps everything refreshed. Running this code shows that our powered-sine approximation works pretty well, at least for our small test installation, using a small DMX par... It might need some tweaking on the final tree!


So now we put all that together: the Linux side pulls down the values, the AVR gets those via Bridge, and outputs them to DMX. The DMX goes to a 4 channel dimmer pack, and that controls three separate strings of lights on the tree... Simples!

Final tweaks were to change the authentication to use the appropriate account credentials, and to search for the keywords we'd been asked to pick out. The DMX shield needed some stackable headers so that it would clear the Yun's USB port, and the Sniff Bridge code needed a bit of a tweak so that it would boot properly on power up.

The final hurdle was a new bridgeSystem device that allows you to run code on the Yun Linux side, from the AVR side. This is built into Arduino bridge, but needed to be added to the Sniff Bridge.

With all that in place we're ready for T-Day...

T Day

The tree arrives on Saturday, and we've been promised 3 mains plugs, which we can dim. I've ordered the necessary adapter cables, so they'll plug straight into the dimmer, so in principle it should all plug in and go...

We arrived as the Xmas decorators (really, that's the name of the company!) were finishing up. The lights plugged into the dimmer, and worked fine on a simple chase sequence. On some of the other settings there was some serious flickering, but then I realised I'd configured the dimmer pack incorrectly; I changed some settings and the three strands of lights all started to work perfectly! Plugging the DMX into the Arduino, and again they did exactly what they were supposed to... It doesn't hurt that I'd had this thing running tests at home for the last week!

The final hurdle was simply to connect the Yun to the Uni network. Unfortunately the nearest ethernet point is 20m away, and I only brought a 2m cable. I was expecting that, so had planned to use WiFi, but the Uni uses WPA2-Enterprise, which isn't supported on the Yun (it's apparently possible, but not what we want to be setting up under pressure), so we're going to have to use Ethernet after all.
I've put in a call to get IT and estates to install an ethernet cable (and some power - there are no power sockets there either, so we were testing using a dodgy extension lead). Hopefully that will go in tomorrow morning, and we'll be all ready for tomorrow's big launch!

1st December

I get into Uni, and immediately bump into Nat... IT claim they don't have 20m of Ethernet cable(!!) so he's off to the shops to buy one!! He returns 45 minutes later with a 25m cable, we plug it in and...

At this point I should note that Marketing don't appreciate the benefits of a "soft" launch... this tree is advertised all over campus, and there are banners up either side of the tree explaining what it's supposed to do... worse yet, one of the banners has my name on it in pretty large letters! No pressure then...

As I was saying, we plug it in and...

It does exactly what it's supposed to!!!!!!! All you have to do is tweet #BUcourses, #BUsupport or #BUc... WHAT!!!

I've got an email with a very clear attachment which makes no mention of #BUsupport. It uses a completely different hashtag to drive the white lights!!! Fortunately I can make a last minute change to the Linux side shell script which maps #BUsupport to the old Bridge key, and we're back on track.

And after a week of running round we can finally relax! Everyone's really happy with the results, which is a big relief. I'm generally not a big fan of corporate back slapping, but this has been a great fun project, and I can't wait to see what we can do next!

I'll be posting all the code in the next week or so, as it needs some tweaks to the Sniff device code - if you need it before then, drop me an email or contact me via twitter.

Saturday, 22 November 2014

More Weather - there's a lot of it about!

Build a machine to measure wind speed, said the teacher... OK. In that case I'm going to make my machine using this anemometer I just happen to have in one of the many boxes of "interesting components" that are lying around Sniff labs. Of all places it came from Maplin (kind of a UK Radio Shack), who typically charge twice as much as anyone else for stuff, but in this case they have it as a spare part for their weather stations, currently retailing for £2.49. That's not a typo - stupidly cheap. They also have a rain gauge for £4.99.

The anemometer contains a simple switch which closes once per revolution. Connecting one side of the switch to ground, we wire the other side to an Arduino, with a pull-up to 5V. Of course I did this using the preferred 3-pin DuPont header, so it plugs straight into a sensor shield!

From there it seemed pretty obvious to use an i2c LCD display (my current favourite device - I use it in every project). I powered it from a USB mains charger, and put the wires through the window, out onto the balcony where the anemometer was installed.

make anemometer digital input 2
make count number

when start
.forever
..wait until anemometer
..wait 0.01 secs
..change count by 1
..wait until not anemometer
..wait 0.01 secs

The code to actually process the input is nice and simple - wait for the switch to go high, increment the count, wait for it to go low again. There are a couple of delays in there in case the switch bounces. I've not fine-tuned them but it works pretty solidly.

make i2c device
make display lcdi2c device
make message string

make fastest number

when start
.set fastest to 0
.forever
..set count to 0
..wait 4 secs
..set count to count / 4
..if count > fastest
...set fastest to count
..tell display to "clear"
..set message to join "now:" [count]
..tell display to "show"
..set message to join "max:" [fastest]
..tell display to "show"

As we're only measuring whole revolutions, we average the speed over 4 seconds. That seemed a good compromise - too short a measuring period and we wouldn't be as accurate, but any longer and we might be averaging the speed over several gusts. It would be interesting to know if there's an official way to do this - our local "real" weather station records both average speed and maximum gust speed, so there's clearly more we could do here.

We record the maximum speed as well as displaying the current speed. This is in revolutions per second. I happen to know (because I looked it up, but you could measure the circumference) that one revolution per second is about 1.5mph. Over the few days we've been running it, the max recorded each day has been about 6.5 or 7 revolutions per second, which is about 10.5mph. Converting to metric that's about 16.5kph.
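The conversion is simple enough to sketch (Python for illustration; the 1.5mph-per-rev/s calibration factor is the one quoted above):

```python
def rps_to_mph(revs_per_second, mph_per_rps=1.5):
    """Convert anemometer revolutions per second to miles per hour."""
    return revs_per_second * mph_per_rps

def mph_to_kph(mph):
    """Convert miles per hour to kilometres per hour."""
    return mph * 1.60934

wind_mph = rps_to_mph(7)           # ~7 revs/sec at the peak
wind_kph = mph_to_kph(wind_mph)    # ~10.5 mph, a little under 17 kph
```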

Checking with the official records for the last few days:

Wind speeds have been peaking at around 17kph each day!! There are a couple of spikes we missed, but our anemometer's location is on the balcony of the Sniff Mansion, which was chosen for convenience and safety of installation, rather than accuracy! I'm calling that a pretty good win.

Just for fun I added a ds18 to record min and max temperature, and of course we could easily log all this to an SD card, but maybe that's for next time!

Tuesday, 11 November 2014

Ray Tracing in Sniff (on Arduino and Pi).

There are two ways of producing 3D computer graphics: ray tracing and scanline. Scanline approaches draw one object at a time, while ray tracing draws one pixel at a time. For big scenes like the ones used in movies this makes scanline more efficient, as you don't need to hold millions of objects in memory - just the one you're drawing. Almost all the Pixar stuff to date has been scanline, and only in the last few years have machines become powerful enough for ray tracing to finally make it into the movies, as it can produce some optical effects which are hard to do otherwise - it's just slow.

However to make 3D graphics on an Arduino we have a different problem - we can't hold the pixels in memory! That makes scanline pretty much impossible. However if we're prepared to wait, and keep our scene simple enough that it can fit in memory, then ray tracing is perfectly possible.

make spheres list of number
when makeScene
.repeat 10
..add pick random -15 to 15 to spheres #X
..add pick random -15 to 15 to spheres #Y
..add pick random 40 to 80 to spheres #Z
..add 1*(pick random 1 to 10) to spheres #Radius
..add 0.01*(pick random 0 to 100) to spheres #red
..add 0.01*(pick random 0 to 100) to spheres #green
..add 0.01*(pick random 0 to 100) to spheres #blue

We can make a simple scene by storing the parameters to describe 10 spheres in a list. Ideally we'd use one list for each parameter (sphereX, sphereY etc) but that would use up too much memory. Storing them like this only makes one list, so it will fit on an Uno.

Our main program then becomes:

.set displayX to 0
.repeat until displayX>xRes
..set displayY to 0
..repeat until displayY>yRes
...set originX to 0
...set originY to 0
...set originZ to 0
...set dirX to displayX-(xRes/2)
...set dirY to displayY-(yRes/2)
...set dirZ to imageDist
...broadcast normalizeDir and wait
...broadcast trace and wait
...tell display to "setPixel"
...change displayY by 1
..change displayX by 1

For each pixel in the display we work out a vector dir[XYZ] that a ray from the camera (at origin[XYZ]) would travel along. We set the z component to represent how far the screen is from the viewer, and adjusting that will control the field of view. Then we normalise dir so that it has a length of 1 (not strictly necessary, but usually a good idea). We then call the trace script to figure out what the light along that ray will look like.
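The same ray setup can be sketched in Python (names mirror the Sniff variables; imageDist is the screen distance described above):

```python
import math

def primary_ray(px, py, x_res, y_res, image_dist):
    """Build a normalized direction for the ray through pixel (px, py).
    The camera sits at the origin looking down +z."""
    dx = px - x_res / 2
    dy = py - y_res / 2
    dz = image_dist
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

corner = primary_ray(0, 0, 128, 128, 128)
centre = primary_ray(64, 64, 128, 128, 128)
# the centre pixel's ray points straight down the z axis
```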

when trace
.repeat 2
..broadcast intersectScene and wait
..if bestID =0
...stop script

The important part here is that trace calls intersectScene to figure out what object the ray hit. If it didn't hit anything, then it stops. If it did hit something, then we need to figure out its colour. To do that we'll need more information:

..set hitID to bestID
..set hitX to originX+bestT*dirX
..set hitY to originY+bestT*dirY
..set hitZ to originZ+bestT*dirZ
..set oX to item bestID+0 of spheres
..set oY to item bestID+1 of spheres
..set oZ to item bestID+2 of spheres
..set oR to item bestID+3 of spheres
..set nX to (hitX-oX)/oR
..set nY to (hitY-oY)/oR
..set nZ to (hitZ-oZ)/oR
..set nDotI to -1*(nX*dirX+nY*dirY+nZ*dirZ)
..set vVecX to -1*dirX
..set vVecY to -1*dirY
..set vVecZ to -1*dirZ
..set refX to (dirX+2*nDotI*nX)
..set refY to (dirY+2*nDotI*nY)
..set refZ to (dirZ+2*nDotI*nZ)

intersectScene calculates bestT, which is the distance along the ray until we hit something, so we can find hit[XYZ] by moving along the ray from the origin. Now we know where we hit the sphere, we can find the surface normal (the vector pointing directly away from the surface) by finding the vector from the centre of the sphere to the hit point (and dividing by the radius to normalise it).

nDotI is useful, as it tells us to what extent the surface is facing the viewer. vVec is the vector from the hit point towards the observer, and ref[XYZ] is the mirror reflection direction.
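Those few lines pack in a lot of vector maths; here's the same hit-point, normal, and mirror-reflection calculation as a Python sketch:

```python
def shade_geometry(origin, direction, centre, radius, best_t):
    """Given a ray (origin, direction) hitting a sphere of the given centre
    and radius at parameter best_t, return the hit point, unit normal,
    and mirror reflection direction."""
    hit = tuple(origin[i] + best_t * direction[i] for i in range(3))
    # normal: from sphere centre to hit point, divided by radius to normalise
    n = tuple((hit[i] - centre[i]) / radius for i in range(3))
    # nDotI tells us how much the surface faces the viewer
    n_dot_i = -sum(n[i] * direction[i] for i in range(3))
    # mirror direction: angle of incidence = angle of reflection
    ref = tuple(direction[i] + 2 * n_dot_i * n[i] for i in range(3))
    return hit, n, ref

hit, n, ref = shade_geometry((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, 4.0)
# head-on hit: the normal points back at the viewer,
# and the ray reflects straight back the way it came
```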

Any light hitting the surface is going to be attenuated by the surface colour, so:
..set weightR to (item hitID+4 of spheres)*weightR
..set weightG to (item hitID+5 of spheres)*weightG
..set weightB to (item hitID+6 of spheres)*weightB

Let's assume there's a little bit of light hitting the surface randomly, just because light bounces around in the real world. We call this ambient light in computer graphics - it's a bit of a bodge, but it stops black parts of the scene being completely black, and we just add a little of it to the pixel colour:

..change pixelR by 0.1*weightR
..change pixelG by 0.1*weightG
..change pixelB by 0.1*weightB

For more advanced surface illumination we need some lights. We can store them in a list just like we did the spheres:
..set lightCount to 1
..repeat until lightCount > length of lights
...set lightX to item lightCount of lights
...set lightY to item lightCount+1 of lights
...set lightZ to item lightCount+2 of lights
...set lightPower to item lightCount+3 of lights
...change lightCount by 4

And now we calculate a new ray from the hit point to the light:
...set originX to hitX
...set originY to hitY
...set originZ to hitZ
...set dirX to lightX-hitX
...set dirY to lightY-hitY
...set dirZ to lightZ-hitZ

We can use that vector to calculate how much of the lights energy hits the surface:
...set lightAtten to lightPower/(dirX*dirX+dirY*dirY+dirZ*dirZ)

Imagine a sphere with a point light source at its centre. All of the energy from the source hits the inside of the sphere, and that energy is shared over the sphere's surface area. If we doubled the radius of the sphere, its surface area would increase by a factor of 4 - area is proportional to radius squared - so we get the inverse square law: lights get dimmer with the square of distance.
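That's exactly what the lightAtten line computes; a Python sketch of the same inverse-square falloff:

```python
def light_attenuation(light_power, dx, dy, dz):
    """Inverse-square falloff: power divided by squared distance to the light.
    (dx, dy, dz) is the unnormalised vector from the hit point to the light."""
    return light_power / (dx * dx + dy * dy + dz * dz)

near = light_attenuation(100, 0, 0, 5)    # light 5 units away
far = light_attenuation(100, 0, 0, 10)    # doubling the distance...
# ...quarters the received energy
```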

Now we normalise again, and calculate N.L
...broadcast normalizeDir and wait
...set nDotL to (nX*dirX+nY*dirY+nZ*dirZ)

N.L tells us if the surface is facing the light source - if it's not, we move on.

...if nDotL > 0
....broadcast intersectScene and wait
....if bestID=0
.....set hVecX to vVecX+dirX
.....set hVecY to vVecY+dirY
.....set hVecZ to vVecZ+dirZ
.....set len to sqrt of (hVecX*hVecX+hVecY*hVecY+hVecZ*hVecZ)
.....set hVecX to hVecX/len
.....set hVecY to hVecY/len
.....set hVecZ to hVecZ/len
.....set nDotH to (nX*hVecX+nY*hVecY+nZ*hVecZ)
.....if nDotH>0
......set nDotH to 10^ of (10* log of nDotH)
......change pixelR by lightAtten*nDotL*weightR*nDotH
......change pixelG by lightAtten*nDotL*weightG*nDotH
......change pixelB by lightAtten*nDotL*weightB*nDotH

If the surface is facing the light, we fire the ray into the scene, and hope it doesn't hit anything. If it did, then we're in shadow. If we get this far we know the light actually hits the surface, so we need to calculate how much is going to get reflected towards us - this is called the BRDF.

There are lots of different ways of calculating this - different surfaces have different BRDFs. It's what makes metal and plastic look different, even when they're the same colour. Here we're using a simple metallic-style BRDF.

We know the direction of the viewer and the light. To get a perfect reflection of the light towards the viewer, the surface normal would have to be exactly half way between them (angle of incidence = angle of reflection). But chances are this isn't the case. Instead we can ask what N would need to be to get a perfect reflection - we calculate this and call it hVec.

Now we need to ask how similar N and H are. It turns out that's really easy to calculate using a dot product. A good way of thinking of the dot product is "how alike are these two vectors?". 1 means they're the same, 0 means 90 degrees apart, -1 means opposite (assuming they're normalized). So we take the dot product. Raising that to a power means that we get a value near 1 only when they're very similar. Then we use that to add some more colour to the pixel.
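Sketching that half-vector specular term in Python (the power of 10 matches the 10^ of (10 * log of nDotH) trick in the Sniff code, which is just nDotH^10):

```python
import math

def specular(n, v, l, power=10):
    """Blinn-style specular term: how closely the surface normal n matches
    the half vector between the view direction v and light direction l.
    All three inputs are assumed to be unit vectors."""
    h = [v[i] + l[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in h))
    h = [c / length for c in h]
    n_dot_h = sum(n[i] * h[i] for i in range(3))
    # raising to a power sharpens the highlight
    return n_dot_h ** power if n_dot_h > 0 else 0.0

# normal exactly half way between view and light: full highlight
bright = specular((0, 0, 1), (0, 0, 1), (0, 0, 1))
```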

Having calculated the "local" illumination - light from light sources hitting the surface - we add some "global" illumination: light which bounces off more than one surface. If we were doing this in C we might use recursion to call trace again, but it's actually more efficient to just set up the new direction and go back round the loop:
..set dirX to refX
..set dirY to refY
..set dirZ to refZ
..broadcast normalizeDir and wait

The actual intersection code is pretty simple - we just go through each sphere in turn, and check if the ray hits it. If it does, then we see if it's closer than the closest hit we've found so far. We also check that it's not too close to the starting point - if we're firing a ray from the surface of a sphere, we don't want to hit that same sphere due to rounding errors.

As for the actual sphere intersection itself - it looks complex, but it's straight out of the textbooks so I won't discuss it here.

The final interesting bit of code is in calculating the pixel colours. So far we've been adding up pixel[RGB], and we expect to have a value somewhere between 0 and 1 (though values higher than 1 are totally OK too!), but in Sniff we use colour values for each channel as whole numbers between 0 and 7 - this is clunky, but means you can set rough colours quickly and easily... if we think of a better way then we'll use it. To turn our light value into a Sniff colour we use the code:
...set pixelR to round (pixelR*7+(pick random 0 to 100)*0.01-0.5)
...if pixelR<0
....set pixelR to 0
...if pixelR>7
....set pixelR to 7
(repeat for G and B)
...set displayColor to 100*pixelR
...change displayColor by 10*pixelG
...change displayColor by pixelB

We take our value and scale it to the range 0-7. Then we add a random value in the range -0.5 to 0.5, before rounding to the nearest whole value. Surprisingly, this randomness makes the whole image look much better, as it hides banding artefacts by turning them into noise. Statistically the error in the image is unchanged, but rather than getting blocks of pixels which are VERY wrong, we get a little noise shared evenly over the whole image, which looks MUCH nicer.
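This is classic dithering; a Python sketch of the same quantise-with-noise step and the 3-digit colour packing (random.random() stands in for Sniff's pick random):

```python
import random

def quantize(channel):
    """Scale a 0-1 light value to 0-7, adding +/-0.5 of noise before
    rounding so banding turns into evenly spread noise."""
    v = round(channel * 7 + random.random() - 0.5)
    return max(0, min(7, v))

def pack(r, g, b):
    """Pack three 0-7 channels into Sniff's 3-digit colour number."""
    return 100 * r + 10 * g + b

colour = pack(quantize(0.9), quantize(0.1), quantize(0.5))
# always a valid Sniff colour between 0 and 777; the exact
# digits jitter slightly from frame to frame because of the noise
```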

And there you have it. An Arduino ray tracer in Sniff.

As this is purely calculation driven, the code works essentially unchanged on any Sniff machine. To move the code onto a Raspberry Pi, just replace the first line:
make display tft device
with:
make display framebuffer device
and you get a version which works on the Pi (or other Linux).

Running on the Pi is about 100 times faster. Hardly surprising, as the Pi is running at 50 times the clock speed. More importantly, though, the Pi has an FPU - a floating point unit - making the numerical calculations massively faster.

Here's the code

Sorry I've had to gloss over a few parts of this - there's a lot of maths and physics involved, and rendering is a pretty big topic to squeeze into one blog post (I could write a book... or two!). Sniff isn't really the ideal language for it - it would be much easier in a language with data structures and proper functions (though it's not the worst) - but it was fun to try.

Hopefully I've explained most of what's going on!

Saturday, 8 November 2014


Here's a quick science experiment for a wet Saturday afternoon...

When things are moving too fast to see, we can take a picture of them to capture a single moment of the movement. We can do the same thing without the camera, by simply using the camera's flash. If you're in a dark-ish room, then you'll see a single bright frozen instant of a moving subject.

But what if something is spinning - like a power drill, an engine or a wheel? It looks like a blur, so it's hard to see what's going on, but if we could freeze the image every time the rotation got to the same place, it would look like it was stationary. We can do that by flashing a bright light at exactly the same speed as the object is spinning: if an engine is rotating at 100 revs per second then flashing a light at 100Hz would let us see the motor clearly, as if it were still. If we fire it at 99Hz then on each flash the engine will have got a little further around - and it will look like the object is spinning at the difference of the two speeds - once per second!
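The apparent speed is just the difference between the rotation rate and the flash rate, which is easy to sketch:

```python
def apparent_rps(rotation_hz, strobe_hz):
    """Apparent revolutions per second seen under a strobe light.
    When the strobe exactly matches the rotation, the object appears frozen."""
    return rotation_hz - strobe_hz

frozen = apparent_rps(100, 100)   # flashes in step: looks stationary
slow = apparent_rps(100, 99)      # slightly slower strobe: drifts forward at 1 rev/sec
```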

To make this happen I hooked up a couple of LEDs to Arduino pins 2 and 3, so we could flash them. We can also use a potentiometer connected to A0 to adjust the speed. A few lines of Sniff:

make led1 digital output 2
make led2 digital output 3
make pot analog input A0

make delay number
when start
.forever
..set delay to pot*0.2
..wait delay secs
..set led1 to on
..set led2 to on
..wait 0.005 secs
..set led1 to off
..set led2 to off

And we've got a stroboscope!
Here we've got a Lego cog held in an electric drill. It's spinning fast enough that it would normally look like a blurred disk, but with the Arduino slowing the movement down it looks as if it's spinning slowly. Update: Try stepping through the video a frame at a time... The video frame rate is faster than the strobe, so you can see the light flashing on and off, capturing the same part of the rotation each time, while the other parts of the cycle are in darkness!

Adjusting the pot to control the delay changes the apparent speed of the motion. You can also play with the 0.005 second delay - making it longer makes everything brighter, but if it's too long then it will look blurry.

The code could easily be developed so you could display and/or set the rate of flashing exactly, so you could measure how fast the drill is actually spinning...
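As a rough sketch of that first idea (untested - flashRate is my own variable, and I'm assuming the delay variable is shared with the script above, and that say can print a number converted with [ ] as in the other examples on this blog), an extra script could print the current flash rate once a second:

make flashRate number
when start
.forever
..#full cycle = off delay + the 0.005 sec flash, so the reciprocal is the rate in Hz
..set flashRate to 1 / (delay + 0.005)
..say join "flashes per second: " [ flashRate ]
..wait 1 secs

Compare that number with the drill's rated speed and you've turned the strobe into a crude tachometer.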

Friday, 7 November 2014

Release 12: Sniffpad -the Sniff IDE, written in Sniff!

Sniff's come a long way really fast since we initially started developing it, but two really important things have been on the TODO list since day 1. Getting a version running on Windows was something we knew was important, but getting the resources to develop it held things back until last month, when we finally released version 11 for Windows.

We're excited now to be able to tick off the other big outstanding feature: Sniff has an IDE! Getting an IDE running was tricky because, while writing command line code that will run on Mac, Linux and Pi is relatively easy (Windows is a bit harder, hence the wait for Win32 Sniff), every platform has its own way of drawing on the screen. It's really hard to write a program which draws on the screen and runs without modification on lots of platforms. There are tools to make it easier, but they're not ideal, and often require the end user to install libraries and the like.

We wanted something that was simple, lightweight, and ran on all the platforms that Sniff ran on... We were stuck. Then last month as part of the win32 work, we wrote a Sniff device that could open a Window. Originally it was intended to be just an alternative to the Linux frame buffer device, but once it was working on win32 and X11, there was a lightbulb moment...

We could write the Sniff IDE in Sniff!

It was perfect - by definition Sniff runs on the platforms we want to run the IDE on! Not only that, but it would demonstrate that Sniff was actually pretty powerful, and could do "real" programming. Developing a program like that would shake bugs out of the system (we found a couple of bugs, and tightened a few other things up), and best of all, once we released it, if you don't like it you can add features yourself, because it's WRITTEN IN SNIFF!

And here it is! It runs identically on Windows and Linux (including Pi). It will also run on Mac, but requires X11 to be installed, which isn't ideal - we're looking at how we can fix that!

Install and setup Sniff as usual, then cd to the directory you're using to keep the code in, and type "sniffpad", and off you go.

On the toolbar on the top are buttons to load and save. These bring up a dialog panel, where you can type in a new filename. You can also use up/down cursor keys here to scroll through existing files, which turns out to be pretty neat.

To run your code on the computer, first you need to compile it using the compile button, then run it with the run button. If "run" doesn't work on a Linux machine then check you have "xterm" installed. It's pretty standard so you should be able to install it using "apt-get" or "yum" if you don't have it already.

If you want to work on Arduino, you can use the next button to compile/download. The terminal button opens a companion program, "sniffterm", which talks to the Arduino via the serial port. This is handy even if you don't use sniffpad.

Finally you can quit, which asks you if you want to save first.

That's really all there is to it! It's pretty basic, but it's not really intended to be a full blown IDE - it's for writing simple programs and getting them running. Most of the Sniff examples are less than 30 lines of code, so fit on a single screen. Unlike the Arduino IDE we don't intend this to be the main dev tool for everyone - Sniff is command line based, and this is a layer on top. Use whichever works for you (and if you want to, use Eclipse, or Xcode to edit Sniff code!). On the other hand if you think there's something missing (copy and paste is the one thing we will be adding soon), then you can load the source for sniffpad up in sniffpad itself, and make it better!

As before Release 12 comes in 2 flavours:
Win32 Sniff includes only windows files, and uses DOS cr/lf
Generic Sniff includes all platforms with Unix style text files

Monday, 3 November 2014

Spirit of Radio: RF24 wireless comms!

There are a bunch of ways of communicating with an Arduino running Sniff. You could use ethernet, but you'd need cables. You could use an RC transmitter or IR, but they're only one way. Wifi on Arduino is expensive. None of them is quite right - what we want is something cheap, fast, simple and bi-directional, so that for example I could set up the weather station at the bottom of the garden, and collect the results in the house...

Enter the RF24. These amazing little radio transceivers cost about $1 on eBay, and contain everything you need to do some pretty nifty communications. The only downside that was holding them back was that they need 7 wires to connect them to an arduino - not a problem in principle, but a pain to hook up two or three of them for me to test. Then I found the Funduino Joystick shield - about $5 from the Asian electronics online superstore of your choice. For some reason that's not clear these have an RF24 socket!?! Odd if you want a joystick, but really handy if you need to hook up several RF24's...

With the boards acquired and RF24's plugged in we're good to go. I set up two boards: one called server which listens for a message, and sends a reply back (just confirming all was well), and the other as a client which sends a message when a button is pressed.

Here's the client first:
make spi device
make transmitter rf24 device
make message string
make radioChannel number

make buttonA digital input 2

make receivedMessage string
when start
.set radioChannel to 2
.tell transmitter to "setReceiveChannel"
.forever
..tell transmitter to "readString"
..if not message = ""
...set receivedMessage to message
...say receivedMessage

Here's the first part of the program where we set up the RF24 device and tell it to listen on channel 2. Internally rf24's support multiple frequencies, and allow multiple senders and receivers to share the same channel without their messages getting mixed, but for Sniff we simplify that - when you listen on channel 2, you will receive all the messages sent on channel 2.

Having set the channel we go into a loop, and tell the transmitter to try to read a string. If it receives one, it makes a copy, and prints out the received string.

Transmitting is just as easy:
make messageToSend string
when start
.set radioChannel to 1
.tell transmitter to "setTransmitChannel"
.forever
..if not buttonA
...set messageToSend to [ timer ]
...set message to messageToSend
...tell transmitter to "writeString"
...say messageToSend
...wait until buttonA

We select channel 1 as the transmit channel, then wait for the user to press one of the buttons on the Funduino JS shield (buttons always come in handy). When the button is pressed we create a messageToSend, and assign it to the variable message, which we transmit using the writeString command. Then we print out the message and wait until the user stops pressing the button. Messages are limited to 32 characters, so keep them short.

All that messing around copying message to and from other strings is because we have two scripts changing message at the same time - if one script sets it to something and the other then changes it, the first might get confused. The way Scratch and Sniff handle this means that normally we don't have to worry about it, but occasionally it can trip you up, so to be safe we've created and displayed the message using a different variable.

Setting up the server is easier:
make spi device
make transmitter rf24 device
make message string
make radioChannel number

when start
.set radioChannel to 1
.tell transmitter to "setReceiveChannel"
.set radioChannel to 2
.tell transmitter to "setTransmitChannel"
.forever
..tell transmitter to "readString"
..if not message = ""
...say message
...set message to join "echo: " message
...#The other end has just finished transmitting
...#Give it 20mS to start listening again
...wait 20 millisecs
...tell transmitter to "writeString"

We set up the send and receive channels (noting that the server listens on 1 and sends on 2). We wait until we receive a message, add the word "echo" to the beginning, and then send it back.

The only gotcha here is that strictly the rf24 can't be both a transmitter and a receiver... but it can switch back and forth pretty quickly. Normally we set it up to listen, but when it needs to transmit it has to stop listening for a while, send the message, then switch back to listening. In this case the other end has just sent us a message, so at the instant we receive it, the other radio is probably desperately trying to get back into listening mode asap. If we send a message straight back, then it might not be ready, so we wait just a little while to give it a chance to get ready for us.

When you press the button on the client, it will send the current timer value to the server, which prints it out. The server then sends an acknowledge "echo" back to the client.

You can get more fancy and have multiple clients. They can both send messages to the server, and because both are listening on the same channel, both see the echo replies. You could experiment with different receive channels for each client to avoid this, or simply add something to the echo, so the client can see if it's intended for it.
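For example (just a sketch, not tested - the one-letter IDs are my invention, and I'm assuming Sniff supports Scratch's "letter N of" string operator), each client could prefix its outgoing messages with its own ID, and ignore any echo that doesn't start with that letter:

when start
.set radioChannel to 2
.tell transmitter to "setReceiveChannel"
.forever
..tell transmitter to "readString"
..if not message = ""
...#only act on echoes addressed to client "A"
...if letter 1 of message = "A"
....say message

The server doesn't need to change at all - it just echoes whatever it receives, ID and all.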

And that's it - I'm sure we'll have lots more fun with these now that we've got device support and an easy way to hook them up.

Tuesday, 28 October 2014

Release 11: Sniff for Windows

One of the biggest issues in getting Sniff out to a wider audience has been that until now Sniff has been Unix only: it runs on Mac/Linux and Pi, while we recognise that most schools are running Windows. It's not that we didn't see that as a problem, but rather that there were several very pragmatic reasons for developing on Unix over Windows:

  • Portability: Sniff runs essentially unchanged on all Unix platforms
  • Developer Tool: Unix is just better set up to develop a system like this. It's easier for us, but also easier for users: Apple/Pi/Linux provide all the pieces users need, whereas MS don't.
  • Developer Skills: We like Unix. It's a better platform, so we develop for that first.
However we've finally found the resources to develop a Windows build. Release 11 includes initial support for Windows. In fact R11 comes in two flavours: Generic and Windows. The generic version runs on all platforms, including Windows, but all the text files are in "Unix" format, which may cause problems (if you've worked on cross platform systems before you'll understand), so we've also made a Windows only release which will work better, but only includes the Windows components.

On all platforms you need a C compiler, but this is trickier on Windows than on Unix platforms, as it's not included by default (and MS charge for their Visual Studio, while Apple give away Xcode for free!). However we can use MinGW. This is a Windows version of the tools usually used to develop for Linux. You'll need to download and install it, along with the included MSys tools. Once you've done this you're ready to go!

Download the Sniff Windows version, and unzip it in the MSys home folder (something like c:\mingw\msys\1.0\home\username). This is an odd place to work but it's how MinGW likes to do things. Then open an MSys shell (as you set it up when you installed MinGW), and type "ls" - you should see the Sniff folder. "cd Sniff", then type ". ./setup" just as you would on any other platform to get everything set up!

From there on everything's the same. Use the MSys shell to compile and run the examples using the same commands you'd use in Unix (cd examples/Generic; sniff hello.sniff; ./hello). Arduino works (using uno-sniff). Most of the examples that previously were in the Unix folder are now in a folder called Hosted, as they run fine on Win32.

Once you've compiled a program with Sniff for Windows, it's a regular Windows program, so you can run it from the DOS CMD prompt, or even double click on it to make it go!

As this is the first release for Windows, there are likely to be a few glitches and problems, simply because the platform is different (and of course there are different versions of Windows), so any feedback is appreciated.

Saturday, 25 October 2014

Arduino phone home

Every month or so someone gets 5 minutes of internet fame claiming they just invented their own smart phone using a Raspberry Pi and a few hundred pounds in parts... Trouble is that's a lot of money for something that's actually not a very good smart phone. I'm far more interested in building a dumb phone. For about £30 you can build a phone based on an Arduino. If you wanted to you could add a touch screen, but it's far more fun to think about what you could do with a phone which has no screen at all, but is attached to one of your projects: hook it into a temperature sensor, and it will phone you when your greenhouse gets too cold for your tomatoes. Or use a humidity/water depth gauge and get it to send you a text to let you know your garage is flooding...

To make a phone you basically need a GSM shield. These used to be quite expensive, but the price has fallen a lot, and you can get one for about £20 now. There are some variations, but most of them are based around the SIM900 chipset. The main difference between them is that they may use different pins: The Sniff device is set up to use pin8 to control power on, 9 reset, 2 receive and 3 transmit. These are easily changed in lib/Xphone.c if you have a slightly different model.

I also hooked up an i2c lcd display, so I could see what's going on. We can set up the drivers for the phone and the display in Sniff as:

make i2c device
make display lcdi2c device 4
make message string
make displayFlush boolean
make phone device

make phoneNumber string

I was using a 4 line LCD - hence the 4 config parameter to the lcdi2c device.

When you start up, the first thing that happens is that the system tries to automatically turn the phone on and connect to the network. This can take a few seconds, so we start by printing a message and then waiting for the phone to tell us it's ready:

when start
.set message to "Sniff Phone"
.tell display to "show"
.repeat until message="ready"
..wait 0.5 secs
..tell phone to "checkStatus"
.tell display to "show"

Once "checkStatus" returns the "ready" message we're good to go. From here on in there are basically 4 things we might want to do: answer a phone call, make a phone call, send an SMS or receive an SMS. The GSM board also supports mobile data, but at least in the UK that can get expensive if you're not on the right contract. By contrast, for £5/month I can send and receive unlimited texts, receive calls, and make unlimited calls to phones on the same network. If I only receive calls and texts, and maybe send a few texts, then I can probably make that £5 last all year.

The easiest thing to do is send an SMS:
.set phoneNumber to "+441234111222"
.set message to "Sent from my Arduino"
.tell phone to "sendSMS"
.tell display to "show"

We just set the number and the message, and then tell the phone to "sendSMS". If all goes well then the variable message will be set to "Sent", or if there's a problem it will be set to "Failed".

Calling a number is almost as easy:
.set phoneNumber to "+441234111222"
.set message to join "Calling:" phoneNumber
.tell display to "show"
.tell phone to "startCall"

You can plug a headset into the GSM shield, so you can chat to someone! One project we've discussed is for someone we know who is disabled, and unable to dial a phone: it would be trivial to make a custom phone which dials their emergency contact at the press of a single large button.

During a call you can check its status by calling:
.tell phone to "checkStatus"
.tell display to "show"
.set message to join "busy:" phoneNumber
.tell display to "show"
This works for incoming calls too.

When you're done just hang up:
.tell phone to "endCall"

Receiving calls and texts is a bit harder, as they can arrive at any time, so you need to check for them:
when start
.forever
..broadcast checkSMS and wait
..wait 0.5 secs

To actually check the message:
when checkSMS
.tell phone to "checkSMS"
.if not message = ""
..tell display to "show"
..set message to join "from:" phoneNumber
..tell display to "show"
..tell phone to "deleteSMS"

We try to receive the message by telling the phone to "checkSMS", which will fill in the message and phoneNumber (though note that on AVRs Sniff strings are limited to 128 characters, which is shorter than the longest possible message - there's simply not enough memory on an Arduino to hold long strings). Here we print out the message and number, then delete the SMS. If you don't delete it, it will be there next time you check.

You could use the contents of the message to do pretty much anything - for example you could send a text to the arduino, and use its contents to change the colour of your neoPixel christmas tree lights (When I show this code in CPD groups with teachers I also point out it could be used to do unpleasant stuff that the chemistry teacher could help them with, but its probably not in my best interests to make that joke online...).
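As a sketch of that idea (untested - the pin and the exact command words are my own choices), the checkSMS handler above could act on the text it receives:

make led digital output 13
when checkSMS
.tell phone to "checkSMS"
.if not message = ""
..#react to the text of the SMS - "led on"/"led off" are made-up commands
..if message = "led on"
...set led to on
..if message = "led off"
...set led to off
..tell phone to "deleteSMS"

You'd probably also want to check phoneNumber against a list of numbers you trust, so a stranger can't control your hardware by texting it.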

The final thing we can do is answer a phone call. These have to be handled slightly differently to texts: while texts can always be stored for later, calls pretty much need to be handled straight away.

.repeat until not message = "ready"
..wait 0.5 secs
..tell phone to "checkStatus"
..if message="ringing"
...set message to "Incoming Call"
...tell display to "show"
...set message to join "From:" phoneNumber
...tell display to "show"
...tell phone to "answerCall"

When we call "checkStatus" it will return "ringing" if there's an incoming call. We can also get the caller ID. If we want to answer, we just tell the phone to "answerCall".

We can wait for the phone to either stop ringing, or for the call to end (busy status) by waiting for the status to go back to "ready":

...repeat until message="ready"
....tell phone to "checkStatus"
....wait 0.5 secs
...set message to "call ended"
...tell display to "show"

And that's all the bits you need to integrate phone calls and texts into your app. All of these are demonstrated in the sPhone.sniff example included in the current Sniff release (along with the necessary device files). sPhone isn't a complete phone, but rather all the code here broken into parts you can use for yourself, and a general demo of the phone functions.

Let's put this together into some simple code that waits for a call, and then sends a text back to that number. You could use this to remotely check the status of an experiment, as it only sends you data when you (or anyone else) asks for it:

make lastPhoneNumber string

when start
.forever
..tell phone to "checkStatus"
..repeat until message = "ringing"
...wait 0.5 secs
...tell phone to "checkStatus"
..set lastPhoneNumber to phoneNumber
..repeat until message = "ready"
...wait 0.5 secs
...tell phone to "checkStatus"
..set phoneNumber to lastPhoneNumber
..set message to "hello from phone"
..tell phone to "sendSMS"

Let's assume you've set everything up as before. Now we wait until the status becomes "ringing", indicating that there's a call. This also sets the phoneNumber variable, which we copy to lastPhoneNumber (strictly we don't need to, but it's safer and clearer). Then we just wait for the call to go away, and send out a text containing whatever data you think the user wanted.

One final obligatory comment: Don't make an auto-dialler which calls random people. Only use it with your own phones, or you could make a lot of people unhappy.