There are two ways of producing 3D computer graphics: ray tracing and scanline. Scanline approaches draw one object at a time, while ray tracing draws one pixel at a time. For *big* scenes like the ones used in movies this makes scanline more efficient, as you don't need to hold millions of objects in memory - just the one you're drawing. Only recently have machines got powerful enough to ray trace significant amounts of movie footage - almost all the Pixar stuff to date has been scanline. However, in the last few years ray tracing has finally made it to the movies, as it can produce some optical effects which are hard to do otherwise - it's just *slow*.
However, to make 3D graphics on an Arduino we have a different problem - we can't hold the *pixels* in memory! That makes scanline pretty much impossible. However, if we're prepared to wait, and keep our scene simple enough that it can fit in memory, then ray tracing is perfectly possible.

make spheres list of number

when makeScene

.repeat 10

..add pick random -15 to 15 to spheres #X

..add pick random -15 to 15 to spheres #Y

..add pick random 40 to 80 to spheres #Z

..add 1*(pick random 1 to 10) to spheres #Radius

..add 0.01*(pick random 0 to 100) to spheres #red

..add 0.01*(pick random 0 to 100) to spheres #green

..add 0.01*(pick random 0 to 100) to spheres #blue

We can make a simple scene by storing the parameters describing 10 spheres in a list. Ideally we'd use one list for each parameter (sphereX, sphereY, etc.) but that would use up too much memory. Storing them like this makes only one list, so it will fit on an Uno.
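To see the idea outside of Sniff, here's a rough Python sketch of the same flat-list storage scheme. The names (`make_scene`, `sphere`, `STRIDE`) are my own for illustration - the point is that one list holds 7 numbers per sphere, rather than 7 separate lists.

```python
import random

STRIDE = 7  # x, y, z, radius, red, green, blue - one record per sphere

def make_scene(n=10):
    """Build a flat list of sphere parameters, mirroring the Sniff
    'spheres' list: 7 consecutive numbers per sphere."""
    spheres = []
    for _ in range(n):
        spheres.append(random.uniform(-15, 15))  # X
        spheres.append(random.uniform(-15, 15))  # Y
        spheres.append(random.uniform(40, 80))   # Z
        spheres.append(random.uniform(1, 10))    # radius
        spheres.append(random.random())          # red
        spheres.append(random.random())          # green
        spheres.append(random.random())          # blue
    return spheres

def sphere(spheres, i):
    """Return the i-th sphere's 7-number record (0-based)."""
    return spheres[i * STRIDE:(i + 1) * STRIDE]
```

On a machine with real data structures you'd use a list of objects, but a flat list keeps the memory overhead down to the bare numbers.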

Our main program then becomes:

.set displayX to 0

.repeat until displayX>xRes

..set displayY to 0

..repeat until displayY>yRes

...set originX to 0

...set originY to 0

...set originZ to 0

...set dirX to displayX-(xRes/2)

...set dirY to displayY-(yRes/2)

...set dirZ to imageDist

...broadcast normalizeDir and wait

...

...broadcast trace and wait

...

...tell display to "setPixel"

...change displayY by 1

..

..change displayX by 1

For each pixel in the display we work out a vector dir[XYZ], that a ray from the camera (at origin[XYZ]) would travel along. We set the z component to represent how far the screen is from the viewer, and adjusting that will control field of view. Then we normalise dir so that it has a length of 1 (not strictly necessary, but usually a good idea). We then call the trace script to figure out what the light along that ray will look like.
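The ray setup and normalisation steps can be sketched in a few lines of Python (the function name `primary_ray` is mine; the maths matches the Sniff code above):

```python
import math

def primary_ray(px, py, x_res, y_res, image_dist):
    """Direction of a ray from a camera at the origin through pixel
    (px, py). image_dist is the screen-to-viewer distance: larger
    values give a narrower field of view."""
    dx = px - x_res / 2   # offset from the centre of the screen
    dy = py - y_res / 2
    dz = image_dist
    # normalise so the direction has length 1
    length = math.sqrt(dx*dx + dy*dy + dz*dz)
    return (dx / length, dy / length, dz / length)
```

The centre pixel of the display gives the direction (0, 0, 1) - straight down the z axis - and pixels further from the centre tilt the ray outwards.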

when trace

.repeat 2

..broadcast intersectScene and wait

..if bestID=0

...stop script

..

The important part here is that trace calls intersectScene to figure out what object the ray hit. If it didn't hit anything, then it stops. If it *did* hit something, then we need to figure out its colour. To do that we'll need more information:
..set hitID to bestID

..set hitX to originX+bestT*dirX

..set hitY to originY+bestT*dirY

..set hitZ to originZ+bestT*dirZ

..set oX to item bestID+0 of spheres

..set oY to item bestID+1 of spheres

..set oZ to item bestID+2 of spheres

..set oR to item bestID+3 of spheres

..set nX to (hitX-oX)/oR

..set nY to (hitY-oY)/oR

..set nZ to (hitZ-oZ)/oR

..set nDotI to -1*(nX*dirX+nY*dirY+nZ*dirZ)

..set vVecX to -1*dirX

..set vVecY to -1*dirY

..set vVecZ to -1*dirZ

..set refX to (dirX+2*nDotI*nX)

..set refY to (dirY+2*nDotI*nY)

..set refZ to (dirZ+2*nDotI*nZ)

intersectScene calculates bestT, which is the distance along the ray until we hit something, so we can find hit[XYZ] by moving along the ray from the origin. Now we know where we hit the sphere, we can find the surface normal (the vector pointing directly away from the surface) by finding the vector from the centre of the sphere to the hit point (and dividing by the radius to normalise it).

nDotI is useful, as it tells us to what extent the surface is facing the viewer. vVec is the vector from the hit point towards the observer, and ref[XYZ] is the mirror reflection direction.
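Those steps - hit point, normal, nDotI and reflection - can be condensed into one Python sketch (`hit_data` is my name for it; the formulas are the same as the Sniff code above):

```python
def hit_data(origin, direction, best_t, centre, radius):
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = centre
    # hit point: march best_t along the ray from the origin
    hx, hy, hz = ox + best_t*dx, oy + best_t*dy, oz + best_t*dz
    # surface normal: centre-to-hit vector, divided by radius to normalise
    nx, ny, nz = (hx - cx)/radius, (hy - cy)/radius, (hz - cz)/radius
    # how directly the surface faces the incoming ray
    n_dot_i = -(nx*dx + ny*dy + nz*dz)
    # mirror reflection of the incoming direction about the normal
    rx, ry, rz = dx + 2*n_dot_i*nx, dy + 2*n_dot_i*ny, dz + 2*n_dot_i*nz
    return (hx, hy, hz), (nx, ny, nz), n_dot_i, (rx, ry, rz)
```

A quick sanity check: a ray fired straight at the front of a sphere gets a normal pointing straight back at the viewer, nDotI of 1, and a reflection straight back the way it came.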

Any light hitting the surface is going to be attenuated by the surface colour, so

..set weightR to (item hitID+4 of spheres)*weightR

..set weightG to (item hitID+5 of spheres)*weightG

..set weightB to (item hitID+6 of spheres)*weightB

Let's assume there's a little bit of light hitting the surface randomly, just because light bounces around in the real world. We call this ambient in computer graphics - it's a bit of a bodge, but it stops black parts of the scene being *completely black*, and we just add a little bit of it to the pixel colour:
..change pixelR by 0.1*weightR

..change pixelG by 0.1*weightG

..change pixelB by 0.1*weightB

For more advanced surface illumination we need some lights. We can store them in a list just like we did the spheres:

..set lightCount to 1

..repeat until lightCount > length of lights

...set lightX to item lightCount of lights

...set lightY to item lightCount+1 of lights

...set lightZ to item lightCount+2 of lights

...set lightPower to item lightCount+3 of lights

...change lightCount by 4

...

And now we calculate a new ray from the hit point to the light:

...set originX to hitX

...set originY to hitY

...set originZ to hitZ

...set dirX to lightX-hitX

...set dirY to lightY-hitY

...set dirZ to lightZ-hitZ

We can use that vector to calculate how much of the lights energy hits the surface:

...set lightAtten to lightPower/(dirX*dirX+dirY*dirY+dirZ*dirZ)

Imagine a sphere with a point light source at its centre. All of the energy from the source hits the inside of the sphere. The energy is shared over the sphere's surface area. If we doubled the radius of the sphere, its surface area would increase by a factor of 4 - area is proportional to radius squared - so we get the inverse square law: lights get dimmer with the square of distance.
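That's all the attenuation calculation is: power divided by distance squared, and the squared distance is just the dot product of the offset vector with itself, so no square root is needed. A minimal Python version (function name is mine):

```python
def light_attenuation(power, hit, light_pos):
    """Inverse-square falloff: energy per unit area at distance d is
    power / d^2. d^2 is the offset vector dotted with itself."""
    dx = light_pos[0] - hit[0]
    dy = light_pos[1] - hit[1]
    dz = light_pos[2] - hit[2]
    return power / (dx*dx + dy*dy + dz*dz)
```

Doubling the distance quarters the light, as the inverse square law says it should.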

Now we normalise again, and calculate N.L

...broadcast normalizeDir and wait

...set nDotL to (nX*dirX+nY*dirY+nZ*dirZ)

N.L tells us if the surface is facing the light source - if it's not, we move on.

...if nDotL > 0

....broadcast intersectScene and wait

....if bestID=0

.....set hVecX to vVecX+dirX

.....set hVecY to vVecY+dirY

.....set hVecZ to vVecZ+dirZ

.....set len to sqrt of (hVecX*hVecX+hVecY*hVecY+hVecZ*hVecZ)

.....set hVecX to hVecX/len

.....set hVecY to hVecY/len

.....set hVecZ to hVecZ/len

.....set nDotH to (nX*hVecX+nY*hVecY+nZ*hVecZ)

.....if nDotH>0

......set nDotH to 10^ of (10* log of nDotH)

......change pixelR by lightAtten*nDotL*weightR*nDotH

......change pixelG by lightAtten*nDotL*weightG*nDotH

......change pixelB by lightAtten*nDotL*weightB*nDotH

If the surface *is* facing the light, we fire the ray into the scene, and hope it doesn't hit anything. If it did, then we're in shadow. If we get this far we know the light actually hits the surface, so we need to calculate how much is going to get reflected towards us - this is called the BRDF.

There are lots of different ways of calculating this - different surfaces have different BRDFs. It's what makes metal and plastic look different, even when they're the same colour. Here we're using a simple metallic-style BRDF.

We know the direction of the viewer and the light. To get a perfect reflection of the light towards the viewer, the surface normal would have to be exactly half way between them (angle of incidence = angle of reflection). But chances are this isn't the case. Instead the question we can ask is what *would* N need to be to get a perfect reflection - we calculate this and call it hVec.

Now we ask: how similar are N and H? It turns out that's really easy to calculate using a dot product. A good way of thinking of the dot product is "how alike are these two vectors?". 1 means they're the same, 0 means 90 degrees apart, -1 means opposite (assuming they're normalized). So we take the dot product. Raising that to a power means we get a value near 1 only when they're very similar. Then we use that to add some more colour to the pixel.
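The half-vector idea is compact enough to sketch in Python (the name `blinn_specular` is mine - this style of BRDF is usually credited to Blinn; the exponent plays the role of the fixed power of 10 in the Sniff code):

```python
import math

def blinn_specular(n, v, l, exponent=10):
    """Half-vector specular highlight. H is the normal that WOULD give
    a perfect mirror bounce from light direction l to viewer direction
    v; N.H measures how close the real normal is to that, and raising
    it to a power keeps only near-perfect alignments bright."""
    hx, hy, hz = v[0] + l[0], v[1] + l[1], v[2] + l[2]
    length = math.sqrt(hx*hx + hy*hy + hz*hz)
    hx, hy, hz = hx/length, hy/length, hz/length   # normalise H
    n_dot_h = n[0]*hx + n[1]*hy + n[2]*hz
    return max(n_dot_h, 0.0) ** exponent
```

When viewer, light and normal all line up the result is 1; tilt the light 90 degrees away and N.H drops to 1/sqrt(2), which the power of 10 crushes down to about 0.03 - a tight, shiny highlight.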

Having calculated the "local" illumination - light from light sources hitting the surface - we add some "global" illumination: light which bounces off more than one surface. If we were doing this in C we might use recursion to call trace again, but it's actually more efficient to just set up the new direction and go back round in a loop:

..

..set dirX to refX

..set dirY to refY

..set dirZ to refZ

..broadcast normalizeDir and wait
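The loop-instead-of-recursion structure looks like this in Python. `intersect` and `shade` here are caller-supplied stand-ins for the post's intersectScene and lighting code, and the function name is mine - the point is how the weight shrinks and the ray restarts each time round:

```python
def trace_iterative(origin, direction, intersect, shade, bounces=2):
    """Each pass finds a hit, adds its local shading scaled by the
    accumulated colour weight, then restarts the ray along the mirror
    reflection - no recursion needed."""
    pixel = 0.0
    weight = 1.0
    for _ in range(bounces):
        hit = intersect(origin, direction)
        if hit is None:
            break                        # ray escaped the scene
        point, normal, colour = hit
        weight *= colour                 # attenuate by surface colour
        pixel += weight * shade(point, normal)
        # reflect the ray about the normal and go round again
        n_dot_i = -(normal[0]*direction[0] + normal[1]*direction[1]
                    + normal[2]*direction[2])
        direction = (direction[0] + 2*n_dot_i*normal[0],
                     direction[1] + 2*n_dot_i*normal[1],
                     direction[2] + 2*n_dot_i*normal[2])
        origin = point
    return pixel
```

On a machine with a tiny stack like the Uno, avoiding recursion isn't just faster - it also keeps memory use fixed no matter how many bounces you allow.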

The actual intersection code is pretty simple - we just go through each sphere in turn, and check if the ray hits it. If it does, then we see if it's closer than the closest hit we've found so far. We also check that it's not *too* close to the starting point - if we're firing a ray from the surface of a sphere, we don't want to hit that same sphere due to rounding errors.

As for the actual sphere intersection itself - it looks complex, but it's straight out of the textbooks so I won't discuss it here.
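For the curious, here's the textbook test in Python (my function name): substitute the ray equation into the sphere equation and solve the resulting quadratic for the distance t. The `eps` guard is the "not too close" check mentioned above.

```python
import math

def intersect_sphere(origin, direction, centre, radius, eps=1e-4):
    """Ray-sphere intersection via the quadratic formula. Returns the
    nearest t greater than eps (so a ray leaving a sphere's surface
    doesn't re-hit it due to rounding), or None on a miss."""
    ox = origin[0] - centre[0]       # move the sphere to the origin
    oy = origin[1] - centre[1]
    oz = origin[2] - centre[2]
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                  # ray misses the sphere entirely
    root = math.sqrt(disc)
    for t in ((-b - root) / (2*a), (-b + root) / (2*a)):
        if t > eps:
            return t                 # nearest hit in front of the origin
    return None
```

A ray starting exactly on the sphere's surface skips the t=0 root thanks to eps, and correctly reports the far side of the sphere instead.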

The final interesting bit of code is in calculating the pixel colours. So far we've been adding up pixel[RGB], and we expect to have a value somewhere between 0 and 1 (though values higher than 1 are totally OK too!), but in Sniff we use colour values for each channel as whole numbers between 0 and 7 - this is clunky, but means you can set rough colours quickly and easily... if we think of a better way then we'll use it. To turn our light value into a Sniff colour we use the code:

...set pixelR to round (pixelR*7+(pick random 0 to 100)*0.01-0.5)

...if pixelR<0

....set pixelR to 0

...if pixelR>7

....set pixelR to 7

*(repeat for G and B)*

...set displayColor to 100*pixelR

...change displayColor by 10*pixelG

...change displayColor by pixelB

We take our value and scale it to the range 0-7. Then we add a random value in the range -0.5 to 0.5, before rounding to the nearest whole value. This randomness surprisingly makes the whole image look much better, as it hides banding artefacts by turning them into noise. Statistically the error in the image is unchanged, but rather than getting blocks of pixels which are VERY wrong, we get a little noise spread evenly over the whole image, which looks MUCH nicer.
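The dither-and-clamp step, sketched in Python (names are mine; the packing of the three channels into one number as decimal digits matches the displayColor code above):

```python
import random

def to_channel(value):
    """Quantize a 0..1 light value to Sniff's 0..7 channel range,
    adding +/-0.5 of random dither before rounding so that banding
    turns into evenly spread noise. Over- and under-range values are
    clamped."""
    q = round(value * 7 + random.random() - 0.5)
    return max(0, min(7, q))

def to_display_color(r, g, b):
    """Pack three 0..7 channels as decimal digits: RGB -> 0..777."""
    return 100 * to_channel(r) + 10 * to_channel(g) + to_channel(b)
```

A value of exactly 1.0 lands on 6 or 7 at random - that per-pixel wobble is precisely what breaks up the banding.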

And there you have it. An Arduino ray tracer in Sniff.

As this is purely calculation driven, the code works essentially unchanged on any Sniff machine. Moving the code onto a Raspberry Pi, and replacing the first line:

make display tft device

with

make display framebuffer device

and you get a version which works on the Pi (or other Linux machines).

Running on the Pi is about 100 times faster. Hardly surprising, as the Pi runs at 50 times the clock speed. More importantly, the Pi has an FPU - a floating point unit - making the numerical calculations massively faster.

Here's the code

Sorry if I've had to gloss over a few parts of this - there's a lot of maths and physics involved, and rendering is a pretty big topic to squeeze into one blog post (I could write a book... or two!). Sniff isn't really the ideal language for it - it would be much easier in a language with data structures and proper functions (though it's not the worst) - but it was fun to try.

Hopefully I've explained most of what's going on!

