Dec 27, 2011
Night #7: Nature of Code excerpts
For tonight’s post, I’m going to include three new examples from my upcoming Nature of Code book. I’ll also excerpt some of the text with these examples below.
This first example expands on the existing Recursive Tree example that comes with Processing.
Chapter 8: Recursion and Fractals
The recursive tree fractal is a nice example of a scenario in which adding a little bit of randomness can make the tree look more natural. Take a look outside and you’ll notice that branch lengths and angles vary from branch to branch, not to mention the fact that branches don’t all have exactly the same number of smaller branches. First, let’s see what happens when we simply vary the angle and length. This is a pretty easy one, given that we can just ask Processing for a random number each time we draw the tree.
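A minimal sketch of such a branch() function, following the structure of Processing's existing Recursive Tree example (the specific random() ranges here are my own choices, not necessarily the book's):

```processing
// Draw one branch, then recursively draw two smaller branches
// at random angles, each scaled by a random factor
void branch(float len) {
  line(0, 0, 0, -len);  // Draw the branch itself
  translate(0, -len);   // Move to the end of the branch
  if (len > 4) {        // Stop recursing below some minimum length
    pushMatrix();
    rotate(random(0, PI/3));         // A random angle to the right
    branch(len * random(0.6, 0.8));  // A random shrink factor
    popMatrix();
    pushMatrix();
    rotate(-random(0, PI/3));        // A random angle to the left
    branch(len * random(0.6, 0.8));
    popMatrix();
  }
}
```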
In the above function, we always call branch() twice. But why not pick a random number of branches and call branch() that number of times?
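Sketching that idea (again, the specific ranges are assumptions, not necessarily the book's):

```processing
void branch(float len) {
  line(0, 0, 0, -len);
  translate(0, -len);
  if (len > 4) {
    int n = int(random(1, 4));  // A random number of branches: 1, 2, or 3
    for (int i = 0; i < n; i++) {
      pushMatrix();
      rotate(random(-PI/3, PI/3));     // Each branch at its own random angle
      branch(len * random(0.6, 0.8));  // Each with its own random length
      popMatrix();
    }
  }
}
```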
The example below takes the above a few steps further. It uses Perlin noise to generate the angles, as well as animate them. In addition, it draws each branch with a thickness according to its level and sometimes shrinks a branch by a factor of two to vary where the levels begin.
Next up, an excerpt from the genetic algorithms chapter.
Chapter 9: Evolution and Code
In 2009, Jer Thorp released a great genetic algorithms example on his blog entitled “Smart Rockets.” Jer points out that NASA uses evolutionary computing techniques to solve all sorts of problems, from satellite antenna design to rocket firing patterns. This inspired him to create a Flash demonstration of evolving rockets. Here is a description of the scenario:
A population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen (with obstacles blocking a straight line path).
Each rocket is equipped with five thrusters of variable strength and direction. The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence.
In this example, we’re going to evolve our own simplified Smart Rockets, inspired by Jer Thorp’s. Implementing some of Jer’s additional advanced features is left as an exercise.
Our rockets will have only one thruster, and this thruster will be able to fire in any direction with any strength in every single frame of animation. This isn't particularly realistic, but it will make building out the framework a little easier. (We can always make the rocket and its thrusters more advanced and realistic later.)
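One way to sketch the genotype for such a rocket (the class and field names here are my own, and the book's actual code may differ) is an array holding one thrust vector per frame of the rocket's lifetime:

```processing
class DNA {
  PVector[] genes;      // One thruster force per frame of animation
  float maxForce = 0.1; // An assumed cap on thruster strength

  DNA(int lifetime) {
    genes = new PVector[lifetime];
    for (int i = 0; i < genes.length; i++) {
      float angle = random(TWO_PI);           // Any direction...
      genes[i] = new PVector(cos(angle), sin(angle));
      genes[i].mult(random(0, maxForce));     // ...with any (bounded) strength
    }
  }
}
```

Each frame, a rocket would apply the current frame's gene as a force; fitness could then be computed from how close the rocket ends up to the target.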
Source: SmartRockets.zip
And here's a short excerpt from the beginning of the chapter on neural networks, as well as the example that closes out the chapter demonstrating how to visualize the flow of information through a network.
Chapter 10: The Brain
Computer scientists have long been inspired by the human brain. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, “A logical calculus of the ideas immanent in nervous activity,” they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.
Their work, and the work of many scientists and researchers that followed, was not meant to accurately describe how the biological brain works. Rather, an artificial neural network (which we will now simply refer to as a “neural network”) was designed as a computational model based on the brain that can solve certain kinds of problems.
It’s probably pretty obvious to you that there are certain problems that are incredibly simple for a computer to solve, but difficult for you. Take the square root of 964,324, for example. A quick line of code produces the value 982, a number Processing computed in less than a millisecond. There are, on the other hand, problems that are incredibly simple for you or me to solve, but not so easy for a computer. Show any toddler a picture of a kitten or puppy and they’ll be able to tell you very quickly which one is which. Say hello and shake my hand one morning and you should be able to pick me out of a crowd of people the next day. But need a machine to perform one of these tasks? People have already spent careers researching and implementing complex solutions.
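In Processing, that quick line of code is simply:

```processing
println(sqrt(964324));  // Prints 982.0
```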
The most common application of neural networks in computing today is to perform one of these “easy-for-a-human, difficult-for-a-machine” tasks, often referred to as pattern classification. Applications range from optical character recognition (turning printed or handwritten scans into digital text) to facial recognition. We don’t have the time or need to use some of these more elaborate artificial intelligence algorithms here, but if you are interested in researching neural networks, I’d recommend the books Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig and AI for Game Developers by David M. Bourg and Glenn Seemann.
In this chapter, we’ll instead begin with a conceptual overview of the properties and features of neural networks and build the simplest possible example of one (a network that consists of a single neuron). Afterwards, we’ll examine strategies for building a “Brain” object that can be inserted into our Vehicle class and used to determine steering. Finally, we’ll look at techniques for visualizing and animating a network of neurons.
Network network;

void setup() {
  size(590, 360);
  smooth();

  // Create the Network object
  network = new Network(width/2, height/2);

  // Create a bunch of Neurons
  Neuron a = new Neuron(-300, 0);
  Neuron b = new Neuron(-200, 0);
  Neuron c = new Neuron(0, 100);
  Neuron d = new Neuron(0, -100);
  Neuron e = new Neuron(200, 0);
  Neuron f = new Neuron(300, 0);

  // Connect them
  network.connect(a, b, 1);
  network.connect(b, c, random(1));
  network.connect(b, d, random(1));
  network.connect(c, e, random(1));
  network.connect(d, e, random(1));
  network.connect(e, f, 1);

  // Add them to the Network
  network.addNeuron(a);
  network.addNeuron(b);
  network.addNeuron(c);
  network.addNeuron(d);
  network.addNeuron(e);
  network.addNeuron(f);
}

void draw() {
  background(255);
  // Update and display the Network
  network.update();
  network.display();

  // Every 30 frames feed in an input
  if (frameCount % 30 == 0) {
    network.feedforward(random(1));
  }
}
class Connection {
  // Connection is from Neuron A to B
  Neuron a;
  Neuron b;

  // Connection has a weight
  float weight;

  // Variables to track the animation
  boolean sending = false;
  PVector sender;

  // Need to store the output for when it's time to pass it along
  float output = 0;

  Connection(Neuron from, Neuron to, float w) {
    weight = w;
    a = from;
    b = to;
  }

  // The Connection is active
  void feedforward(float val) {
    output = val*weight;       // Compute output
    sender = a.location.get(); // Start animation at Neuron A
    sending = true;            // Turn on sending
  }

  // Update traveling sender
  void update() {
    if (sending) {
      // Use a simple interpolation
      sender.x = lerp(sender.x, b.location.x, 0.1);
      sender.y = lerp(sender.y, b.location.y, 0.1);
      float d = PVector.dist(sender, b.location);
      // If we've reached the end
      if (d < 1) {
        // Pass along the output!
        b.feedforward(output);
        sending = false;
      }
    }
  }

  // Draw line and traveling circle
  void display() {
    stroke(0);
    strokeWeight(1+weight*4);
    line(a.location.x, a.location.y, b.location.x, b.location.y);

    if (sending) {
      fill(0);
      strokeWeight(1);
      ellipse(sender.x, sender.y, 16, 16);
    }
  }
}
class Network {
  // The Network has a list of neurons
  ArrayList neurons;

  // The Network now keeps a duplicate list of all Connection objects.
  // This makes it easier to draw everything in this class
  ArrayList connections;

  PVector location;

  Network(float x, float y) {
    location = new PVector(x, y);
    neurons = new ArrayList();
    connections = new ArrayList();
  }

  // We can add a Neuron
  void addNeuron(Neuron n) {
    neurons.add(n);
  }

  // We can connect two Neurons
  void connect(Neuron a, Neuron b, float weight) {
    Connection c = new Connection(a, b, weight);
    a.addConnection(c);
    // Also add the Connection here
    connections.add(c);
  }

  // Send an input to the first Neuron
  // We should do something better to track multiple inputs
  void feedforward(float input) {
    Neuron start = (Neuron) neurons.get(0);
    start.feedforward(input);
  }

  // Update the animation
  void update() {
    for (int i = 0; i < connections.size(); i++) {
      Connection c = (Connection) connections.get(i);
      c.update();
    }
  }

  // Draw everything
  void display() {
    pushMatrix();
    translate(location.x, location.y);
    for (int i = 0; i < neurons.size(); i++) {
      Neuron n = (Neuron) neurons.get(i);
      n.display();
    }

    for (int i = 0; i < connections.size(); i++) {
      Connection c = (Connection) connections.get(i);
      c.display();
    }
    popMatrix();
  }
}
// An animated drawing of a Neural Network
// Daniel Shiffman
class Neuron {
  // Neuron has a location
  PVector location;

  // Neuron has a list of connections
  ArrayList connections;

  // We now track the inputs and sum them
  float sum = 0;

  // The Neuron's size can be animated
  float r = 32;

  Neuron(float x, float y) {
    location = new PVector(x, y);
    connections = new ArrayList();
  }

  // Add a Connection
  void addConnection(Connection c) {
    connections.add(c);
  }

  // Receive an input
  void feedforward(float input) {
    // Accumulate it
    sum += input;
    // Activate it?
    if (sum > 1) {
      fire();
      sum = 0; // Reset the sum to 0 if it fires
    }
  }

  // The Neuron fires
  void fire() {
    r = 64; // It suddenly is bigger

    // We send the output through all connections
    for (int i = 0; i < connections.size(); i++) {
      Connection c = (Connection) connections.get(i);
      c.feedforward(sum);
    }
  }

  // Draw it as a circle
  void display() {
    stroke(0);
    strokeWeight(1);
    // Brightness is mapped to sum
    float b = map(sum, 0, 1, 255, 0);
    fill(b);
    ellipse(location.x, location.y, r, r);

    // Size shrinks back down to its original dimensions
    r = lerp(r, 32, 0.1);
  }
}
Source: NetworkAnimation.zip