Tuesday, December 13, 2011

Automating Graph Layout

When working with graphs (the nodes-and-edges kind) on a computer, arranging the nodes in a good layout is an interesting problem. To us there is usually an obvious, or at least visually appealing, way to draw a graph, but a computer has no intuition (yet?), so finding a good layout can be difficult. Representing a graph in a computer isn't hard; in fact, structures like adjacency matrices are probably much easier for a computer to read than for a human, especially as you add more nodes.

There's also the problem of deciding what sort of layout is right for a given graph in the first place. Say you want to plot three things: a mesh network, a tree, and import/export relations between countries. Should each be laid out according to the same rules? Probably not. In a mesh network we do not expect edges that reach far from one side of the graph to the other, so we can generally place nodes that are adjacent (in the sense of being connected) adjacent in location as well. A tree has a very different structure; as the name implies, we would choose a tree-like layout. Import/export relations would naturally be drawn according to actual geographical location, and even if not, their connectivity is quite different again. You might also want to assign a heavier weight or larger size to some nodes (according to GDP, for example).

For each of these, one can design algorithms that find a good, or even optimal, layout. Wikipedia actually has a page about some of them.

You may remember that I briefly wrote about a project I was considering, namely an interactive Wikipedia (or other webpages, for that matter) grapher. The idea is this: you give the program a starting page, and it then represents this page as a single node. When you then click it, the page is parsed for links to other Wikipedia articles, and each is added as a node on the graph, with an edge to the original article. You can repeat this process for each new node, and edges are added whenever one article links to another one on the graph. 

I've started working on this project recently, and have found a good (but unfortunately somewhat slow - about O(n^3)) algorithm to draw this graph: force-based layout. It treats each edge as a spring that pulls adjacent nodes together, while all nodes simultaneously repel each other via something resembling Coulomb's law, and iterates until an equilibrium of forces is found. For now, I've only implemented a rough version of this algorithm in Python, but for the final program I'll be using C++, mainly to get better performance for the graphing, but also to gain a bit more experience with it. The drawing will probably be done in SDL; it seems simple enough to use.

Source code of the above animation's algorithm (graph was generated randomly):
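To give a rough idea of what that looks like in code, here is a minimal sketch of one force-directed iteration in Python. It is not the exact code behind the animation; the constants, the damping factor and the little random graph at the bottom are just placeholders:

import math
import random

def force_directed_step(pos, edges, k_spring=0.01, k_repel=500.0, damping=0.85):
    """One iteration of a naive spring/repulsion layout (O(n^2) per step)."""
    forces = {v: [0.0, 0.0] for v in pos}

    # Coulomb-like repulsion between every pair of nodes
    nodes = list(pos)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            dist = math.hypot(dx, dy) or 0.01
            f = k_repel / dist ** 2
            forces[a][0] += f * dx / dist; forces[a][1] += f * dy / dist
            forces[b][0] -= f * dx / dist; forces[b][1] -= f * dy / dist

    # Hooke-like attraction along every edge
    for a, b in edges:
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        forces[a][0] += k_spring * dx; forces[a][1] += k_spring * dy
        forces[b][0] -= k_spring * dx; forces[b][1] -= k_spring * dy

    # Move every node a little in the direction of its net force
    for v in pos:
        pos[v] = (pos[v][0] + damping * forces[v][0],
                  pos[v][1] + damping * forces[v][1])

# Tiny random graph, iterated until the forces (roughly) settle
positions = {v: (random.uniform(0, 640), random.uniform(0, 480)) for v in range(10)}
edge_list = [(random.randrange(10), random.randrange(10)) for _ in range(15)]
for _ in range(200):
    force_directed_step(positions, edge_list)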

Saturday, December 10, 2011

Fun with Kinect Colorization


So I finally got around to figuring out how to use the depth and video feeds from my Kinect using Microsoft's Kinect SDK, and this is what I produced after playing around with some visualizations of the depth data. The first two images simply take the distance for each pixel, multiply it by 3, 7 or 1 (for the green, blue and red channels respectively) and then take the result modulo 256, which makes for some pretty cool images. The last one looks a little less spectacular, but is actually more interesting: I wrote a little algorithm that colors chunks of pixels of arbitrary size based on the average distance within the block, and this coloring is only applied to the pixels that actually belong to a person, while other objects and the background are simply colored individually according to distance.
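The actual tool is written in C# against the Kinect SDK, but the multiply-and-wrap coloring itself is easy to sketch. Here is a rough NumPy version operating on a made-up depth frame (the per-channel factors are the ones mentioned above):

import numpy as np

def rainbow_depth(depth_mm):
    """Map a depth image (in millimetres) to an RGB image by multiplying
    the distance per channel and wrapping with modulo 256."""
    img = np.zeros(depth_mm.shape + (3,), dtype=np.uint8)
    img[..., 0] = (depth_mm * 1) % 256   # red
    img[..., 1] = (depth_mm * 3) % 256   # green
    img[..., 2] = (depth_mm * 7) % 256   # blue
    return img

# Example with a fake depth frame (640x480, values 0..4000 mm)
fake_depth = np.random.randint(0, 4000, size=(480, 640))
colored = rainbow_depth(fake_depth)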

I'll try and get around to actually producing something useful with this soon, but I thought this stuff was kind of fun.

By the way, this is all written in C# and also uses WPF.

Thursday, December 8, 2011

Completed Python Logic Circuit Simulator

Logic gates are interesting: with a few of them you can create circuits with fairly complex functions. Most of the time, the function of a circuit is anything but obvious from just looking at it. For example, consider this one:
That's an adder. You set the inputs A and B to True or False (1 or 0), and the circuit performs a binary addition: S gives you the sum of the two, and C_out the carry bit. The carry bit can then be fed into another full adder's C_in, which also gives you two more inputs. This can be expanded to any width one wishes.
All currently functioning computers rely on such gates. Most likely, millions and millions of them are in your cellphone.
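To make the logic concrete, here is a plain Python sketch of a full adder, independent of pyGates (the gate helper functions are just illustrative):

def AND(a, b): return a and b
def OR(a, b):  return a or b
def XOR(a, b): return a != b

def full_adder(a, b, c_in=False):
    """Return (sum, carry_out) for one-bit inputs a, b and a carry-in."""
    s1 = XOR(a, b)
    s = XOR(s1, c_in)                       # S = A xor B xor C_in
    c_out = OR(AND(s1, c_in), AND(a, b))    # C_out = ((A xor B) and C_in) or (A and B)
    return s, c_out

# 1 + 1 with no carry-in: sum 0, carry 1 (binary 10)
print full_adder(True, True)   # (False, True)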

If you're at least a little familiar with formal logic, more specifically logical connectives such as "and", "or", "nor", "xor" (= "exclusive or") and so on, the way these circuits work will probably make sense to you very quickly.

I've written a few posts recently about a logic gate simulator I am writing in Python, pyGates. I won't say I finished it, because there are still a few bugs that come up every now and then, but it's far enough along for others to use. Here's a screenshot of what it currently looks like:


You can create new gates by dragging them from the left side of the screen, and destroy them by dragging them back there. This is actually the only part where I still get bugs every now and then: sometimes cables don't get destroyed properly, but I'll work on fixing this. For now you can usually work around it by manually deleting the cable, that is, detaching it from the other gate it is connected to and clicking into empty space.

The project page, with a zip file containing everything you need as well as the raw source code, is located on my website here: http://home.in.tum.de/~dowling/pyGates/.

Wednesday, December 7, 2011

More pyGates



I'm making some more progress on pyGates, as you can see above. To my surprise, all the logic in the circuits worked out the first time I tested it, which is unusual for my code... Anyway, the logic for the wires has turned out to be much harder than anything else (connecting, disconnecting, reconnecting, in-sockets vs. out-sockets, etc.), but I'm close to having it all work. Next I'll have to add functionality for deleting gates, which should be fairly easy. Finally, I'll make the cables look nicer (right now each is just a fairly shitty Bezier curve, which it will remain, but hopefully less shitty), and then I'll share the source code either on here or on my webpage, since it's really a bit too long to post here in full.

(By the way, the above circuit will constantly change state when you actually build it)

Project Ideas

Doing is the best way to learn, and since learning is what we're here for, let's do some stuff. I've had a number of different project ideas recently (and not so recently), so to offer you some inspiration, and perhaps to see for myself whether any of these still make sense once they're typed out, I've compiled a little list. (Some of these are actually projects I've already started working on.)
  • "Multiplayer" Paint. Basically, this would be like a chat program that lets two (or more?) people draw on the same canvas, as one would in MS Paint. I would implement this in C++, partly because I want to learn more about it (and about graphics/network programming), but also because most people don't have a Python interpreter with pygame on their machine. It might be time for something different :)
  • Javascript multiplayer Paint! Use a JS canvas for the client side, and save the changes to a canvas stored server side (maybe use PHP, or node.js for server-side scripting?). Then everyone who visits the page can draw on it for everyone else to see. The obvious downside, however, is that this is still the internet, and I suspect sooner or later (sooner) it would meet a similar fate to that of Chatroulette.
  • A 3-4 DoF robotic arm. It doesn't have to be a very good one either, just to keep costs down. Forward and inverse kinematics are pretty interesting, and this would be a good way to learn and practice both. Servos can be picked up pretty cheaply as well, and the whole thing could be run from an Arduino. The only real difficulty would be the mechanical parts, since I have no good way of making them properly. I got this idea from...
  • ...building a hexapod. 3 DoF per leg would be the minimum to get decently smooth motion; the problem is that this would be a rather costly project (for me, at least).
  • A simple webcrawler. I don't know what you would use it for, but my ideas were perhaps a Reddit bot (find/keep track of reposts, produce statistics based on comments, find most used pun of the day, etc.), or a Wikipedia crawler (create a graph of pagelinks, those are always neat)
  • On a similar note, implementing a graph layout algorithm - finding a good layout for graphs (graph meaning a set of nodes connected by edges) automatically. There are different ways to implement this; the spring model seems like the simplest: each edge is like a spring that pulls two connected nodes together, or pushes them apart, based on their distance from each other. One could also base the pushing/pulling force on the "weight" of each node (weight could be the number of incoming edges, for example).
  • Implement quantitative stock trading algorithms. This is something I personally find really interesting, though I haven't worked on it much. Wikipedia has a nice page about the different algorithms that are used today. A technical problem that needs to be solved first is getting a decent sample of historical data for a given stock, as this is very often necessary for such algorithms.

Of course none of these ideas are new, but I've found that I usually have my best ideas while working on something else, and these are just meant as inspiration anyway. Potentially all of them are good for learning something new!

Tuesday, December 6, 2011

MaterialMaker tool for TUM informatics

I've been meaning to try out some C# programming, so I collaborated with a friend of mine who is pretty experienced with it on a small but useful project - a tool to automatically download and display our class materials. He had already coded up the interface and execution logic during a particularly boring lecture, but it was all hardcoded to his filesystem and required the materials to already be downloaded and in the right folders. I figured it would be useful to add some functionality and let the program automatically download the latest materials and organize these in a generic directory it creates itself.

To get to the materials, I used RSS feeds that link to .pdfs and .zips of all the files. I wrote some parsing logic for these, which basically looks for the relevant tags and then extracts the direct link to each file. The filename is then checked against the local folders, and if it isn't already there, the file is downloaded.
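The tool itself is C#, but the download logic boils down to something like the following Python sketch. The feed URL is a placeholder, and I'm assuming each RSS item's link points directly at a .pdf or .zip, which may not match the real feeds exactly:

import os
import urllib2
import xml.etree.ElementTree as ET

def sync_feed(feed_url, target_dir):
    """Download every .pdf/.zip linked from an RSS feed that isn't already local."""
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir)

    tree = ET.parse(urllib2.urlopen(feed_url))
    for item in tree.iter("item"):
        link = item.findtext("link", default="")
        if not link.endswith((".pdf", ".zip")):
            continue
        filename = os.path.join(target_dir, link.rsplit("/", 1)[-1])
        if os.path.exists(filename):    # already downloaded
            continue
        with open(filename, "wb") as f:
            f.write(urllib2.urlopen(link).read())

sync_feed("http://example.com/lecture-materials.rss", "materials/analysis")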
Screenshot:


If you're a CS student at TUM and would like to use this tool, you can download it from my website: http://home.in.tum.de/~dowling/index.html. It shouldn't be too hard for me to add support for other classes, if you're interested just shoot me a message.

Monday, December 5, 2011

Progress on pyGates

Yesterday I wrote a short paragraph about a project I recently started, a logic gate simulator written in Python (short: pyGates). I spent a night and most of today's classes coding on it, and it's starting to get to a point where it's almost usable. I still need to implement a few more things such as switches and lightbulbs, and perhaps I'll add some higher-level logic chips later on, but the basic structure is in place and the wires finally behave properly (in terms of connecting to the right sockets and staying there, that is).

Next up is fixing the graphics; the main things here are putting in the standardized symbols for the different gates and making the wires look nice (I'll probably have them hang down as though they were loose cables, similar to logicly). Below is a screenshot of what the program currently looks like (yes, the gates look the same even though they serve different functions).

Basic stock price functions in Python

I've been reading a bit into algorithmic stock trading lately, and thought I'd try to actually simulate a few of the trading strategies. The first thing I needed was a way to get at stock prices; I figured the easiest way might be scraping either Yahoo or Google Finance. Turns out someone else has done this before, so there was no need to code it up myself. I think Google Finance has been updated since that module was written, because the regex used doesn't properly fetch the actual price. All I had to do was modify the function in a few places and everything was working as expected.
A neat thing about it is that you don't need to pass the exact ticker symbol you want to get a quote for into the get_quote function, since the string gets passed as a query to Google Finance, which autocorrects it.

I also added a stock class, as well as long and short sell functions. This should be a good enough start for simulating some algorithmic trading.
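The class itself isn't much more than a container; roughly along these lines (a sketch with illustrative names, reusing the get_quote function from the module mentioned above):

class Position(object):
    """A single long or short position in one stock."""

    def __init__(self, symbol, shares, entry_price, short=False):
        self.symbol = symbol
        self.shares = shares
        self.entry_price = entry_price
        self.short = short

    def value(self, current_price):
        """Profit/loss of the position at the given price."""
        diff = current_price - self.entry_price
        if self.short:
            diff = -diff            # shorts profit when the price falls
        return diff * self.shares


def go_long(symbol, shares):
    # assumes get_quote returns a numeric price
    return Position(symbol, shares, get_quote(symbol))

def go_short(symbol, shares):
    return Position(symbol, shares, get_quote(symbol), short=True)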

Sunday, December 4, 2011

pyGates

Something I've recently thought about was creating a small logic gate simulator which lets you build simple digital circuits. I've seen this done elsewhere before (http://logic.ly/) with a very neat implementation, but unfortunately that software now costs money. It shouldn't be too hard to put something like this together, so I'll give it a shot over the next couple of days and see how it goes. I'll be using Python for now, just because I like working with it the most, but I might take this as an opportunity to learn JavaScript and host it online. I've already started working on it and have the basic structure together; really, all it is is a pygame canvas with gate objects and a few other things, but it may still take some time until it all comes together in a usable way. I'll keep you posted.

Thursday, October 27, 2011

Complete ROAMR setup

This is my robot in its complete setup. Of course I'll still work on actually fixing everything in place properly, but all the hardware is there. I haven't been able to reliably send the Kinect data over WiFi using ROS, so unfortunately the mapping and navigation algorithms aren't implemented yet; however, I am already controlling the robot remotely from my laptop over ROS. These are the nodes running on each of the computers:
  • The controlling laptop publishes steering data (this is what you see running in the terminal at the beginning)
  • The robot in turn listens to this steering topic, and writes the appropriate signal to the Arduino onboard, which then controls the actual motors.
  • The robot also publishes the Kinect data over ROS topics; however, it doesn't seem to get through (my network stats show the outgoing data constantly peaking at around 230 KiB/s, but no actual images get displayed on the host computer). I believe this is because of the poor WiFi card built into the laptop on top of the robot, and I'm currently working on getting this to work.
  • The controlling laptop listens in to this data, and will, once it's working, run it through the ROS RGB-D SLAM algorithm.
Eventually I will try to implement odometry for the robot so I can use the ROS gmapping algorithm instead, which produces 2D maps of the robot's environment and is better suited for navigation. I also expect the resulting maps to be much smaller in file size; the maps produced by RGB-D SLAM are often several GB in size.
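The steering publisher on the controlling laptop is, in essence, not much more than the following sketch. The topic name and the use of a Twist message are illustrative choices, not necessarily what the final node will use:

import rospy
from geometry_msgs.msg import Twist

def steering_publisher():
    # Publish simple forward/turn commands that the robot-side node
    # translates into the Arduino packet format.
    rospy.init_node("roamr_teleop")
    pub = rospy.Publisher("/roamr/steering", Twist)
    rate = rospy.Rate(10)  # 10 Hz

    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.5   # forward speed (placeholder value)
        cmd.angular.z = 0.0  # turning rate (placeholder value)
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    steering_publisher()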

Here's the video:

Wednesday, October 12, 2011

Another post in the Quining series

The other day I revisited some of the scripts I wrote when messing with self-replication/-modification, and noticed that it wouldn't be hard to automate the process of turning a program into a quining one. So I decided to write a little script that does exactly that: you simply give it the filename of the Python script you want quined, and it will produce a new script with the source code of the old one, plus additional code that makes the new script write its own source code to a new file when executed. Hopefully this isn't too confusing, but it was certainly confusing at times to write, even though it's a fairly short script.



The real fun starts when you execute this script on itself... you get a quining quine. Now repeat ad infinitum.

ROAMR update

Albeit a small one. Here are two recent videos of the robot, still remote controlled at this stage, as I'm working on getting the Kinect data sent via ROS but the high bandwidth seems to be causing problems. However, I've already got a setup where I can mount my laptop on the robot, connect the Arduino via USB, and steer the whole thing from a third computer over the WiFi network using ROS.





I'm using a 12V lead-acid battery and an L298 motor driver. My motors only take 6V, so the PWM pins are running at a maximum of 50% duty cycle. I'll upload the code soon.

Friday, August 26, 2011

More Quining, Recursion, and Fractals.

I worked a bit more on the short quine I posted this morning, and added some stuff to it. A while ago I was playing around with adjacency matrices and noticed that the adjacency matrix of the n-dimensional hypercube looks more and more like a fractal the higher n gets. It also turns out you can very easily create the matrix recursively, knowing just the one for dimension n=2, which looks like this:
Black represents a one, white a zero.

If we look at the matrix for n=3 (lines added for clarity):
we notice that it is simply the n=2 matrix twice (on the diagonal), with the other two quarters filled in by identity matrices.
This trend continues for higher dimensions:
n=4


n=5

And so on.

It seems obvious to use recursion to compute these matrices, so I thought it would be neat to build this into a program that is already self-replicating: a quine. Here's what my program does:
  • It creates a bitmap of the adjacency matrix for n=2
  • It creates and then executes a new script with nearly identical source code, except that this code produces the n=3 hypercube adjacency matrix, calculated recursively
  • This continues until dimension 10 (beyond that, the recursion to compute the next matrix takes quite long; I'm not sure how or if I will fix this), after which the program and all of the "children" it created exit.
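Leaving the quining machinery aside, the recursive construction of the matrices themselves can be sketched like this (a NumPy sketch; the full script is linked below):

import numpy as np

def hypercube_adjacency(n):
    """Adjacency matrix of the n-dimensional hypercube graph (n >= 1),
    built recursively: Q(n) = [[Q(n-1), I], [I, Q(n-1)]]."""
    if n == 1:
        # Two vertices joined by a single edge
        return np.array([[0, 1],
                         [1, 0]], dtype=np.uint8)
    smaller = hypercube_adjacency(n - 1)
    eye = np.eye(len(smaller), dtype=np.uint8)
    return np.vstack([np.hstack([smaller, eye]),
                      np.hstack([eye, smaller])])

# The n=3 matrix is the n=2 block twice on the diagonal, identities elsewhere
print hypercube_adjacency(3)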


If you want to try it out yourself, here's the code.








Quine time!

So this morning I was randomly reminded of quines: computer programs that take no input and produce their own source code as output. The term was coined by Douglas Hofstadter in Gödel, Escher, Bach, referring primarily to sentence structures: a sentence fragment preceded by its own quotation. This can create some fun paradoxes, such as this one:

"Yields falsehood when quined" yields falsehood when quined

I decided to try to write a quine in Python, a script that returns its own source code. Here's what I came up with after a few minutes:

l= ['print "l=", l', 'for a in l:', '    print a']

print "l=", l
for a in l:
    print a


This is probably one of the simplest quines there are in Python; I wouldn't be surprised if others have come up with something similar before, but I just thought it was kind of a neat piece of code.

It should be pretty easy to follow, too. The first line holds most of the rest of the script in text form. The second line simply prints the first line, and the for-loop prints the text defined in the first line.

Now, here is something neat: we can add code that does pretty much anything after the first print statement, and as long as we also add this code to the list in the first line, the program is still a quine. This means it turns out to be a template for self-replicating programs: you could have the script do whatever you want, and then replicate itself.
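For example, the following variation still prints its own source while also writing 2**10 to a file as a side effect. The "work" it does is arbitrary, and comments are deliberately left out, since any line not mirrored in the list would break the self-reproduction:

l= ['print "l=", l', 'open("result.txt", "w").write(str(2**10))', 'for a in l:', '    print a']
print "l=", l
open("result.txt", "w").write(str(2**10))
for a in l:
    print a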

Thursday, August 25, 2011

An update


Okay, so I've been pretty busy/on vacation lately and haven't been working on ROAMR very much, but I have made some progress, which I'll talk about in a bit. First of all though, I wanted to share that I've finally settled on where I'll be studying this fall. A while ago I was accepted to Jacobs University Bremen, and I was pretty excited about it. However, they didn't offer me any scholarships, so I decided not to take the offer. I was then considering KIT (Karlsruhe Institute of Technology), and was still waiting for a decision from the Technical University of Munich. I settled on TUM after getting the decision the night before my birthday. KIT and TUM are ranked pretty similarly by most sources (KIT tends to do a bit better, to be fair), but I decided on TUM because Munich is simply a much more exciting place to live, and I like their campus better. Also, they have slides in the Computer Science and Mathematics building. In addition, they appear to serve pretzels and free beer during freshman orientation.

I believe if those slides were in the Physics building, they would probably just lead right into each other.

I spent the last few days in Munich looking for an apartment (or a room, rather), and I finally found a nice one close to the campus.
I can't wait to move in; I should be getting there sometime in September.

So, now for the ROAMR project. I decided that for this sort of robot, it would be useful to implement a SLAM algorithm for mapping the environment and for autonomous navigation. The obvious sensor choice to me was a Kinect, as it is fairly cheap and allows for full RGB-D mapping of the robot's environment.
The downside of using a Kinect is of course that the robot becomes significantly bigger and heavier. The Kinect produces a very large amount of data, certainly too much for an Arduino to handle (not that the drivers exist for it anyway), so I find myself forced to mount a laptop (or at least a netbook) on the robot, as well as a 12V battery to power the Kinect (which draws about 12W).
ROAMR is meant to be an experimental and (self-)educational project; I'm not looking to implement anything revolutionary or with enormous potential for practical application, it's really just for fun. For this reason, I don't mind that adding a Kinect and a laptop will make it rather big and heavy compared to what it started out as. If someone (or I, later) were to build such a robot with similar hardware but needed it to be smaller, something like a Beagleboard could be used to transfer the Kinect data remotely through, for instance, a ROS topic. I am also considering a Raspberry Pi, but I will have to wait and see if anyone can get the Kinect drivers to work on it. The calculations and SLAM would then be done on another machine, the ROS master node if you will, which effectively acts as the brain of the robot. The problem one runs into with this, however, is again the large amount of data produced by the Kinect: not all (if any, I have not gotten it to work yet) WLAN connections will be able to provide the necessary bandwidth, and as far as I know ROS stops publishing to topics once the bandwidth is too high. So perhaps some kind of pre-processing or compression would need to be done onboard the robot.
In any case, here is what I plan to implement or look into for ROAMR:
  • Since I have two laptops at my disposal (one significantly more powerful than the other; let's call it the Thinkpad), I will first attempt to get the less powerful one to publish the Kinect's data to the Thinkpad, so the latter can run a SLAM algorithm, teleoperation, etc.
  • If this does not work, I will simply mount the Thinkpad onto the robot and do the computation on board. The robot could then perhaps be controlled by another computer through VNC or SSH (although VNC is more useful, as a GUI is of course a nice thing to have when you want to actually watch the robot mapping its environment).
  • I will most likely implement the RGBD-6D-SLAM algorithm that can be found here; it utilizes both ROS and the Kinect, and allows for full 3D mapping and localization. I will have to find out how to extract the robot's perceived position in the map, and find a way to implement a navigation algorithm. Another problem is presented, again, by the enormous amounts of data produced: after a few minutes, these 3D point clouds can be several gigabytes in size.
  • The alternative SLAM algorithm is this 2D-SLAM, which is nice because it is simple to use and has navigation already implemented; however, it was made for the TurtleBot, so I'd have to re-code most of my robot's steering (i.e. how the Arduino receives and interprets commands). Also, the algorithm needs odometry, which ROAMR doesn't currently provide. On the other hand, in comparison to RGBD-6D-SLAM, the maps produced by this algorithm should be much smaller in file size, and perhaps more CPU (and especially GPU) friendly.
Diagrams and pictures soon, then this will perhaps make a lot more sense.

Saturday, July 2, 2011

ROAMR now has a Git repository - and a name!

I've set up a Git repository for my robot to make sharing (and updating) the code easy. I've also given it a name: Remotely Operated Autonomous Monitoring Robot - ROAMR.
It's not really autonomous yet, but that's where I'm headed with it. As of now, the repository includes all code I've written for the project, and a readme explaining roughly how to use everything. If you have any questions, just leave me a comment here or post an issue on the Git page.

https://github.com/phdowling/ROAMR

Friday, July 1, 2011

Preliminary robot control script

I haven't updated in a while, mainly because I kept changing and adding things to the code that controls the robot. This mostly involves ROS (Robot Operating System) functionality related to Android devices, as my plan is to mount my G2 on the robot for things like GPS and orientation polling, automatic navigation, or webcam feeds. I'm coming along on that code, but for now I'll just share the bare-bones Python code that lets a user control the robot with the arrow keys on a keyboard.


import serial
import pygame
import time
import sys
pygame.init()
forward=False
left=False
right=False
back=False
q=False
speed=220
screen = pygame.display.set_mode((100,100))

#ser=serial.Serial("/dev/ttyUSB0")
ser=serial.Serial("COM12")

while True:
    pygame.event.pump()
    key = pygame.key.get_pressed()

    forward = key[pygame.K_UP]
    left = key[pygame.K_LEFT]
    right = key[pygame.K_RIGHT]
    back = key[pygame.K_DOWN]
    q = key[pygame.K_q]

    # Pick motor directions and speeds based on which arrow keys are held.
    if forward and not ((left or right) or back):
        LDirection = "+"
        RDirection = "+"
        LSpeed = chr(254)
        RSpeed = chr(254)
    elif back and not ((left or right) or forward):
        LDirection = "-"
        RDirection = "-"
        LSpeed = chr(254)
        RSpeed = chr(254)
    elif left and not ((forward or right) or back):
        LDirection = "-"
        RDirection = "+"
        LSpeed = chr(254)
        RSpeed = chr(254)
    elif right and not ((forward or left) or back):
        LDirection = "+"
        RDirection = "-"
        LSpeed = chr(254)
        RSpeed = chr(254)
    elif (forward and left) and not (right or back):
        LDirection = "+"
        RDirection = "+"
        LSpeed = chr(200)
        RSpeed = chr(254)
    elif (forward and right) and not (left or back):
        LDirection = "+"
        RDirection = "+"
        LSpeed = chr(254)
        RSpeed = chr(200)
    elif (back and right) and not (left or forward):
        LDirection = "-"
        RDirection = "-"
        LSpeed = chr(254)
        RSpeed = chr(200)
    elif (back and left) and not (right or forward):
        LDirection = "-"
        RDirection = "-"
        LSpeed = chr(200)
        RSpeed = chr(254)
    else:
        # No valid key combination: stop both motors.
        LDirection = "-"
        RDirection = "-"
        LSpeed = chr(0)
        RSpeed = chr(0)

    # checksum = ord(LDirection) + ord(LSpeed) + ord(RDirection) + ord(RSpeed)
    # Packet format: 's' <L dir> <L speed> <R dir> <R speed> <checksum> 'e'
    string = "s" + LDirection + LSpeed + RDirection + RSpeed + "0" + "e"
    ser.write(string)
    print string

    if q:
        ser.close()  # close the serial port before quitting
        pygame.quit()
        sys.exit()
    time.sleep(0.1)


As you can see, pygame is used for the keyboard input (which is probably not optimal, but I will later use the pygame window to display actual data). The checksum is not implemented yet, mainly because I'm lazy, and also because the robot has been working fine without it. I'll still add it at some point, just for good practice.

Sunday, June 5, 2011

Robot schematic and Arduino sketch

Last week I posted some pictures of a robot I've started building; at the moment it's really just a remote controlled car. I've put together a schematic showing the various pin connections. I use an Arduino Uno, an XBee radio, and an L293D motor driver chip, as well as two motors (mine are 9V).

Blue and red obviously represent the power and ground lines, green connections represent digital or PWM signals from and to the Arduino, and the magenta lines are the PWM outputs of the L293D. I chose a different color for those because they run at a higher voltage than the Arduino PWMs, and they power the motors.

On the Arduino, I'm running the following sketch:


char index;
char serialbuff[7];
char ch;
int x=0;
int lSpeed;
int rSpeed;

int LMOTOR_DIR = 12; // Non PWM pin for direction control
int LMOTOR_PWM = 11; // PWM controlled pin.
int RMOTOR_DIR = 7;
int RMOTOR_PWM = 6;


void setup() {
  Serial.begin(9600);

  // Set the pins to output.
  pinMode(LMOTOR_DIR, OUTPUT);
  pinMode(LMOTOR_PWM, OUTPUT);
  pinMode(RMOTOR_DIR, OUTPUT);
  pinMode(RMOTOR_PWM, OUTPUT);
  // And set these to an initial value to make sure.
  digitalWrite(LMOTOR_DIR, LOW);
  digitalWrite(LMOTOR_PWM, LOW);
  digitalWrite(RMOTOR_DIR, LOW);
  digitalWrite(RMOTOR_PWM, LOW);
}

void loop() {
  // Simple state machine: 0 = wait for the start byte, 1 = read the packet,
  // 2 = check the end byte, 3 = apply the motor commands, 4 = reset.
  switch(x){
    case 0: {
      ch = (char)Serial.read();
      if(ch == 's'){
        x = 1;
        serialbuff[0] = ch;
        index = 1;
      }
    }
    break;

    case 1: {
      ch = (char)Serial.read();  // Serial.read() returns -1 if no data is available
      if(ch != -1){
        serialbuff[index] = ch;
        index++;
        if(index == 7){
          x = 2;
        }
      }
    }
    break;

    case 2: {
      if(serialbuff[6] == 'e'){
        x = 3;
      }
      else{
        x = 4;
      }
    }
    break;

    case 3: {
      if(serialbuff[1] == '+'){
        digitalWrite(LMOTOR_DIR, HIGH);
        lSpeed = 255 - serialbuff[2];
      }
      else{
        digitalWrite(LMOTOR_DIR, LOW);
        lSpeed = serialbuff[2];
      }

      if(serialbuff[3] == '+'){
        digitalWrite(RMOTOR_DIR, HIGH);
        rSpeed = 255 - serialbuff[4];
      }
      else{
        digitalWrite(RMOTOR_DIR, LOW);
        rSpeed = serialbuff[4];
      }

      analogWrite(LMOTOR_PWM, lSpeed);
      analogWrite(RMOTOR_PWM, rSpeed);

      x = 4;
    }
    break;

    case 4: {
      index = 0;
      x = 0;
    }
    break;
  }
}


As you can see, the Arduino reads its serial port (the XBee) for the character "s", which indicates the start of a control packet. Once it has found this, it reads the following characters, which make up the rest of the packet:
  • serialbuff[1] is either "+" or "-", indicating the direction of the left motor.
  • serialbuff[2] is the speed of the left motor and can be between 0 and 255
  • serialbuff[3] and [4] are the same as [1] and [2], but for the right motor
  • serialbuff[5] will eventually act as a checksum, though I haven't implemented this properly yet
  • serialbuff[6] must be "e" to indicate the end of a packet, otherwise the packet is discarded and the loop restarts
You can see from this that the direction and speed of both motors can be completely controlled remotely, which gives us more options for how to steer the robot. For instance, instead of just using the arrow keys on a computer keyboard, one could write a script that converts accelerometer data (e.g. from a phone) into the appropriate motor speeds, effectively allowing the robot to be controlled by tilting the phone.
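As a rough sketch of that idea (the tilt-to-speed mapping is made up and untested, but the packet layout follows the description above):

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def tilt_to_packet(pitch, roll):
    """Map phone tilt (in degrees, roughly -45..45) to a control packet.
    Pitch drives forward/backward, roll steers left/right."""
    forward = pitch / 45.0          # -1.0 .. 1.0
    turn = roll / 45.0              # -1.0 .. 1.0

    left = clamp(forward + turn, -1.0, 1.0)
    right = clamp(forward - turn, -1.0, 1.0)

    l_dir = "+" if left >= 0 else "-"
    r_dir = "+" if right >= 0 else "-"
    l_speed = chr(int(abs(left) * 254))
    r_speed = chr(int(abs(right) * 254))

    # Same layout as above: 's' <L dir> <L speed> <R dir> <R speed> <checksum> 'e'
    return "s" + l_dir + l_speed + r_dir + r_speed + "0" + "e"

# Tilted forward and slightly to the right
packet = tilt_to_packet(pitch=30.0, roll=10.0)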

Next up: the PC side script that sends packets to the robot.

Tuesday, May 31, 2011

Coming soon..

This is the platform for a small robot I'm building. It's remote controlled using XBee RF modules; onboard is an Arduino Uno. I plan to add a mount for my Android phone, for things like a webcam feed and sensor/GPS polling. Schematics and code soon!


Simulating birds


TL;DR: I wrote a simulation of swarming birds in Python using Pygame. The code can be found here.

So a while ago I caught a bit of a documentary about swarming behavior in birds and fish, and I wanted to see if I could write a little program to simulate such behavior. It's pretty interesting how the behavior of each individual bird is simple and easy to model, yet on the macroscopic scale complex patterns emerge.

It's not too difficult to think about the behavior of each bird in a swarm:
  1. Primarily, it will try to stay with the rest of the swarm, which on one hand means that it will stay relatively close to the group's "center" (i.e. the average position of the others), and on the other that it will roughly match the velocity of its neighbors (especially in terms of direction).
  2. Each bird also has its own free will. Averaged over the whole group, this determines the swarm's overall tendency of movement.
  3. Finally, the swarm as a whole may be drawn to some kind of goal (e.g. food), which means that when simulating a swarm, each bird will sooner or later move towards this goal, though it will likely be more influenced by where the rest of the group is going.
I won't tediously take you through every line of the program I ended up with, but for anyone interested in this type of program, I think there are two major things about my model that are worth noting.

View

It should be taken into account that birds will only be drawn to the birds they can actually see, so for my program I implemented a view angle that is used to determine which other birds each bird uses to orient itself.
In an earlier version of this program, each bird's navigation was based on all birds within a certain radius, which made the swarm group together much more closely, but ultimately didn't look natural.

So how can we determine whether one bird can see another? The simplest (and really the only practical) way involves the scalar product of the "seeing" bird's (B1) velocity vector and the vector from that bird to the one whose visibility we are trying to determine (B2): cos(theta) = (v · d) / (|v| |d|), where v is B1's velocity and d points from B1 to B2.








If theta is smaller than the defined viewing angle of the birds, then B1 can indeed see B2.

In Python, we can write the following function:

def isinview(x, y):
    # A bird cannot see itself.
    if x == y:
        return False

    # Vector from bird x to bird y
    b1b2v = [birds[y][1][0] - birds[x][1][0],
             birds[y][1][1] - birds[x][1][1]]

    # Cosine of the angle between bird x's velocity and the direction to bird y.
    # Since viewangle stores the cosine of the view angle, bird y is in view
    # when this value is at least as large as viewangle.
    cos_theta = (b1b2v[0]*birds[x][2][0] + b1b2v[1]*birds[x][2][1]) / (vmag(b1b2v) * vmag(birds[x][2]))
    if cos_theta >= viewangle:
        return True
    else:
        return False


Note: the viewangle variable stores the cosine of the desired view angle, so that acos() doesn't need to be called hundreds of times in every frame.

To clarify, in the program, the birds variable is a list containing the data for each bird:
  • birds[n] is the nth bird
  • birds[n][0] is the nth bird's "name" (for debugging, highlighting certain birds, etc.)
  • birds[n][1] is the nth bird's position
  • birds[n][2] is the velocity
  • birds[n][3] and [4] are the last two velocity vectors; these are used as a component of the velocity calculation in every frame to simulate inertia, so that individual birds don't change direction too suddenly, which leads to a more natural looking flight path.

Weighting

The other thing that needs to be considered is the weighting of the individual velocity components. Each bird's velocity in every frame is based on a number of factors: its free will (in the program, this is a random vector), the visible birds' positions, their velocities, a goal point, the bird's past velocity vectors (inertia), and, if so desired, the positions and velocities of all birds within a certain radius.
It needs to be decided how strongly each of these components should influence the bird's flight. For instance, the random vector should probably be weaker than the inertia component, and the component that points towards the average position of the other birds should be fairly weak, since otherwise the swarm will almost certainly never spread apart much.

The velocity vectors are therefore calculated this way: each component is assigned a "weight" at the beginning of the program. Each component vector is multiplied by its corresponding weight, all components are added together, and the two numbers making up this sum vector are then each divided by the sum of the weights. This effectively creates a sort of weighted average. Perhaps this is not the most mathematically rigorous way to do it, but it lets us alter the swarm's behavior by playing with the weights of the individual components. For example, we can make the swarm more likely to spread apart by lowering the weight of the visible birds' position component relative to the velocity component.
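In code, the combination step is essentially just a weighted average of 2D vectors; something like this sketch, with made-up weights and component names:

def combine_components(components, weights):
    """Weighted average of 2D velocity components.
    components and weights are parallel lists."""
    total_w = float(sum(weights))
    x = sum(w * c[0] for c, w in zip(components, weights)) / total_w
    y = sum(w * c[1] for c, w in zip(components, weights)) / total_w
    return [x, y]

# Example: inertia dominates, the random "free will" vector barely matters
new_velocity = combine_components(
    components=[[1.0, 0.0],    # inertia (previous velocity)
                [0.2, 0.9],    # towards the visible birds' average position
                [0.5, -0.3]],  # random free-will vector
    weights=[5.0, 2.0, 1.0])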

If you would like to try out or play with the simulation yourself, the full code can be found here.

If you found this interesting, you may also want to check out Craig Reynolds' website on boids; he created a sophisticated model of this phenomenon in 1986 and has continued to work on it since, and there's tons of reading material on related subjects there, too.