Okay, so I've been pretty busy/on vacation lately and haven't been working on ROAMR very much, but I have made some progress, which I'll talk about in a bit. First of all though, I wanted to share that I've finally settled on where I'll be studying this fall. A while ago I was accepted into Jacobs University Bremen, and I was pretty excited about it. However, they didn't offer me any scholarships, so I decided not to take the offer. I was then considering KIT (Karlsruhe Institute of Technology) and was still waiting for a decision from the Technical University of Munich. I settled on TUM after getting the decision the night before my birthday. KIT and TUM are ranked pretty similarly by most sources (KIT tends to do a bit better, to be fair), but I decided on TUM because Munich is simply a much more exciting place to live, and I like their campus better. Also, they have slides in the Computer Science and Mathematics building. In addition, they appear to serve pretzels and free beer during freshman orientation.
I believe if those slides were in the faculty for Physics, they would probably just lead right into each other.
I spent the last few days in Munich looking for an apartment (or a room, rather), and I finally found a nice one close to the campus.
I can't wait to move in; I should be getting there sometime in September.
So, now for the ROAMR project. I decided that for this sort of robot, it would be useful to implement a SLAM algorithm for mapping the environment and navigating autonomously. The obvious sensor choice to me was a Kinect, as it is fairly cheap and allows for full RGB-D mapping of the robot's environment.
The downside of using a Kinect sensor is, of course, that the robot will now become significantly bigger and heavier. The Kinect produces a very large amount of data, certainly too much for an Arduino to handle (not that the drivers exist anyway), so I find myself forced to mount a laptop (or at least a netbook) onto the robot, as well as a 12V battery to power the Kinect (which draws about 12W).
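As a back-of-the-envelope check on why neither an Arduino nor a typical WLAN link can keep up, here is a rough estimate of the raw Kinect data rate. I'm assuming 640x480 RGB at 3 bytes per pixel plus 640x480 depth stored in 16 bits, both at 30 fps; the exact on-wire format differs, so treat these as ballpark figures:

```python
# Rough estimate of the Kinect's raw data rate (assumed frame format).
WIDTH, HEIGHT, FPS = 640, 480, 30

rgb_bytes_per_frame = WIDTH * HEIGHT * 3    # 24-bit colour
depth_bytes_per_frame = WIDTH * HEIGHT * 2  # depth values stored as 16 bits

bytes_per_second = (rgb_bytes_per_frame + depth_bytes_per_frame) * FPS
mbit_per_second = bytes_per_second * 8 / 1e6

print(f"~{bytes_per_second / 1e6:.0f} MB/s, ~{mbit_per_second:.0f} Mbit/s")
# Roughly 46 MB/s - far beyond e.g. 802.11g's nominal 54 Mbit/s.
```

Even if the real driver packs the depth more tightly, the order of magnitude makes it clear why the data has to be consumed close to the sensor or heavily compressed first.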
ROAMR is meant to be an experimental and (self-)educational project; I'm not looking to implement anything revolutionary or with enormous potential for practical application, it's really just for fun. For this reason, I don't mind that adding a Kinect and a laptop will make it rather big and heavy in comparison to what it started out as. If someone (or I, later) were to build such a robot with similar hardware but needed it to be smaller, something like a Beagleboard could be used to transfer the Kinect data remotely, for instance through a ROS topic. I am also considering a Raspberry Pi, but I will have to wait and see if anyone can get the Kinect drivers to work on it. The calculations and SLAM would then be done on another machine, the ROS master node if you will, which effectively acts as the brain for the robot. The problem one runs into with this, however, is again the large amount of data produced by the Kinect - not all (if any; I have not gotten it to work yet) WLAN connections will be able to provide the necessary bandwidth, and as far as I know ROS stops publishing to topics once the bandwidth is too high. So perhaps some kind of pre-processing or compression would need to be done onboard the robot.
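To get a feel for how much onboard compression could help, here is a toy sketch. This is not the actual ROS transport - just Python's standard zlib applied to a synthetic 16-bit depth frame. The frame here is a smooth ramp, which is optimistic; real depth images of indoor scenes are also largely smooth, but noisy scenes will compress worse:

```python
import struct
import zlib

WIDTH, HEIGHT = 640, 480

# Synthetic depth frame: a smooth ramp of 16-bit values (an idealised stand-in
# for a real depth image, which is also dominated by smooth surfaces).
depth = [(x + y) % 2048 for y in range(HEIGHT) for x in range(WIDTH)]
raw = struct.pack(f"<{len(depth)}H", *depth)

compressed = zlib.compress(raw, 6)
ratio = len(raw) / len(compressed)

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes "
      f"(~{ratio:.1f}x smaller)")
```

Compressing before publishing trades CPU time on the robot for bandwidth on the link, which seems like the right trade-off if an onboard Beagleboard or Pi is otherwise mostly idle.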
In any case, here is what I plan to implement or look into for ROAMR:
- Since I have two laptops at my disposal (one significantly more powerful than the other; let's call it the Thinkpad), I will first attempt to get the less powerful one to publish the Kinect's data to the Thinkpad, which can then run a SLAM algorithm, teleoperation, etc.
- If this does not work, I will simply mount the Thinkpad onto the robot and do the computation on board. The robot could then perhaps be controlled by another computer through VNC or SSH (although VNC is more useful, as a GUI is of course a nice thing to have when you want to actually see the robot mapping its environment).
- I will most likely implement the RGBD-6D-SLAM algorithm that can be found here; it utilizes both ROS and the Kinect, and allows for full 3D mapping and localization. I will have to find out how to extract the robot's perceived position in the map, and a way to implement a navigation algorithm. Another problem is presented, again, by the enormous amounts of data produced: after a few minutes, these 3D point clouds can be several gigabytes in size.
- The alternative SLAM algorithm is this 2D-SLAM, which is nice because it is simple to use and has navigation already implemented. However, it was made for the TurtleBot, so I'd have to re-code most of my robot's steering (i.e. how the Arduino receives and interprets commands). Also, the algorithm needs odometry, which ROAMR doesn't currently provide. In comparison to RGBD-6D-SLAM, though, the maps produced by this algorithm should be much smaller in file size, and perhaps more CPU- (and especially GPU-) friendly.
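Since the 2D-SLAM option needs odometry, I'd have to add wheel encoders and compute pose updates from them. For a differential-drive robot like ROAMR, the standard dead-reckoning update looks roughly like this (just a sketch; the wheel base here is a made-up value, not ROAMR's actual dimensions):

```python
import math

WHEEL_BASE = 0.30  # distance between the wheels in metres (made-up value)

def update_pose(x, y, theta, d_left, d_right):
    """Dead-reckoning update from the distance each wheel travelled."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Midpoint approximation of the arc; fine for small per-step movements.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Drive 1 m straight ahead, then pivot in place.
pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, 1.0, 1.0)    # both wheels forward equally
pose = update_pose(*pose, -0.1, 0.1)   # wheels in opposite directions
print(pose)
```

The encoder ticks would be counted on the Arduino and converted to wheel distances there, with the pose integration done wherever the SLAM node runs.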