Robomagellan 2010: This Time, it’s Navigational

May 27, 2010

Note: This post not yet edited; it also doesn’t have images or links yet.

So it’s about time I wrote a proper first post for this blog. Luckily, after last weekend’s trip to Maker Faire and all the robotics I’ve been up to in general over the past nine months or so, I’ve got plenty to talk about. Be warned (if you can’t tell already): I ramble worse than the titular character of a famous Allman Brothers Band song… especially because these posts end up striking me as a good idea to write at four in the morning.

So, without further ado, a quick post about the UCSD Robomagellan bot, “ANA Type R” (stands for “Autonomous Navigation Assistant”, with the Type R for Race… I didn’t make it up >.>). ANA was constructed by an IEEE team of UCSD undergraduates.

The Challenge

Robomagellan is an annual robotics competition held in the Bay Area at the _RoboGames_. The official ruleset can be found _here_. Basically, it’s an autonomous robotics competition over slightly varied terrain, and the goal is to build a bot that reaches an end cone whose coordinates you are provided. There are bonus cones that act as multipliers on your time (so your “score” is a fraction of your actual time), but it’s easiest to just optimize your bot for the main cone challenge and leave the others alone; as it is, most teams don’t complete the course. Those who hit the final cone are judged by time; those who don’t are judged by distance to the final cone.

The 2010 games were the second year I participated in the construction of a Robomagellan bot, but the first year we actually successfully fielded one.

The Construction

Starting with a partially working robot based on a _Traxxas E-Maxx_ RC car chassis, we scrapped our main board (an Epia Pico-ITX motherboard, plus messy power supply) in favor of a hacked _chumby One_. As an intern over at chumby during the design and testing of the One model, I was able to source a working prototype board for our development; by then, we had moved to mostly production hardware, so the board was effectively trash to the company. The nice part about all of this was that I was already very familiar with the platform (having spent hours testing it), and had IM support from _@xobs_ and _bunnie_ whenever I needed advice on drivers or hardware, or was having any sort of trouble with the board.

So, the new platform was constructed around a nice embedded Linux board. Naturally, being a chumby insider was nice, but all the circuit diagrams and code I was using were the same stuff that was freely available on the web site anyway. Gotta love open source and hardware, and companies that are built around them!

So, without boring you with details of the actual construction process, I’ll just break down the systems we developed, and the process of interfacing them with the chumby:

Motor Control
As I said before, we were working with a fairly stock Traxxas chassis. Luckily, it has a motor controller built on-board, and dual servo motors for handling turning; all we really had to do to interface with each was emulate the RF receiver that normally sends the control signals from the driver’s remote, which is a simple matter of Pulse Width Modulation (PWM). After scoping the car while it was running, we discovered the signals were 5V PWM at the standard 50Hz, with a pulse width of 1.5 milliseconds for “neutral” and extremes at roughly 0.9 ms and 2.1 ms. This is about in line with what I’ve seen in the past for RC and hobby servos, so it certainly didn’t surprise me at all; we just had to figure out the best way to create that PWM.
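For the curious, mapping a normalized drive/steer command onto those receiver timings is about as simple as it sounds. Here’s a quick sketch (the helper name and the clamping are mine for illustration, not verbatim from our code):

```c
#include <assert.h>

/* Map a normalized command in [-1.0, 1.0] to a servo pulse width in
 * microseconds, using the timings we scoped off the E-Maxx receiver:
 * 1500 us neutral, roughly 900 us and 2100 us at the extremes. */
static unsigned int command_to_pulse_us(double cmd)
{
    if (cmd > 1.0)  cmd = 1.0;   /* clamp out-of-range commands */
    if (cmd < -1.0) cmd = -1.0;
    /* 600 us of travel on either side of the 1500 us neutral point */
    return (unsigned int)(1500.0 + cmd * 600.0);
}
```

Feed the result into whatever is generating your 50Hz PWM and you’re off to the races.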

The PWM interface was actually something that I wasn’t really happy with on the former incarnation of the bot. I had created a USB device out of a PIC 18F4550 circuit, but it was just… basically, it was just stupid. It looked like a virtual serial device, it was shoddily coded, and it just didn’t make much sense. As such, it made sense for us to seek out an alternative, now that we were on this new platform. Upon examination of the _chumby One Schematics_, I found this little darling:


So there were two PWMs, MUXed with the DUART pins (Debug UART, i.e. the serial connection that’s brought out to pins on the device). Sweet! I already have a header there! Too bad I currently use that for most of my interfacing to the board…

But that’s a problem that was easily solved (see the next section). Once I switched pin modes, wrote the right registers to set up the hardware PWMs, and hooked everything up, I was in business. At first I was afraid that the 3.3V signal coming off the chumby’s processor (the i.MX233 from Freescale, which is actually a pretty cool little ARM9 SoC) wasn’t going to be a high enough voltage for the motor controller or servos, but those fears were unfounded: once I gave them power and a signal, they reacted just like I expected.

The only real problem is that the chumby’s boot sequence doesn’t set those pins to PWM right away; in fact, because it boots them as UARTs, if the motor controller/servos are connected at boot time, the chumby hangs in the bootloader. I haven’t really explored exactly what’s happening, to be honest, since it’s a simple bootloader hack to make them boot in PWM mode instead (in fact, @xobs has offered multiple times to do it for me), but disconnecting/reconnecting the three-pin header we have on the board with each reboot is simple enough, and if it ain’t broke… you get the idea. We also had plenty of other problems to deal with at the time of the build, so it was certainly a low-priority fix.

Development and Debugging
So at first we were attached via serial cable for our initial testing. Not a big deal; this was how I did most of my testing at work, so I was used to it. Optimally we would’ve attached ourselves to the school network and SSH’d in, but at the time we didn’t have the WPA-Supplicant drivers that are now standard on chumby Ones (and are way better than the old ones), and as such we couldn’t connect to the enterprise WPA at UCSD. However, needing the TX/RX pins of the debug serial port pushed us to find a better interface for programming the board.

Over the summer, during development of the chumby One, @xobs actually created the scripts that eventually became the _3G Router Hack_. We were planning on using it for some test equipment in our factory in China, actually; pretty nifty how it ended up being useful elsewhere. Anyway, the important part of that hack for us was that it configured the Wifi card in AP mode: the chumby would, for all intents and purposes, look like a router to other devices (such as the development laptops). So now our development and debugging could be done by connecting a laptop to the robot’s Wifi network, SSH’ing in, and working from there. At some later date I’ll probably do a post on exactly how to set this up for yourself.

Object Avoidance

One of the few things we got working pretty well back before the 2009 competition was the interfacing of our sonar rangefinders (Devantech SRF-08s, specifically). We decided to simplify things anyway, and limited the array down to two modules: one for the front, and one for the “outside” (right side). Interfacing them is a simple matter of I2C, made even simpler by the fact that our original code was working. All we had to do was re-compile it for the chumby platform (arm-linux-gcc FTW) and we were pretty much good to go. Admittedly, we used a bit of a cheat; the chumby One has an I2C bus on the device, but instead we opted for the USB-I2C adapter we had all the existing code for. Since we had a small team with limited time, we paired that pre-existing code with a simple control system: if you “see” something in front of you, turn left. Once you see that thing on the right side, and then *stop* seeing it, you’ve “passed” it, so you can recover your heading (more on all this later, in the “Control Logic” section). This actually worked surprisingly well, to be honest; it’s a stupid-simple control system that could easily be duped, but the play field was unlikely to present a situation that would truly mess it up. Sometimes you have to engineer for the task at hand, and not Over-Engineer… scratch that, maybe I just failed in my job on that one. Anyway, long story short, we were happy with the control system we came up with.
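If you want to see what that “turn left, watch the right side” logic boils down to, here’s a minimal sketch of it as a little state machine (the state names and distance thresholds are illustrative, not our actual tuned code):

```c
/* Hypothetical sketch of the avoidance logic described above. */
enum avoid_state { CRUISE, SKIRTING, PASSING };

/* Feed in the latest sonar readings (cm); returns the new state.
 * NEAR is an illustrative guess, not our tuned threshold. */
static enum avoid_state avoid_step(enum avoid_state s,
                                   int front_cm, int right_cm)
{
    const int NEAR = 100; /* "something is there" threshold, cm */
    switch (s) {
    case CRUISE:   /* obstacle ahead: start turning left */
        return (front_cm < NEAR) ? SKIRTING : CRUISE;
    case SKIRTING: /* the obstacle shows up on our right side */
        return (right_cm < NEAR) ? PASSING : SKIRTING;
    case PASSING:  /* right side clear again: recover heading */
        return (right_cm >= NEAR) ? CRUISE : PASSING;
    }
    return s;
}
```

Each driver thread just feeds the latest readings in and the drive logic keys off the current state.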

Compass and GPS
We originally bought a decent compass. We replaced it with an awesome compass. Writing the drivers was a breeze (I did it in about an hour). The beauty of the OS-4000T that we got is that it’s so darn robust. You can turn it any which way, and it will still give you a damn good reading on which direction you’re pointing (relative to its orientation, of course). This makes it awesome for a robot, which won’t be a stable/level enough platform for a non-compensated compass to be effective on. There was a lot of magic going on in that compass, to be honest; it’s a great little package that takes care of a lot of potential problems once calibrated. It maybe could have used a little more separation from the robot’s EM field generators (AKA all the electronics and motors and junk), but it did a very admirable job.
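One small gotcha when feeding compass headings into a controller is the 0/360 wraparound; naively subtracting headings near north gives you a huge error in the wrong direction. A quick sketch of the wraparound-safe version (the function name is mine):

```c
/* Smallest signed difference between a desired and measured compass
 * heading, in degrees; result lands in (-180, 180].
 * Positive means "turn right", negative means "turn left". */
static double heading_error(double target_deg, double current_deg)
{
    double e = target_deg - current_deg;
    while (e > 180.0)   e -= 360.0;  /* wrap long way round */
    while (e <= -180.0) e += 360.0;
    return e;
}
```

With that, “recover your heading” is just steering proportionally to the error until it’s near zero.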

The GPS, however, did not. We used a standard USB GPS from Fry’s (I think it was a SiRF-II chipset), and it worked OK… but it just wasn’t very trustworthy. We experimented with different methods to get better data out of it (stopping, averaging, etc.), but none of us had much experience with GPS… nor a whole lot of time to dedicate to the challenge. We got a semi-working strategy down, but it was all for nought: the PL2303 USB-serial chip built into the module went tits-up not 48 hours before the competition, so we had to toss GPS anyway. Prolific’s PL chips are fairly popular for USB-serial interfacing; they’re cheap and all over the place. Chances are, if you’ve worked with USB-serial adapters in the past, you’ve seen one or had to install drivers for them. In my experience, however, they’re not as stable or reliable as the FTDI chips. So if you’re looking for a USB-serial adapter, I’d recommend _FTDI cables_ and _circuits_. I’ve also had good experiences with the _CP2102_ from Silicon Labs, though I haven’t seen it embedded in many devices. The CP2102s and FTDIs have just never caused me any problems.
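For reference, the “stop and average” trick is nothing fancier than a running mean over a handful of fixes taken while sitting still; a rough sketch (the struct and function names are made up for illustration):

```c
/* One of the smoothing tricks we tried: stop and average N fixes. */
struct fix { double lat, lon; };

static struct fix average_fixes(const struct fix *f, int n)
{
    struct fix avg = {0.0, 0.0};
    int i;
    for (i = 0; i < n; i++) {
        avg.lat += f[i].lat;
        avg.lon += f[i].lon;
    }
    avg.lat /= n;  /* fine for fixes clustered in one spot; don't    */
    avg.lon /= n;  /* average points that straddle the 180th meridian */
    return avg;
}
```

It knocks the jitter down a bit, but it does nothing for the slow-moving bias that makes consumer GPS untrustworthy at this scale.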

Vision
Our vision system is a statically mounted _CMUcam2_. It’s a neat little camera/integrated video processor, but honestly it’s a bit low-resolution for our needs. To be honest, I didn’t touch this a whole lot; our source and configuration from last year worked for the most part, and other people on the team integrated it into the system code-wise, so all I really know is how to control the color threshold values. The module itself takes care of determining the X position of the orange cone (the only thing we use vision for); all we then have to do is make our control system line it up with the center and initiate ramming protocol. We only flipped the bot once doing this. And we didn’t find it funny at all. Honest.
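The centering logic itself is dead simple: take the cone’s reported X, measure how far off-center it is, and steer proportionally. Something along these lines (the frame width here is an illustrative guess, not our actual configuration):

```c
#define FRAME_W 88  /* assumed tracking-frame width in pixels */

/* Turn the cone's reported X position into a steering command
 * in [-1, 1]: positive = cone is to the right, so steer right. */
static double steer_toward(int cone_x)
{
    double err = (double)cone_x - FRAME_W / 2.0; /* pixels off-center */
    double cmd = err / (FRAME_W / 2.0);          /* normalize */
    if (cmd > 1.0)  cmd = 1.0;
    if (cmd < -1.0) cmd = -1.0;
    return cmd;
}
```

Once the cone is centered and close, you stop being clever and just drive at it.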

… what?

Electronics Integration

This was where I really got to enjoy myself; the simple pleasures of applying molten metal to other metals, making circuits that work, and generally routing all the electrical signals and power. What it basically breaks down to is this:

There are three voltage levels to worry about on the board: the “HV”, as we called it (the motor voltage, supplied by dual NiMH packs), the battery voltage (7.4V from a dual-cell 5200mAh Li-ion pack, only directly used for the camera board’s voltage input), and the electronics voltage (5V, which *everything* else runs off of). We create the 5V source using a _nicely packaged 10A variable power supply_. This was a huge improvement over our old power supply circuitry, which was an ugly mess of wires, caps, and heatsinks on a breadboard. The 5V is of course stepped down to 3.3V on the chumby where needed, and distributed to all of our USB devices.

Speaking of USB devices, that’s another awesome feature of this bot: pretty much everything attached to the chumby was just USB, the only exception being the motor-interfacing PWMs I already discussed. The camera went through a USB-serial adapter, the sonars were all on the USB adapter’s I2C bus, the GPS itself was a USB device, and the compass went through a _3.3V FTDI cable_ with a custom connector. All of it was attached to a hub and plugged into the special header called the “chumbilical” (note: the c1’s chumbilical != the Classic’s… you’ve been warned. If this doesn’t make any sense to you, please disregard this message and have a nice day). Power was also input into the chumby through this connector (which, ironically, I had the correct female header (with shroud!) for, but failed to go down and get the proper IDC connector for, resulting in the mass of female header and hot glue you see in the pictures). The 5V input basically goes straight to the USB bus, so as long as you have sufficient supply current to run all of the USB devices, you’re golden. I never actually tested the current draw of all the electronics, but at one point we got 4+ hours on a single charge of the 5200mAh battery while debugging code in the field, so it can’t be much more than an amp when idling. I suspect less, actually.

Yet another nice feature of the chumby One is the screen. We actually used a screen with a chumby Classic bezel (easier to mount/a little more robust), and only ever used it to display error messages and color the screen for our state machine… but it was very useful in those regards. It certainly did its job, though we could’ve gotten more battery life by turning its backlight off had we needed to.

Control Logic

The control system was constructed in modules, with each driver running in a separate thread and polling/reading data as necessary. Lots of termios.h action going down (aside from bash glue scripts, all our development was in C) for the various serial devices. We then constructed a state machine based around the modes of operation. This is really a topic that deserves its own post, to be honest… this post is long enough as it is. So I’ll get back to it, eventually.
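For anyone who hasn’t played with termios.h before, the per-device setup each driver thread does looks roughly like this (a generic raw-mode serial open, not our exact code):

```c
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open a serial device raw at a given baud constant (e.g. B9600).
 * Returns a file descriptor, or -1 on failure. */
static int serial_open(const char *path, speed_t baud)
{
    struct termios tio;
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);            /* no echo, no line buffering, 8-bit */
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    tio.c_cc[VMIN]  = 1;        /* block until at least one byte */
    tio.c_cc[VTIME] = 0;
    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

Each thread opens its own device this way, then sits in a read loop parsing whatever the sensor speaks.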

The Competition

So, as I mentioned earlier, our GPS died not two days before the competition. We spent a fair amount of time praying, performing voodoo rituals, and sacrificing to the various Gods of Engineering, but to no avail. We pretty much showed up with a bot that was DoA. We adjusted our code to go in a straight line, then find and track the cone. When we saw the course, however, we discovered it was an L shape wrapping around a large building to the right; there would be no straight-line path to take. Frantically, we came up with a last-ditch effort: throw out the object avoidance, throw out the GPS, and just go straight for the second bonus cone (the only one with a straight shot both from the start and to the end). We told the bot to stay on the first bearing given (calculated from the GPS coordinates provided by the organizers), then, as soon as it detected orange, veer right and take on the second heading (again, calculated from the GPS coordinates). It was ugly, but the theory was sound.

Our first run came and passed without us getting off the starting blocks; someone failed to re-comment some code, and our destinations were being overwritten. We couldn’t fix it and recompile it in time. The second official run, however, went a little better: we got off the blocks, and the bot’s bearing was about right for the bonus cone we had aimed for. Unfortunately, the wired kill switch got tripped over (our normal “runner” had gone inside to watch battle bots, blissfully ignorant of the fact that we were running the bot), and the bot veered off course and rammed itself quite decisively into a lamp post. Sadly, the lamp post wasn’t even orange.

For the third official run, we decided dead reckoning would be more reliable than hoping we got the angle right and that the bot detected the first cone. We added a timer to the code (modularity and flexibility FTW again), and quickly realized that we should’ve been paying much more attention to how long each leg of the run had been taking. We threw a random failsafe number in there (18 seconds, I want to say) and ran the bot, knowing that it would be a fun last run, and would at least show off that the bot could turn into something other than a lamp post. That time, of course, it hit the building instead. But all in all, it was a good run: lots of people were actually having trouble with their GPS modules, and the end results put us in third for our run that ended 70+ feet off (within inches of second place, in fact). Nobody reached the end goal, and only UCLA’s team actually came close (great job, guys!). We took solace in the fact that we were, by an order of magnitude, faster than any of the other competitors, despite limiting ourselves to 40% speed.

We were actually quite happy with the results; despite not having any plan going into the morning of the competition, we were able to hack together a horrible dead-reckoning system and got ourselves onto the podium. We did two exhibition runs after the fact (between other competitors’ set-up times, since our bot was nice and speedy), and the second actually would have earned us second place, within about five feet of first. Oh well!

Future Improvements
We’ve already started improving our bot for next year’s competition. I’ve ordered a _newer, spiffier GPS_, and a PCB to use one of _these USB quad UARTs_ to eliminate the problematic PL chips from our project entirely. Our sensor array has been expanded so that the software supports five sonar sensors (we have four at the moment), and the chumby One is going to be replaced with a newer chumby board (something that I’m very excited about, and that you’ll just have to wait until my next post to find out about). Many other improvements have been planned (we’re looking at a _Surveyor Camera_ to replace the CMUcam2, if we have room in the budget).


Stay a while, and listen…

January 1, 2010

Hello, and welcome to the Over-Engineer’s blog. This is just an intro post, so let me explain who I am. I am, naturally, the titular “over-engineer” — I’m an engineering student with a passion for creation and general hackery. Electronics and computers are my forte, but I dabble in a variety of things. The goal of this blog is basically to provide a place for me to document and share my conquests in the world of creating overly complicated solutions to relatively simple challenges and problems.

It’s my hope that the various projects I talk about here on the site will interest and inspire others to take on their own creative challenges, and as such I want to make it clear that questions and comments are welcome on the various posts that I put here. Just keep them clean and constructive, thanks.