Our Robot Kicks Ass!
Yeah. The title says it all. This morning I went to a talk about how the neocortex works, which was awesome. After that, we spent the rest of the day working on the robot. Today was incredibly stressful, and all of us gave up hope at some point. People finally realized that, despite Ben’s claims, we hadn’t actually been following waypoints in Wander Mode. We also found that the robot just stops if it’s approaching an object and then can’t find it anymore. At one point, both Monte Carlo Localization and arrow following stopped working. Argh! However, when 6:00 rolled around and the judges came to see our stuff, our robot performed absolutely perfectly. It certainly hadn’t done that well yet at the conference, and it’s possible that this was its best run ever. Our demo had two parts: first, we had the robot go out in Arrow Following Mode, follow an arrow, find the beach ball, and bring it back to our table. We then had it go out in Waypoint Following Mode and find an orange dinosaur, then an orange cone, then an orange bowl, following a couple more arrows along the way. We then stopped it and showed the judges the map: the robot’s dead reckoning had been pretty good, and MCL made it perform perfectly! It identified all the objects correctly, followed all the arrows correctly, brought the beach ball back correctly, and plotted all of this on the map correctly. I think Mac might have helped the robot put the beach ball back down, but I’m not sure about that. The judges asked us questions about our shape recognition, homebrew laser rangefinder, homebrew sonar, MCL, waypoint system, and so on, and we answered all of their questions without faltering! Ricco did describe to one of the judges an outdated version of our arrow finding (the way we used to do it, before we had connected components and ellipse fitting with coordinate-free line fitting), but I explained how it currently works to another judge. The head judge was Manuela Veloso
(she’s in charge of the whole conference, and also heads up the Aibo soccer stuff at CMU), and I think she was pretty impressed with the whole thing. The other two judges, who I didn’t know, were impressed too. There was also a guy from the Pittsburgh Tribune who took photos of our robot and took down our names. With any luck, we (by which I mean our names and a pic of the robot) will be on the second page of the Local section of tomorrow’s paper.
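The post mentions MCL without going into detail, so here’s a minimal sketch of the general idea on a toy 1D corridor (this is not our actual code; the corridor, noise values, and function names are all invented for illustration). The filter keeps a cloud of candidate positions, moves them by the dead-reckoned motion with added noise, weights each one by how well it explains a range measurement to the far wall, and resamples:

```python
import math
import random

def mcl_step(particles, control, measurement, world_size,
             sensor_sigma=0.5, motion_sigma=0.1):
    """One Monte Carlo Localization update on a toy 1D corridor.

    particles: hypothesized robot positions along the corridor
    control: commanded forward motion this step (dead reckoning)
    measurement: measured distance to the wall at position world_size
    """
    # Motion update: move every particle, with noise to model drift.
    moved = [p + control + random.gauss(0.0, motion_sigma) for p in particles]

    # Sensor update: weight each particle by how well its predicted
    # wall distance matches the actual range measurement.
    weights = []
    for p in moved:
        err = (world_size - p) - measurement
        weights.append(math.exp(-err * err / (2 * sensor_sigma ** 2)) + 1e-12)

    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
world = 10.0
true_pos = 2.0
particles = [random.uniform(0.0, world) for _ in range(500)]
for _ in range(20):
    true_pos += 0.3
    noisy_range = (world - true_pos) + random.gauss(0.0, 0.2)
    particles = mcl_step(particles, 0.3, noisy_range, world)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # estimate should land near the true position
```

The same loop is what lets MCL clean up dead-reckoning drift: the motion update follows the odometry, and the sensor update repeatedly pulls the particle cloud back toward positions consistent with what the rangefinder actually sees.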
Officially, this is a competition, and there is supposedly a winner. However, with 4-6 teams entered (depending on who you ask), it has turned into more of an exhibition. There might not be a winner; if there is, though, I think we have a great shot at it. The robot from Stony Brook was quite impressive in that the team built all their own hardware, and they really knew their stuff. However, they seemed to greatly underestimate how long it would take to code things up: for instance, when we met them, they claimed that they were going to implement SLAM in the next two days. (SLAM, Simultaneous Localization And Mapping, is hard enough that I don’t believe any other team attempted it.) They had neat ideas, though: their robot had a grasper that moved up and down, so it could have picked things up off the floor or the table, if they had gotten it to pick up anything at all. The team from UMass Lowell had an impressive $60,000 all-terrain robot intended for urban search and rescue. Consequently, it was built like a tank and completely decked out with sensors: pan/tilt/zoom cameras, a GPS, a wireless network, a high-end laser rangefinder, a spotlight, and about two dozen high-end sonar units. Even so, it could do little more than follow a trail of green paper on the floor (similar to, but not as fancy as, our arrow following) and identify a few objects. The team from the University of New Orleans (I think it was them, but I’m not sure) had another very expensive machine (I believe it was a Pioneer model from iRobot (the company, not the book/movie)), though it couldn’t do very much that I could see. It found an orange dinosaur and then turned in a circle, avoiding the dinosaur and trying to find the orange cone. However, when I saw it, it was facing directly away from the cone, slowly turning in a circle, with its “low battery” light flashing.
I feel kinda bad for them, though I didn’t get to see the first part of their run, and it’s possible they did something awesome. The team from the University of Sherbrooke decided to skip the scavenger hunt in favor of the Robot Challenge (that’s the one where they had the robot attend the conference). From what I gathered, their robot, Spartacus, managed to register for the conference, go to one of the talks, and then present its own poster session. It’s incredibly impressive, as I mentioned yesterday. Luckily, however, it’s not competition for us. I’m not sure who else is in the contest, but I’ve got a pretty good feeling about this. If nothing else, our robot performed perfectly, so there’s no way we could have gotten it to behave any better, and that’s really all you can ask for in the end. I’m pretty psyched about today. Tomorrow we’re giving a 10-minute presentation about our robot, and hopefully I’ll get to go hear Martin Keane give a talk on genetic algorithms (roughly speaking, a way to evolve good, though not necessarily optimal, solutions to really hard problems). Hurrah!
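Since the post only gives a one-line gloss of genetic algorithms, here’s a toy sketch of that idea (nothing here is from Keane’s talk; every name and parameter is invented for illustration). Selection, crossover, and mutation gradually evolve bit strings toward a simple objective:

```python
import random

def evolve(fitness, genome_len=10, pop_size=40, generations=60,
           mutation_rate=0.05):
    """Tiny genetic algorithm over bit-string genomes."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def pick():
        # Tournament selection: the fitter of two random genomes survives.
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            mom, dad = pick(), pick()
            cut = random.randrange(1, genome_len)       # one-point crossover
            child = mom[:cut] + dad[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                  # point mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

random.seed(1)
# Toy objective ("one-max"): maximize the number of 1 bits.
best = evolve(fitness=sum)
print(sum(best))
```

Nothing guarantees the result is optimal; the population just tends toward good solutions, which is exactly the “good, though not necessarily optimal” trade-off described above.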
Ambr? Home-brew? Budgit?
For the benefit of all Alan’s fans out there, I shall update you on our awesomeness.
Turns out there was a winner for the Scavenger Hunt. We won! The peasants rejoice. The winning team was awarded an Aibo. We’re trying to figure out what to name it (along with the other Aibo Mudd already has). Currently I think our choices are Ambr, Home-brew, and Budgit.
In conclusion, go us, we rock.