With just under two months to go, it wouldn’t be fitting to have an update without bringing up the recently posted rules for the 2011 IARC. The new competition objective is essentially to navigate around some pillars and land on a target. There is nothing in the way of environment interaction, and you’re penalized on volume. This is bad news for us/me because I designed for an aggressive, moderate-duration mission. The 2011 competition could be accomplished with an off-the-shelf R/C helicopter and a camera duct-taped to it. If a team had money and a smart CS major, they could even hack a quadrotor with a Kinect and dance around the field. I fear my blimp may find itself completely outclassed: in part because there is no way for me to start from scratch with 8 weeks left in the design process, in part because I’m trying to do computer vision by myself, and in part because I don’t have several hundred dollars to blow.
Speaking of computer vision, I’ve explored a bunch of Processing libraries, and the only one I’ve gotten to work is an obsolete OpenCV build. Version 1.0. The latest is 2.2… There’s no color-filtering option in it, so my only choice is to import the video feed into Processing, edit the color channels there, then try to plug static images back into OpenCV. It’s not exactly the most efficient flow of data, but it’s all I can do short of pointing a webcam at my computer monitor. I’m currently seeing whether I can draw a PImage and plug that straight into OpenCV.loadImage(). If not, I’ll have to write a ‘buffer.jpg’ file to my hard drive and read that back into OpenCV. I’m hoping I can keep everything in memory, though, instead of waiting on file I/O. Worst case, I’ll run everything off a flash drive so I have a relatively low-latency storage system that doesn’t need to cache or load system files.
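To make the worst-case data path concrete, here’s a plain-Java sketch of the two steps: filter a color channel (standing in for the channel editing done on the Processing side), then round-trip the result through a buffer file on disk, the way OpenCV.loadImage() would pick it up. The `redFilter` method, its threshold, and the temp-file name are all made up for illustration; the real Processing/OpenCV 1.0 calls would differ.

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class BufferDemo {
    // Hypothetical channel filter: pixels whose red channel exceeds the
    // threshold become white, everything else black. This stands in for
    // the "edit the color channels in Processing" step.
    static BufferedImage redFilter(BufferedImage src, int threshold) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                                              BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int r = (src.getRGB(x, y) >> 16) & 0xFF;
                out.setRGB(x, y, r > threshold ? 0xFFFFFF : 0x000000);
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a captured video frame: one red pixel, one blue.
        BufferedImage frame = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        frame.setRGB(0, 0, 0xFF0000);
        frame.setRGB(1, 0, 0x0000FF);

        BufferedImage mask = redFilter(frame, 128);

        // Worst-case path: write the filtered frame to a buffer file...
        File buffer = File.createTempFile("buffer", ".jpg");
        buffer.deleteOnExit();
        ImageIO.write(mask, "jpg", buffer);
        // ...and read it back, as OpenCV.loadImage("buffer.jpg") would.
        BufferedImage reloaded = ImageIO.read(buffer);

        System.out.println((mask.getRGB(0, 0) & 0xFFFFFF) == 0xFFFFFF); // prints "true"
        System.out.println((mask.getRGB(1, 0) & 0xFFFFFF) == 0x000000); // prints "true"
    }
}
```

Even this toy version makes the latency problem obvious: every frame pays for a JPEG encode, a filesystem write, and a decode before OpenCV ever sees it, which is exactly why I’d rather keep the PImage in memory.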