It turns out OpenCV has had a function for accepting an array of pixels as an image all along. I stumbled across it by accident while reading up on OpenCV’s “loadImage()” function. A note on that page said the function’s performance was garbage in OS X, and recommended reading the image in Processing first and then using “copy()” instead. So now I can read in a camera feed, filter it for a specific color (e.g., red), and plug that image into OpenCV for blob detection. I’ve also taken measurements of my camera’s field of view: roughly 40 degrees wide and 30 degrees tall (38:27, to be exact), which roughly matches the camera’s 4:3 aspect ratio.
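The filter-then-detect idea above can be sketched without any libraries. This is just an illustration of the concept, not the actual Processing/OpenCV code: the thresholds, function names, and the flat `(r, g, b)` pixel list (mimicking Processing's `pixels[]`) are all made up, and the blob "detection" is reduced to a centroid of matching pixels. The last function converts a pixel position to a bearing using the measured ~40°×30° field of view.

```python
def is_red(r, g, b, min_r=150, max_gb=80):
    # Crude color filter: strong red channel, weak green/blue.
    # Threshold values are illustrative guesses.
    return r >= min_r and g <= max_gb and b <= max_gb

def blob_centroid(pixels, width, height):
    # pixels: flat row-major list of (r, g, b) tuples,
    # like Processing's pixels[] array.
    xs, ys = [], []
    for i, (r, g, b) in enumerate(pixels):
        if is_red(r, g, b):
            xs.append(i % width)   # column
            ys.append(i // width)  # row
    if not xs:
        return None  # no red blob found
    return sum(xs) / len(xs), sum(ys) / len(ys)

def pixel_to_bearing(x, y, width, height, fov_x=40.0, fov_y=30.0):
    # Map a pixel's offset from image center to degrees off the
    # camera axis, using the measured horizontal/vertical FOV.
    return ((x / width - 0.5) * fov_x,
            (y / height - 0.5) * fov_y)
```

For example, in a 4×3 frame with one red pixel at column 2, row 1, `blob_centroid` returns `(2.0, 1.0)` and `pixel_to_bearing` puts the target 0° off horizontally and 5° above center.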
So now I can put vision on the back burner and try to work out a navigation algorithm… Also on my to-do list this spring break:
- Construct the control module/gondola of the blimp.
- Test it statically to ensure it responds to control signals properly (i.e., that the motors and servos behave).
- Devise a method of attaching it to one or two helium balloons in a stable, balanced manner.
- Determine whether I will need to use visual aids to guide my blimp (human intervention in the competition is an instant 30% penalty).
- Determine if I will need to build a parabolic reflector to amplify signals at the base station.
- Investigate the use of foil/mesh RF shields to reduce interference on the blimp.
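As for the navigation algorithm mentioned above, one simple starting point would be proportional steering: bias the two motors' thrust based on how far off-center the target appears. This is only a sketch of that idea, not anything from the actual blimp; the differential-thrust layout, gain, and function names are all assumptions.

```python
def steer(bearing_deg, base_thrust=0.5, gain=0.02):
    # Proportional control: turn toward the target by biasing
    # left/right motor thrust according to the horizontal bearing
    # error (degrees off the camera axis, positive = right).
    turn = gain * bearing_deg
    # Clamp so neither motor is driven below 0 or above 2x base.
    turn = max(-base_thrust, min(base_thrust, turn))
    left = base_thrust + turn
    right = base_thrust - turn
    return left, right
```

With these made-up numbers, a target 10° to the right gives roughly (0.7, 0.3) left/right thrust, and a dead-ahead target gives equal thrust on both motors. A real version would also need altitude control and some damping so the blimp doesn't oscillate.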
Should be fun, no?