Meet Gbot, a DIY Pi-Rover. via emotep
I grabbed one of the first RPIs just as the Model B became available, and now it’s powering my own rover! There are some challenges with video, but overall it’s a fun build and it’s still getting better every day.
Gbot has 4 cameras total: one mounted on a Dream Cheeky missile launcher turret for a 270-degree view, one mounted on the nose (shown in the lower images), and 2 in the Xbox Kinect, one color and one IR. If you don’t already know, the Kinect delivers grayscale 3D (depth) via infrared, which makes it easy to do object detection and avoidance. It also has an accelerometer and a mic, and I think some other stuff that I’m not interested in. (Check it out.)
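To give an idea of why a depth map makes avoidance easy, here’s a minimal sketch (my own illustration, not Gbot’s actual code; the thresholds and the “0 means no reading” convention are assumptions) that flags an obstacle when enough pixels are closer than a cutoff:

```python
import numpy as np

def obstacle_ahead(depth, near=600, min_pixels=500):
    """Return True if enough depth readings are closer than `near`.

    depth: 2D array of raw depth values (smaller = closer; 0 is
    treated as "no reading" here). Thresholds are illustrative guesses.
    """
    valid = depth > 0                      # ignore dropouts
    close = np.logical_and(valid, depth < near)
    return int(close.sum()) >= min_pixels

# Fake 480x640 frame: far wall with a close object in the middle
frame = np.full((480, 640), 900, dtype=np.uint16)
frame[200:280, 280:360] = 400              # 80x80 patch of close readings
print(obstacle_ahead(frame))               # True
```

Because the depth image already encodes distance, this is a couple of array operations instead of the feature-matching you’d need with a plain color camera.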
I’m using an old Kyocera cellphone for GPS, and a 4-wheel-drive R/C chassis I found at a junk store. The battery is a 12V gel cell (deep cycle) which powers the Kinect directly, and a Duracell USB hub via a car cigarette-lighter USB charger, which puts out a clean 5V (clean enough).
I made a patch cable to power the USB hub by splicing the ground and VCC (power) lines and putting a diode between the RPI and the cigarette-lighter adapter. That should (SHOULD) prevent current feeding back into the Pi, and it has worked so far. My first attempt was to disconnect the USB power from the RPI completely, but it seems all my devices require some sort of voltage handshake.
I’ve written some custom software (Python) to drive it, split into 3 modules: Bridge.py, Targeting.py and Engine.py. I could have used instances, but I wanted it to feel as though I were a passenger on one of 3 decks (like on Star Trek). Information is passed between them via sockets, and all commands go through the engine room for processing and management. From the bridge you can toggle camera views etc., the engine room controls movement and GPS, and targeting is of course the turret.
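As a rough sketch of that deck layout (the command names and message format here are my own invention, not the actual Gbot protocol), the engine room can be a small socket server that routes one-line commands to handlers:

```python
import socket
import threading

# Hypothetical command handlers; the real Engine.py will differ.
HANDLERS = {
    "drive":  lambda arg: f"driving {arg}",
    "camera": lambda arg: f"camera set to {arg}",
    "turret": lambda arg: f"turret aimed {arg}",
}

def handle_command(line):
    """Route a 'verb argument' command through the engine room."""
    verb, _, arg = line.strip().partition(" ")
    handler = HANDLERS.get(verb)
    return handler(arg) if handler else f"unknown command: {verb}"

def engine_room(server_sock):
    """Accept one connection and answer commands line by line."""
    conn, _ = server_sock.accept()
    with conn, conn.makefile("rw") as f:
        for line in f:
            f.write(handle_command(line) + "\n")
            f.flush()

# Demo: the bridge connects over localhost and sends a command
server = socket.socket()
server.bind(("127.0.0.1", 0))       # any free port
server.listen(1)
threading.Thread(target=engine_room, args=(server,), daemon=True).start()

bridge = socket.create_connection(server.getsockname())
with bridge, bridge.makefile("rw") as f:
    f.write("drive forward\n")
    f.flush()
    print(f.readline().strip())     # driving forward
```

Funneling everything through one dispatcher like this is what makes the “all commands go through the engine room” rule easy to enforce.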
The software, as mentioned, is custom: my first serious attempt to write something others might want to use and hack. On the left is the engine room console. On the lower right, the IR depth radar (very cool). Just above the radar grid is a front-facing view that peels itself back as the radar scan progresses from bottom (close) to top (far). I also added some motion targeting (toggled on here), and I’m planning to add audio detection to turn the turret, but Linux doesn’t like to listen.
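The radar idea can be sketched in a few lines (again my own illustration with made-up names, assuming a raw depth frame where smaller values are closer): take each image column’s nearest reading and bin it into a distance band, so the bottom row of the grid is “close” and the top row is “far”:

```python
import numpy as np

def depth_to_radar(depth, rows=8, max_depth=1024):
    """Collapse a depth frame into a radar grid.

    For each image column, take the nearest valid reading and mark
    the matching distance band. Row 0 = close, last row = far.
    """
    grid = np.zeros((rows, depth.shape[1]), dtype=bool)
    nearest = np.where(depth > 0, depth, max_depth).min(axis=0)
    bands = np.minimum(nearest * rows // max_depth, rows - 1)
    grid[bands, np.arange(depth.shape[1])] = True
    return grid

# Fake frame: far background, one close object in the left columns
frame = np.full((480, 640), 900, dtype=np.uint16)
frame[:, 100:200] = 200                  # something close on the left
radar = depth_to_radar(frame)
print(radar[1, 150], radar[7, 400])      # True True
```

Sweeping through the grid row by row is then just a matter of drawing one band per animation tick, which is what drives the “peel” effect on the front-facing view.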
Problems: RPI + video processing = FAIL! It works, but it’s sadly slow. On a similarly specced HP desktop I can drive 3 cameras before it becomes annoying, but on the RPI, one will make you want an Arduino. Still, I already know the RPI, so BLAH! I’m tweaking Bridge.py so that all cameras run at 176×144 on cv.QueryFrame(); then, if a high-res image is requested, it will simply change the query resolution. Also, only one camera will be online at any time, perhaps 2 if the RPI will eat Kinect IR images and a mini color image at the same time. If anyone has any better solutions for frame grabs, let me know! I will update the blog with dependencies and such at a later date.
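The low-res-by-default, high-res-on-demand idea could look something like this sketch (class and method names are my own; a real `cv2.VideoCapture` would need a thin adapter exposing `set_resolution()` and `read()`):

```python
# Sketch of a "cheap frames by default, high-res on demand" grabber.
# `capture` is any object with set_resolution(w, h) and read() methods.

LOW = (176, 144)     # QCIF: cheap enough for the Pi to keep up
HIGH = (640, 480)

class FrameGrabber:
    def __init__(self, capture):
        self.capture = capture
        self.capture.set_resolution(*LOW)

    def preview(self):
        """Normal case: grab a low-res frame."""
        return self.capture.read()

    def snapshot(self):
        """On demand: bump the resolution, grab once, drop back down."""
        self.capture.set_resolution(*HIGH)
        frame = self.capture.read()
        self.capture.set_resolution(*LOW)
        return frame

# Stand-in capture device for demonstration (no camera needed)
class FakeCapture:
    def set_resolution(self, w, h):
        self.size = (w, h)
    def read(self):
        return self.size      # a real device would return pixels

grabber = FrameGrabber(FakeCapture())
print(grabber.preview())      # (176, 144)
print(grabber.snapshot())     # (640, 480)
print(grabber.preview())      # (176, 144)
```

The point of dropping back to low resolution after each snapshot is that the steady-state load on the Pi stays at QCIF rates, with the expensive frames only paid for when somebody asks.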
The next step for me is to build or buy a motor controller. I had planned to just use some transistors toggled via GPIO, but I want full directional control, and that’s getting more confusing than fun.
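For what it’s worth, the usual answer to “transistors, but with reverse” is an H-bridge: four switches around the motor, where the diagonal pairs pick the current direction and turning on both switches on the same side is a short circuit. A sketch of the pin logic (the pin ordering and command names are made up; any GPIO library would drive the actual pins):

```python
# H-bridge switch states for one motor, as
# (high_left, high_right, low_left, low_right).
# Diagonal pairs conduct; same-side pairs would short the supply,
# so those combinations are never emitted.

STATES = {
    "forward": (1, 0, 0, 1),   # current flows one way through the motor
    "reverse": (0, 1, 1, 0),   # current flows the other way
    "coast":   (0, 0, 0, 0),   # everything off, motor freewheels
    "brake":   (0, 0, 1, 1),   # both low-side switches on, motor shorted
}

def bridge_pins(command):
    """Map a drive command to safe H-bridge switch states."""
    try:
        return STATES[command]
    except KeyError:
        raise ValueError(f"unknown drive command: {command}")

print(bridge_pins("forward"))   # (1, 0, 0, 1)
print(bridge_pins("reverse"))   # (0, 1, 1, 0)
```

Keeping the legal states in one table like this is the cheap software insurance against the shoot-through combination that fries the transistors.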