TigerBug: What’s your vector, Victor?

This is part of TigerBug, a robot design I am working on for use at RIT for research and as a fundraising tool. Take a look at the project hub for more progress on the different components of this project.

This update is about 2 weeks late due to the holidays, but a modicum of progress has been made on TigerBug. The electronics are still nonfunctional (which I’ll chronicle…soon), but the theory behind the bee simulation does work!

Some form of TigerBug needed to be completed in time for the end of my robotics course this fall (passing grades are highly recommended in college, so I've heard), and with the electronics having…reliability problems, I made the decision to take the project in a different direction, for now. The boom in academic robotics research has brought along a great selection of robotics simulators: software that either provides models of existing robots or the tools to create your own, and then simulates their motion and actions in a workspace. Simulation complexity ranges from simple 2D models with basic interactions to advanced 3D simulators with realistic physics, support for multiple agents running simultaneously, and the ability for those agents to communicate and interact with one another.

I went with V-REP from Coppelia Robotics for TigerBug after receiving several suggestions from grad students in RIT's robotics department. It uses Lua scripts to control the robots, and provides an easy method of importing meshes to use as components of the robot. After a few late nights reading through their help files and tutorials, I was able to create a crude model of TigerBug within the software.

The model was simplified to improve performance when simulations are run with tens to hundreds of agents.

As you can see, the model is very barebones, and only contains 7 sensors: an angular position sensor on each wheel, a force sensor on each of the four corners, and a single-pixel image sensor in the front. The IR sensors are not included in this model, but if the hardware continues to not cooperate, I might end up testing communication between the robots in the simulation.

To operate the different modes of the bee simulation, I used state machine logic, a technique I picked up while working on automation systems during one of my internships. This technique works well with a scripting language (or any embedded system, really), since breaking the behavior into small, non-blocking states keeps the cycle time of the script low, which improves performance and reaction time.
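The pattern above can be sketched roughly as follows. This is a hypothetical Python illustration (the actual robot script is Lua running inside V-REP), and the state names and sensor keys are my assumptions, not taken from the project:

```python
# Hypothetical sketch of the state-machine pattern described above,
# in Python for illustration (the real script is Lua inside V-REP).
# State names and sensor keys are assumptions, not from the project.

SEARCH, AVOID, RETURN_HOME = "search", "avoid", "return_home"

class BeeController:
    def __init__(self):
        self.state = SEARCH

    def step(self, sensors):
        """Run one short, non-blocking action per simulation cycle,
        then return immediately; this keeps the cycle time low."""
        handler = {
            SEARCH: self.do_search,
            AVOID: self.do_avoid,
            RETURN_HOME: self.do_return,
        }[self.state]
        handler(sensors)

    def do_search(self, sensors):
        if sensors.get("bumped"):
            self.state = AVOID          # force sensor hit an obstacle
        elif sensors.get("resource_color"):
            self.state = RETURN_HOME    # color sensor found the resource

    def do_avoid(self, sensors):
        if not sensors.get("bumped"):
            self.state = SEARCH         # obstacle cleared, resume search

    def do_return(self, sensors):
        pass                            # follow the summed vector home
```

The key design point is that each `step` call does a tiny amount of work and returns, so the simulator (or a real microcontroller loop) never stalls inside the script.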

To navigate its workspace, the bee uses vector sums to keep track of its current position. I could have easily used the global position/distance variables available in the software, but since I want the simulation to translate to real life, I restricted myself to data the robot itself can collect. The average wheel position gives the magnitude of each move, and the absolute heading is converted to an angle in the robot's reference frame (this will come from a magnetometer on the real robot). Whenever the bee encounters an obstacle, it adds the current move vector to its running sum, then maneuvers out of the way. When the right color is detected by the color sensor, the bee returns to the hive by following the sum vector it has accumulated. The video below shows the script in operation, with the large yellow circle representing the resource. The orange target marks the start point of the bee and is mostly there to show the accumulated error.
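The bookkeeping behind this dead reckoning is simple enough to sketch. Again, this is an illustrative Python version of the Lua logic, with the hive assumed at the origin and the sensor inputs named hypothetically:

```python
# Minimal sketch of the vector-sum dead reckoning described above,
# in Python for illustration (the real script is Lua in V-REP).
# Hive-at-origin convention and parameter names are assumptions.
import math

class DeadReckoner:
    def __init__(self):
        self.x = 0.0   # summed displacement in the hive frame
        self.y = 0.0

    def add_move(self, distance, heading_rad):
        """distance: travel since the last waypoint, derived from the
        average wheel angle change times wheel radius; heading_rad:
        absolute heading (magnetometer on the real robot)."""
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def heading_home(self):
        """Heading that points back along the summed vector."""
        return math.atan2(-self.y, -self.x)

    def distance_home(self):
        return math.hypot(self.x, self.y)
```

For example, after moving 1 m east and then 1 m north, the bee is √2 m from the hive and the homeward heading points southwest. Wheel slip corrupts the `distance` term, which is where the accumulated error described below comes from.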

The bot produces a fair amount of error while travelling through the workspace, likely due to the wheels slipping. This error isn't tiny, but since the bee doesn't need to be very precise, I think it's within margins. I also don't expect wheel slip to be a major issue with the hardware robot, as it has driven on several surfaces now without this issue. I'm pretty sure V-REP has issues with contacting surfaces sliding slightly; some further testing might confirm this.

Other software components to finish developing include path adjustment on the return route, navigating around other bees, getting out of the way while idle, and possibly communicating the resource location to another bee in the hive. Now that I have a few weeks in between classes, I hope to nail these down, and maybe get the actual hardware working (on top of a lot of other tasks I was pushing down the road…). I’ll be sure to update when that happens!
