by Luke Andrews, Grade 11, Cornwall Hill College, South Africa
In robotics competitions, one of the biggest challenges is determining where your robot is on the playing surface.
It is theoretically possible to determine the position and orientation of a robot using a variety of sensors, such as light sensors, distance sensors, tachometers, compasses and gyroscopes.
However, these sensors often produce inaccurate outputs due to real-world factors, such as gyroscopes drifting over time or wheels slipping against the surface.
More expensive sensors combine accelerometers, magnetometers and gyroscopes to generate more accurate measurements, but they only handle orientation, not position on the surface.
There is, therefore, a need for a more reliable robotics positioning and orientation system.
The aim was to test the feasibility of using optical mice for tracking the position and orientation of a robot relative to the surface in real-time.
If optical mice could be used to track the movement of a surface below a robot, then it should be possible to calculate and track the position and orientation of the robot on that surface.
The goal was to design, build and test a prototype that would demonstrate the feasibility of using surface tracking to determine a robot’s position and orientation.
Using only two optical mice for tracking, and distance sensors to verify and correct the measurements based on detected edge positions, the prototype should be able to accurately determine a robot's position and orientation on the surface.
The prototype was not intended to replace other types of sensors, but to create an additional tool that could be used together with them.
Multiple sensors and algorithms were tested during the development and construction process, and the final prototype was constructed by combining the best of them.
Several tools were used for analysing results during the development and testing of the prototypes:
The R programming language was used to analyse and visualise data collected by the prototype. ggplot2 and dplyr libraries were extensively used while analysing the data. Data could either be imported into R from a CSV file or read from a socket in CSV format.
Various functions were created to generate commonly used graphs, such as the path-and-orientation plot sketched below.
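As an illustration of this workflow, the following sketch loads a run from CSV and plots the path with heading markers. The file name, host, port and column names (x, y, heading) are assumptions made for illustration, not the project's actual ones.

```r
library(ggplot2)
library(dplyr)

# data can be imported from a CSV file...
path <- read.csv("robot_run.csv")
# ...or read from a socket delivering the same CSV format:
# con <- socketConnection(host = "raspberrypi.local", port = 5000)
# path <- read.csv(con); close(con)

# one of the commonly used graphs: the path followed, with the robot's
# orientation (heading, in radians) drawn as short red markers at intervals
ggplot(path, aes(x, y)) +
  geom_path() +
  geom_spoke(data = slice(path, seq(1, n(), by = 25)),
             aes(angle = heading), radius = 20, colour = "red") +
  coord_equal()
```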
A graphical interface was created, allowing data to be visualised in 3D in real-time, or to be stepped through, to examine each movement. The graphical display is written in Java, using OpenGL, and is run on a PC. Navigation is achieved either using a PC keyboard or via a game controller. Data can either be received in real-time through a socket or read from a file.
The graphical display shows the robot's position, orientation and movements on the surface.
Data from the sensors and mice was collected on the Raspberry Pi. Readings were adjusted based on calibration data and combined to calculate the robot position and orientation. Processed data could be analysed using R, or visualised using the graphical interface.
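To make the combination step concrete, here is a minimal sketch of one way a dual-mouse movement update can be computed. The geometry is an assumption for illustration (mice mounted fore and aft on the chassis centreline, a known spacing D, readings already scaled by the calibration factors), not the prototype's exact code.

```r
# Minimal sketch of a dual-mouse pose update.  Each mouse reading is
# c(dx, dy): dx sideways, dy forwards, in calibrated units.
update_pose <- function(pose, front, rear, D) {
  # pose: list(R = 2x2 rotation matrix, p = position on the surface)
  dtheta <- (front[1] - rear[1]) / D        # differential sideways motion = rotation
  dR <- matrix(c(cos(dtheta), sin(dtheta),
                 -sin(dtheta), cos(dtheta)), 2, 2)
  move <- (front + rear) / 2                # chassis translation in the robot frame
  pose$p <- pose$p + as.vector(pose$R %*% move)  # accumulate in surface coordinates
  pose$R <- pose$R %*% dR                   # compose the incremental rotation
  pose
}

pose <- list(R = diag(2), p = c(0, 0))      # start at the origin, facing forwards
pose <- update_pose(pose, front = c(0.2, 5), rear = c(-0.2, 5), D = 120)
```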
Six calibrations are required to determine the position and orientation of the mice sensors and the position of the surface-edge sensors. The calibration process was performed by collecting multiple sets of data and analysing it to calculate the position and sensitivity of the sensors and surface.
On the final prototype, data collected was analysed using trigonometry and mathematical equations. All calibrations are stored in a calibration file on the Raspberry Pi, which is accessible to the data collector and graphical display through a web server on the Raspberry Pi.
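For example, the calibration data could be fetched in R directly from that web server; the host name, path and format below are assumptions for illustration:

```r
# read the shared calibration file served by the Raspberry Pi (hypothetical URL)
cal <- read.csv(url("http://raspberrypi.local/calibration.csv"))
```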
The ratio between the readings of the two mice is calculated after moving the robot equal distances forwards, backwards, left and right in straight lines, with sideways movement perpendicular to forwards.
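A minimal sketch of how this ratio could be computed from the recorded runs follows; the data layout (one row per run per mouse, with columns `mouse` and `distance`) is an assumption for illustration.

```r
library(dplyr)

# mouse factor: ratio of the total distances reported by the two mice
# over the four equal straight-line runs (hypothetical column names)
mouse_factor <- function(runs) {
  totals <- runs %>%
    group_by(mouse) %>%
    summarise(total = sum(abs(distance)))
  totals$total[totals$mouse == 1] / totals$total[totals$mouse == 2]
}
```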
Using the data collected during the Mouse Factor Calibration, the amount of sideways movement detected during the forwards and backwards movements was used to calculate the angle of each mouse relative to the chassis. If the rotation of the mice is not corrected for, the robot appears to move in an arc when travelling in a straight line.
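The sketch below illustrates the underlying trigonometry (an illustration, not the prototype's exact code): any sideways component reported during a pure forwards run is due to the mouse's mounting angle, which atan2 recovers, and raw readings can then be rotated back through that angle.

```r
# mounting angle of one mouse, from the totals of a straight forwards run
mouse_angle <- function(sideways, forwards) {
  atan2(sideways, forwards)
}

# undo the mounting angle by rotating a raw reading c(dx, dy) back through it
correct_reading <- function(reading, angle) {
  R <- matrix(c(cos(-angle), sin(-angle),
                -sin(-angle), cos(-angle)), 2, 2)
  as.vector(R %*% reading)
}
```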
The distance between the mice is an essential factor when determining whether the robot has turned, and by how much. By turning the robot 360° in each direction multiple times and recording the sideways movement of each mouse, it is possible to calculate the distance between the mice.
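The calculation itself is short, as sketched below: over n full on-the-spot turns, each mouse traces a circle around the centre of rotation, so the sideways distances reported by the two mice differ by the circumference difference 2πDn, where D is the distance between the mice. The example values are hypothetical.

```r
# distance between the mice from the sideways totals of n full turns
mouse_distance <- function(sideways1, sideways2, turns) {
  abs(sideways1 - sideways2) / (2 * pi * turns)
}

# e.g. three full turns with hypothetical sideways totals of 2262 and 148 counts
mouse_distance(2262, 148, 3)   # ~112 counts between the mice
```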
The distance between the sensors on each side is vital for determining the position of each sensor. It is possible to determine this distance by moving the robot along an edge to which a small bracket is attached, and recording the robot's position as each sensor passes over the bracket.
The robot was moved over the edge of the table four times (backwards, left, forwards, right). If the mouse sensor is precisely centred between the two side sensors, then the distance of each sensor from the mouse is equal. It was possible to determine the distance of each sensor relative to the mouse, on the X and Y-axis, by combining this information with the distance between the sensors.
Determining the edges of the surface was based on the assumption that the surface is rectangular, and that the robot always starts in the same position. This calibration merely involves driving to each edge, taking into consideration all of the previous calibrations. The position of the robot, and the sensor relative to the robot when the sensor detects an edge, allows the position of the four edges to be determined relative to the starting position of the mouse.
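The key geometric step can be sketched as follows (an illustration built on the earlier calibrations, not the prototype's exact code): whenever a sensor detects an edge, that edge must pass through the sensor's position in surface coordinates.

```r
# surface-frame position of a sensor at the moment it detects an edge
# pose:   list(R = 2x2 rotation matrix, p = robot position on the surface)
# offset: the sensor's calibrated position relative to the robot
edge_point <- function(pose, offset) {
  as.vector(pose$p + pose$R %*% offset)
}
```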
The movement processor calculated the position and orientation from the mouse readings and calibration data, while the distance sensors detected edges that were used to adjust these values.
Early prototypes used trigonometry to calculate chassis angles and the positions of the sensors on the surface.
The final prototype instead represented everything as matrices. Rotations and movements stored as matrices can be composed and applied to sensor positions using simple matrix multiplication, avoiding repeated trigonometric calculations.
In the final prototype, all angles, positions and movements were stored in matrices. Orientation was only converted into an angle for display purposes, using the sine and cosine entries of the rotation matrix, as in the sketch below.
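A minimal sketch of this matrix approach (an illustration of the pattern, not the prototype's exact code):

```r
# a 2x2 rotation matrix for angle theta
rotation <- function(theta) {
  matrix(c(cos(theta), sin(theta),
           -sin(theta), cos(theta)), 2, 2)
}

R <- rotation(0)                 # initial orientation
R <- R %*% rotation(0.1)         # compose an incremental rotation
p <- as.vector(R %*% c(0, 50))   # transform a point from the robot to the surface frame

# the angle is recovered only for display, from the matrix's sin and cos entries
angle_deg <- atan2(R[2, 1], R[1, 1]) * 180 / pi
```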
Multiple tests were conducted to determine the accuracy of the final prototype.
The following process was followed:
For each test, a photograph was taken showing the actual starting position of the robot, and the path to be followed. Results were recorded and plotted on a graph. The graph was overlaid over the photograph to determine the accuracy.
This test aimed to follow a curved line in the shape of a tree. The following was observed:
1. The robot was started at a known position and orientation.
2. The final orientation of the robot was 0.33° from the starting orientation, and the recorded path was almost identical to the planned path.
The aim was to determine whether the robot could automatically correct its position when starting in an incorrect position with an incorrect orientation. The following was observed:
1. The robot was intentionally started at an incorrect orientation.
2. The orientation was corrected based on the known position of the surface edge.
3-4. The position was corrected based on the known position of the surface edge.
5. The robot's final calculated orientation and position matched its actual orientation and position.
This test aimed to determine whether the robot could automatically correct its position on the surface when starting at a known angle but an unknown position. The following was observed:
1. The robot was intentionally started at an incorrect position.
2-3. The position was corrected based on the known position of the surface edge.
4-5. After recalculating its position in steps 2 and 3, the calculated positions matched the actual positions.
Thousands of tests were conducted during the development of the prototype, and the results improved significantly as the sensors, calibrations and algorithms were refined.
Gyroscopes tend to drift over time. Some more expensive gyroscopes use a compass to recalibrate themselves. Without a static reference, such as the Earth's magnetic field, it is almost impossible for any sensor to remain accurate, and the mouse-based sensor is no exception: over time it becomes less accurate. To compensate for this drift, the prototype automatically re-aligns itself, in both position and orientation, whenever it detects an edge. As long as the estimated angle remains within 25° of the actual angle, the orientation can be corrected when an edge of the surface is identified.
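One plausible form of that orientation correction is sketched below (an illustration, not the prototype's exact algorithm): the two side sensors cross a straight edge at slightly different moments, the positions recorded at those crossings define the edge as the robot currently believes it to lie, and comparing this with the edge's known direction gives the heading error, applied only inside the 25° window.

```r
# correct the heading estimate when the robot crosses a known straight edge
# p1, p2:   estimated surface positions of the two sensors at their edge detections
# edge_dir: known direction of that edge (unit vector in surface coordinates)
correct_heading <- function(theta, p1, p2, edge_dir) {
  seen <- p2 - p1
  err <- atan2(edge_dir[2], edge_dir[1]) - atan2(seen[2], seen[1])
  err <- atan2(sin(err), cos(err))         # wrap the error into (-pi, pi]
  if (abs(err) <= 25 * pi / 180) theta + err else theta
}
```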
Measuring the accuracy of the readings is difficult due to the infinite number of possible routes and orientations. The finishing orientation and position are not necessarily an indication of the accuracy of the path followed.
The design goal was achieved. The prototype was able to calculate the position and orientation of the robot on the surface, in real-time, using data obtained from two optical mice, and adjust this information based on observations from the distance sensors.
Several ideas for improving the prototype were investigated and tested.
After investigating various alternatives, it was decided to develop a sensor using a Raspberry Pi Camera Module v2. A dedicated Raspberry Pi was added to analyse and process two sections of each image to emulate two mice.
Tests showed that it was possible to capture 950 small images per second and analyse them in real time. Tests using a single camera were not as accurate as using two separate mice, because the two analysed sections of the image were only 11 mm apart.
The next prototype will include two of these camera sensors, one on each end of the robot. This should produce similar results to the mice, with the advantage of being mountable on a moving vehicle.
Further improvements will be tested on future prototypes.
I would like to thank the people and organisations who inspired this project and made it possible.