IVR Practical Five

The purpose of this practical is to control the real Khepera robot with programs of your own. Using simulations may be helpful to test the programs, but it is not the goal of this practical. If you did not use the robot in one of the first practicals, please go back and make yourself familiar with the basic tasks.
Note that if you did not download the IVR software to your own filespace, you will need to do so to be able to communicate with Webots or the robot. The Matlab code is available here: http://www.inf.ed.ac.uk/teaching/courses/ivr/IVR_software_2008.tar.z and here is the Webots code (if needed at all): http://www.inf.ed.ac.uk/teaching/courses/ivr/kheperam-Webots6.2.tar.gz.

In this practical we will use a simple script to control the robot and give it the ability to adjust its own parameters.

Circumventing an obstacle

The robot approaches an obstacle frontally (e.g. one of the wood-bots that are used in the assignment). It continuously checks its front IR sensors. As soon as the robot senses the approaching obstacle it turns either left or right, say left. Now it should sense the obstacle with the sensors on its right side. If this happens (what could the robot do if not?) the robot can go straight again until it starts to lose contact with the obstacle. If this happens it will turn again, but now to the right. Moving forward, it should sense the obstacle again. (What could the robot do if not?) Now the loop can start again. This obstacle-avoidance-and-re-approach should continue until the robot is "behind" the obstacle. The robot can use either its angle or the extrapolated path to determine whether it has already circumvented the obstacle.
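The loop above can be sketched as a small state machine. This is only an illustration in Python, not the Khepera interface: the sensor threshold and the command names are made-up values that you would replace with real sensor readings and motor commands.

```python
# A minimal sketch of the circumvention loop as a state machine.
# THRESHOLD and the command strings are assumptions, not the real API.

THRESHOLD = 300  # hypothetical IR value indicating a nearby obstacle

def next_action(state, front_ir, right_ir):
    """Return (new_state, command) for one control step."""
    if state == "approach":
        if front_ir > THRESHOLD:           # obstacle ahead: start turning
            return "turn_left", "turn_left"
        return "approach", "forward"
    if state == "turn_left":
        if front_ir <= THRESHOLD:          # front is clear again
            return "follow", "forward"
        return "turn_left", "turn_left"
    if state == "follow":
        if right_ir <= THRESHOLD:          # lost contact: turn back right
            return "turn_right", "turn_right"
        return "follow", "forward"
    if state == "turn_right":
        if right_ir > THRESHOLD:           # re-acquired the obstacle
            return "follow", "forward"
        return "turn_right", "turn_right"
    raise ValueError(state)
```

Calling this in a loop with fresh sensor readings, and stopping once the odometry says the robot is "behind" the obstacle, gives the behaviour described above.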

The angle can be calculated from the robot's movements by a simple formula, but it may be better to use complete odometry (x-coordinate, y-coordinate and angle) in order to keep track of the robot's position.


In the first two practicals you controlled the robot with direct movement commands. For an autonomous robot it is necessary that the robot can maintain an intrinsic estimate of its position.

Whatever the robot does, it is effective only if the robot's wheels are turning. The wheel revolutions can be sensed from the wheel encoders. From these readings the robot can update its position, where we assume that it started at the point (x=0, y=0, φ=0). The general idea is that the average of the speeds of the two wheels gives a good estimate of the distance travelled, whereas the difference between the speeds tells how the bearing angle of the robot changes. Some geometric considerations lead to these formulas for the x-coordinate, y-coordinate and bearing angle φ:

x ← x + Δx = x + 0.5*(vleft + vright)*cos(φ)
y ← y + Δy = y + 0.5*(vleft + vright)*sin(φ)
φ ← φ + Δφ = φ + 0.5*(vright - vleft)/R

The formulas contain, in addition to the wheel speeds (taken as encoder counter values), the parameter R, which denotes the radius of the robot (or rather half the distance between its wheels). The parameter (about 4cm) can be determined by measurement, but such a measurement may not be sufficiently precise.
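The three update formulas translate directly into one function per control step. The sketch below is in Python rather than Matlab, and the units (encoder counts and metres for R) are assumptions:

```python
import math

def odometry_step(x, y, phi, v_left, v_right, R=0.04):
    """One odometry update.

    v_left, v_right are the wheel movements since the last step;
    R (about 4 cm) is half the distance between the wheels.
    """
    v = 0.5 * (v_left + v_right)          # distance travelled this step
    x += v * math.cos(phi)
    y += v * math.sin(phi)
    phi += 0.5 * (v_right - v_left) / R   # change of bearing angle
    return x, y, phi
```

With equal wheel speeds the robot moves straight and φ stays constant; with opposite speeds it turns on the spot and x, y stay constant.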

In order to calibrate the odometric formula you can make the robot turn (see the first practical) while calculating the angle. If the robot has turned exactly once (or a number of times, for better precision) the angle estimate can be checked and the parameter can be tuned until the measurement is sufficiently correct.

Some useful functions

can be found in the first practical.
In addition you will need in particular these functions for direct communication with the robot: sensor readings can be obtained by send_command('N') and read into a Matlab variable by sensor=read_command. Note that the first entry in sensor is the robot's response (the small letter 'n'), which is followed by the eight IR sensor values.

One way of reading the sensor values is:

s2 = regexp(sensor, '\,', 'split');   % split the reply at the commas
for i = 1:8
    ir(i) = str2double(s2{i+1});      % s2{1} is the response letter 'n'
end

If you have time...

Making the robot see

Robots should never be used as blind output devices. In addition to the on-board infra-red sensors and wheel-encoders, the robot can use visual information which it can receive from the control program. In order to be understandable to the robot the visual information must be reduced to the essential variables and transformed to the robot's world frame.

The first step is the extraction of the robot from the scene. For this purpose you can use background subtraction. It leaves you with a more or less round spot. The center of mass of the spot can be used as an approximation of the position of the robot. You may consider using instead the center of the smallest circle that encloses all of the robot pixels.
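Background subtraction and the centre of mass can be sketched in a few lines. This assumes grey-scale images as numpy arrays; the difference threshold is a made-up value that you would tune to your lighting conditions:

```python
import numpy as np

def find_robot(frame, background, threshold=30):
    """Return the (row, col) centre of mass of the pixels that differ
    from the background, or None if no pixel changed.

    frame and background are grey-scale images of equal shape;
    the threshold is an assumption to be tuned experimentally.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```

The centre of mass is cheap but sensitive to stray changed pixels; the smallest enclosing circle mentioned above is more robust against an asymmetric spot.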

Now place two of the wood-bots near each other so that your palm fits conveniently between them. You may then in the same way (or by using code from your assignment) extract the positions of the wood-bots.

Now try to use visual servoing to move the Khepera right in between the wood-bots. If the wood-bots were placed appropriately (correct if necessary), this task can also be achieved using the IR sensors on the left and right sides of the Khepera.
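One simple way to realise the visual servoing is a proportional controller that steers towards the target position extracted from the image. The sketch below is in Python; the gain and the sign convention (vright > vleft turns the robot counter-clockwise, matching the odometry formula above) are assumptions:

```python
import math

def visual_servo_step(robot_pos, target_pos, heading, gain=0.5):
    """One step of a proportional controller towards target_pos.

    Positions are (x, y) in the world frame recovered from the camera;
    heading is the robot's bearing angle; the gain is a made-up value.
    Returns the wheel speeds (v_left, v_right).
    """
    dx = target_pos[0] - robot_pos[0]
    dy = target_pos[1] - robot_pos[1]
    bearing_error = math.atan2(dy, dx) - heading
    # wrap the error to [-pi, pi] so the robot takes the shorter turn
    bearing_error = math.atan2(math.sin(bearing_error),
                               math.cos(bearing_error))
    turn = gain * bearing_error
    forward = gain * math.hypot(dx, dy)
    return forward - turn, forward + turn
```

Both the forward speed and the turn shrink as the robot closes in on the target, so the robot slows down and straightens out near the goal.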

If vision is not sufficiently precise due to lighting conditions or computational problems, the IR sensors will provide you with more precise local information. On the other hand, if the Khepera starts in a distant corner of the box, vision might be preferable.

Making the robot think for itself

Can you come up with a decision function for the robot that invokes either visual servoing or IR-based servoing such that the final position in between the wood-bots is reached as soon and as precisely as possible?

Clearly, if the robot does not receive any noteworthy IR input it should rely on vision. Vision is also needed in order to disambiguate the wood-bots from, e.g., a starting position in a corner. As soon as the robot is near the wood-bots and receives above-threshold input from both left and right IR sensors, however, it can switch to IR-based servoing. If the robot had more time to repeat the task it could make use of a Bayesian decision scheme.
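The simplest such decision function is just a threshold test on the two side sensors. A sketch, with a made-up threshold:

```python
def choose_mode(ir_left, ir_right, threshold=300):
    """Pick the servoing mode for this control step.

    Use IR-based servoing only when both side sensors see the
    wood-bots; otherwise fall back on vision.  The threshold is an
    assumption to be tuned on the real sensors.
    """
    if ir_left > threshold and ir_right > threshold:
        return "ir"
    return "vision"
```

A more careful version would add hysteresis (a lower threshold for leaving IR mode than for entering it) so that noisy readings near the threshold do not make the robot flip between modes.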

You may not have had the time to work your way down to here. Nevertheless, please try to consider all the tasks and discuss them with your tutor, because they will be part of the second assignment.