Interactive Robotics Illustration

Robotics is the science of perceiving and manipulating the physical world through computer-controlled devices.

Robotic systems are situated in the physical world, perceive information about their environments through sensors, and manipulate their environments through physical forces.

Probabilistic Robotics

Mobile Robot Localization

Intelligent autonomous mobile robots need to know where they are in order to function correctly. For example, a self-driving car needs to know which street and which intersection it is at in order to plan when and where to turn next. While some robots have access to GPS, GPS readings are not sufficiently accurate for such decisions: the error of a commercial GPS receiver is greater than the width of a driving lane, and a self-driving car must account not only for which lane it is in, but for exactly where in that lane it is.

Indoor mobile robots face the same problem, despite having no access to GPS at all. Thus, to reason more accurately about their locations in the world, mobile robots rely on additional on-board sensors such as laser range-finders and depth cameras.

The problem of estimating where a robot is, given its on-board sensor readings over time, is called mobile robot localization.

In this page, we explain the sources of uncertainty in mobile robot localization, and show how one popular algorithm for the problem, called Monte Carlo Localization, works on real robots.

Uncertainties in Robotics

Robots, as physical entities acting in the real world, inherently suffer from errors in both sensing and locomotion. Such errors result in uncertainty in the robot’s estimate of its location. Let us investigate the nature of the uncertainties arising from a robot’s motion and its sensing.




Uncertainty Arising From Motion

Uncertainty in a robot’s motion arises from a large number of factors, including differences in wheel diameter, tire inflation, friction with the ground, and uneven terrain. Thus, even when asked to execute the same command repeatedly, a robot will inevitably deviate in its actual execution.

Try the demo

This demo simulates how real-world robots follow commands. The blue line represents the planned motion, while the red line represents the robot's actual motion.


Different Types of Actuation Noise

The errors in execution of motion commands can be broken down into four components:
E1

When the robot is commanded to rotate by a certain angle, the true rotation of the robot might differ from the commanded angle.

E2

When the robot is commanded to move straight by a certain distance, the robot may not drive straight, and instead turn by some amount while traversing the commanded distance.

E3

When the robot is commanded to move straight by a certain distance, the robot may not move by that same distance.

E4

When the robot is commanded to rotate by a certain angle, the robot might, in addition to turning, skid sideways or forward. This type of error is significant on robots with tank treads and multiple wheels with skid-steering.
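The four components above can be sketched in code. The following is a minimal simulation of a single turn-then-drive command on a 2-D robot; the Gaussian noise model and all noise magnitudes are illustrative assumptions, not measured values.

```python
import math
import random

def noisy_move(x, y, theta, turn, dist, rng,
               e1=0.05, e2=0.02, e3=0.05, e4=0.01):
    """Apply a turn-then-drive command with the four actuation noise types."""
    # E1: the true rotation differs from the commanded angle.
    theta += turn + rng.gauss(0.0, e1 * abs(turn))
    # E2: the robot drifts (turns slightly) while driving straight.
    theta += rng.gauss(0.0, e2 * dist)
    # E3: the distance actually travelled differs from the commanded distance.
    actual = dist + rng.gauss(0.0, e3 * dist)
    x += actual * math.cos(theta)
    y += actual * math.sin(theta)
    # E4: lateral skid perpendicular to the direction of travel.
    skid = rng.gauss(0.0, e4 * dist)
    x += skid * -math.sin(theta)
    y += skid * math.cos(theta)
    return x, y, theta

# Command: turn 90 degrees, then drive 1 m. Ideally this ends at (0, 1).
rng = random.Random(0)
x, y, theta = noisy_move(0.0, 0.0, 0.0, math.pi / 2, 1.0, rng)
print(round(x, 3), round(y, 3))  # close to (0, 1), but not exactly
```

Running the same command repeatedly with different random draws reproduces the scatter of executed paths shown in the demos.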

In the demos below, you can observe the impact of each type of error on the overall executed path of the robot. Try adjusting the slider to control the magnitude of the noise, and observe its impact on the path followed by the robot.


E1
E2
E3
E4

More About Error

While the demos above simulated a single outcome at a time for each type of error, we can also simulate many possible outcomes concurrently. The demos below simulate each of the four types of motion error over many possible outcomes simultaneously, to demonstrate the range, and distribution, of the robot’s possible locations over time.

E1
E2
E3
E4
Note how the errors accumulate over time: The possible outcomes of the robot spread apart, but in a different manner for each type of motion error.
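The accumulation of error can be reproduced numerically. This sketch (again assuming Gaussian noise with illustrative magnitudes) drives many simulated robots through the same sequence of straight-line commands and measures how the scatter of their positions grows with each step.

```python
import math
import random
import statistics

def simulate_paths(n_robots=500, n_steps=10, dist=1.0,
                   turn_noise=0.05, dist_noise=0.05, seed=1):
    """Drive n_robots straight ahead; record the lateral spread after each step."""
    rng = random.Random(seed)
    robots = [(0.0, 0.0, 0.0)] * n_robots  # (x, y, heading)
    spreads = []
    for _ in range(n_steps):
        new_robots = []
        for x, y, theta in robots:
            theta += rng.gauss(0.0, turn_noise)    # heading drift per step
            d = dist + rng.gauss(0.0, dist_noise)  # distance error per step
            new_robots.append((x + d * math.cos(theta),
                               y + d * math.sin(theta), theta))
        robots = new_robots
        # Spread of the lateral (y) positions across all simulated robots.
        spreads.append(statistics.pstdev(r[1] for r in robots))
    return spreads

spreads = simulate_paths()
print(spreads[0], spreads[-1])  # the spread grows as errors accumulate
```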

Simulating A Robot’s Motion On A Map

Putting all four types of motion error together, we can simulate how a robot’s path may diverge from its planned path over time. In this demo, a robot is asked to follow a fixed trajectory (shown as the green path) in an indoor office building. The blue outlines are simulated outcomes of the robot’s location over time. Note how the many simulated outcomes diverge over time. You may adjust the magnitude of each type of motion error, and see how it affects the outcomes.

Clearly, to work effectively in the real world, the robot cannot expect its true location to match its planned path. However, we can simulate many possible outcomes for the robot’s location over time: this will be crucial to our goal of performing mobile robot localization. While we may not know exactly where the robot is after executing a motion command, we can simulate the distribution of possible outcomes. This distribution can then be further refined by taking into account observations from additional sensors.

Uncertainty in Sensor Readings

Sensor Limitations

To assist in the task of localization, a robot relies on observations made with its on-board sensors, and on a known map of the environment. Laser range-finders are one commonly used type of sensor on mobile robots. They allow the robot to sense how far away obstacles are by measuring the time taken for a laser beam to travel from the sensor to the obstacle and back. The beam is swept in a circular arc, allowing the robot to make distance measurements over a wide angle around itself. Laser range-finders have a maximum range beyond which they cannot sense objects, a fixed number of directions along which they sense, and a limited overall angle over which they can make observations.

Each ray of the laser range-finder thus provides a measurement of the distance to the closest obstacle in that direction. This measurement inherently has errors, due to variations in the optical properties of the obstacle, errors in time measurement by the sensor's electronics, and dust particles in the air. The demo below simulates multiple readings for a single ray of a laser range-finder when the actual distance to the obstacle is 10 meters, and graphs how often the sensor makes each particular observation. Note that the individual observations vary, and their distribution is spread out, but its peak is centered on the true reading. Despite the noisy nature of the observations, their average tends towards the expected value. Try changing the magnitude of the sensor noise and the number of observations: note how the average sensor reading more closely matches the true distance when the sensor noise is low, and when the number of observations is high.

Despite the errors in individual readings, the average error over a large number of readings tends to zero as the number of readings increases, and the distribution of the readings follows a known pattern, modelled by a Normal (Gaussian) distribution.
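This convergence is easy to verify in simulation. A minimal sketch, assuming the 10 m obstacle from the demo and Gaussian sensor noise with an illustrative 0.2 m standard deviation:

```python
import random
import statistics

def simulate_readings(true_dist=10.0, noise_std=0.2, n=10_000, seed=42):
    """Simulate n noisy range readings of an obstacle at true_dist meters."""
    rng = random.Random(seed)
    return [rng.gauss(true_dist, noise_std) for _ in range(n)]

readings = simulate_readings()
avg = statistics.fmean(readings)
# Individual readings scatter around the truth, but their average
# tends toward the true distance as n grows.
print(min(readings), max(readings), avg)
```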

Simulating Robot Sensor Model

The robot's sensor model is responsible for incorporating the robot's sensor readings. The readings might be an image taken from a camera, data returned by a laser scanner, data gathered by ultrasonic sensors, and so on. Whatever the data is, the sensor model must analyze it and produce a judgement of where the robot is. In other words, the sensor model is the robot's perception of the environment around it. Just as humans look around to figure out where they are, robots do the same, using sensor models. There is one important precondition: the sensor model must have a map of the environment.

A robot's sensors are noisy, meaning that measurements are never exactly accurate. The robot's internal model must account for this when estimating its location. Click to place the robot, then hold and move to change the robot's direction. You can also use the mini-map on the right to pan to the area of the map you want to look at.

As mentioned before, the sensor model can only estimate the position of the robot, due to this inaccuracy. The sensor model calculates the probability of the robot being at a given position based on its sensor readings. Click "Color the map" to compute the probability of the robot being at each location on the map, at the chosen density in pixels.
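The probability the sensor model computes can be sketched as follows: for a candidate pose, predict the ranges a scan should return there, then score the actual readings under a Gaussian noise model. In a real system the expected ranges come from ray-casting the map from the candidate pose; the function and numbers below are an illustrative stand-in for that.

```python
import math

def scan_likelihood(observed, expected, noise_std=0.2):
    """Score how likely a laser scan is, given the ranges predicted
    for a candidate pose (up to a normalizing constant)."""
    log_p = 0.0
    for z, z_hat in zip(observed, expected):
        log_p -= (z - z_hat) ** 2 / (2 * noise_std ** 2)
    return math.exp(log_p)

# Hypothetical example: predicted ranges for two candidate poses.
observed = [2.0, 3.1, 4.0]
good_pose = scan_likelihood(observed, [2.0, 3.0, 4.0])  # matches well
bad_pose = scan_likelihood(observed, [1.0, 3.5, 5.0])   # matches poorly
print(good_pose > bad_pose)
```

Coloring the map amounts to evaluating this score over a grid of candidate positions.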


Inherently Unpredictable Environment

Let's take a self-driving car as an example. While it is moving, it cannot predict the weather conditions, the motion of pedestrians and other cars, the state of traffic lights, or accidents, none of which are given in the map.

Model Limitations

Some uncertainty is caused by the robot’s software. All internal models of the world are approximate: models are abstractions of the real world, and as such they only partially capture the underlying physical processes of the robot and its environment.

Probabilistic Robotics

Probabilistic robotics is a relatively recent approach to robotics that explicitly accounts for the uncertainty in robot perception and action. The key idea is to represent information by probability distributions over a whole space of guesses, instead of a single best guess. By doing so, probabilistic algorithms can represent ambiguity and degree of belief in a mathematically sound way.


Monte Carlo Localization

Given that all actuators and sensors have some degree of noise, how should we use them? Over time, people have developed algorithms and mathematical models that take these uncertainties into account. Monte Carlo Localization (MCL) is one such algorithm.

The MCL algorithm can be applied to robots meeting the following conditions:

  • The robot is in a known area, i.e., it has a map of the environment.
  • The robot has some kind of sensor.
  • The robot can move around freely.
Trivia: Amazon's warehouse robots are an example of robots meeting these conditions.

The MCL algorithm works by maintaining many possible locations of the robot. As the robot moves and senses, MCL incorporates the additional information gathered and updates each of its possible locations. The possible locations that are consistent with the robot's motion and measurement data are kept, while inconsistent ones are discarded.

Each guess (also called a hypothesis) is represented by a "particle" consisting of a location (such as x, y coordinates) and the direction the robot is facing. Each particle is also associated with a weight, which represents how likely it is that the particle matches the robot's true location (and direction).

Overview of MCL at each time step:

  1. If the robot moves, the algorithm updates each particle's location and direction according to the robot's motion (robots often have internal motion sensors that report approximately how much the robot has moved). When updating, the uncertainty of the motion readings must be taken into account.
  2. If the robot performed a scan of the environment, the algorithm updates each particle's weight according to the sensor readings. The new weight reflects how likely it is that the robot is at that location given the scanned data.
  3. The algorithm creates a new set of particles of equal size by selecting particles from the old set according to their weights; particles with higher weights are more likely to be selected. This is called the resampling process.
  4. If all particles are clustered around a small region, that region is a good bet for where the robot actually is.
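The steps above can be sketched as a bare-bones MCL loop. This is a sketch under strong simplifications, not a production implementation: the robot lives on a 1-D line, its only sensor reads the (noisy) range to a beacon at position 0, and all noise magnitudes are assumed values.

```python
import math
import random

def mcl_step(particles, move, reading, rng,
             motion_noise=0.2, sensor_noise=0.5):
    """One MCL update for a robot on a line, sensing its range to a beacon at 0."""
    # Step 1: motion update -- shift each particle by the reported motion,
    # with sampled noise, since odometry is uncertain.
    particles = [p + move + rng.gauss(0.0, motion_noise) for p in particles]
    # Step 2: measurement update -- weight each particle by how well its
    # predicted beacon range (its distance from 0) matches the actual reading.
    weights = [math.exp(-(reading - abs(p)) ** 2 / (2 * sensor_noise ** 2))
               for p in particles]
    # Step 3: resampling -- draw an equal-sized new set,
    # favoring high-weight particles.
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(7)
true_pos = 3.0
particles = [rng.uniform(0.0, 10.0) for _ in range(2000)]  # initial guesses
for _ in range(8):
    true_pos += 1.0                            # the robot moves 1 m per step
    reading = true_pos + rng.gauss(0.0, 0.5)   # noisy range to the beacon
    particles = mcl_step(particles, 1.0, reading, rng)
# Step 4: the particle cloud clusters near the true position.
estimate = sum(particles) / len(particles)
print(round(true_pos, 1), round(estimate, 2))
```

After a few move-and-sense cycles, the initially uniform particle set collapses toward the robot's true position.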

There are many applications of this algorithm. Consider, for example, warehouse robots that deliver objects inside a building. Such a robot needs to keep track of where it is while moving, and to adjust its motion if it wanders off its path. GPS is not a good choice, since most civilian GPS devices are only accurate to about 16 ft under open sky, and GPS accuracy is even worse inside a building due to signal blockage.

Common Q&A

Why do we need to take uncertainties of odometry readings into account?

Noise exists in a robot's motion sensors. If a motion sensor reports a movement of 5 meters, the actual distance moved may not be exactly 5 meters; it is probably close to 5, say 5.1, 4.9, or 4.8. The accuracy of the motion sensors determines how far the readings deviate from reality. In practice, it turns out that taking this noise into account produces better results.

How do we take uncertainties of odometry into account?

The mathematical model must be aware of such uncertainty. The model that handles it is often called a motion model. When updating particles according to the robot's motion, the motion model draws a sample from all possible outcomes. For example, if the robot's odometry reports a forward movement of 10 meters, the motion model would draw a sample according to the pattern of the noise: for example 9.8 m, 10.3 m, or 10.1 m. It is extremely unlikely to draw a sample of 3 m, because the motion model knows the robot is not that inaccurate.
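A minimal motion-model sampler for the 10 m example above, assuming Gaussian noise whose standard deviation is an illustrative 2% of the reported distance:

```python
import random

def sample_motion(reported_dist, noise_frac=0.02, rng=random):
    """Draw one plausible true distance, given an odometry reading."""
    return rng.gauss(reported_dist, noise_frac * reported_dist)

rng = random.Random(5)
samples = [sample_motion(10.0, rng=rng) for _ in range(5)]
print([round(s, 2) for s in samples])  # values clustered near 10 m
```

Values like 3 m are effectively never drawn, because they lie dozens of standard deviations from the reported motion.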

Why do we need to resample? Why don't we just keep the best ones?

The weights of the particles represent the robot's judgement of how likely each particle is to be the true location. Just because the robot thinks a particle is a good guess doesn't mean that particle actually is good. If you keep only the single best particle and drop the rest, it is quite possible that you are throwing away the genuinely good particles. Resampling maintains a kind of natural-selection process: the hope is that some of the good particles will survive and propagate, giving us a good result.
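The difference from "keep only the best" can be seen in a small sketch: weighted random selection keeps several plausible hypotheses alive while still favoring high-weight ones. The particle labels and weights here are purely illustrative.

```python
import random
from collections import Counter

particles = ["A", "B", "C", "D"]
weights = [0.5, 0.3, 0.15, 0.05]  # illustrative normalized weights

rng = random.Random(0)
# Resample 1000 particles in proportion to their weights.
resampled = rng.choices(particles, weights=weights, k=1000)
counts = Counter(resampled)
# High-weight particles dominate the new set, but lower-weight hypotheses
# survive too, instead of being discarded outright.
print(counts)
```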