Lab 10: Grid Localization using Bayes Filter

Objective

The goal of this lab was to implement grid-based localization using the Bayes filter algorithm. A robot operating in a bounded 2D environment estimated its position and orientation by incorporating noisy odometry and sensor data to iteratively update a belief distribution over a discretized grid of possible states.

Bayes Filter Summary

The Bayes filter is a probabilistic algorithm that estimates a robot's position by combining uncertain motion and sensor data. It maintains a belief distribution over possible states and updates this belief in two steps:

1. Prediction step: propagate the belief through the motion model using the control input (odometry). This spreads the distribution out and increases uncertainty.
2. Update step: weight the predicted belief by the likelihood of the current sensor measurements. This sharpens the distribution and reduces uncertainty.

By repeating these steps, the Bayes filter can accurately localize a robot even with noisy data and incomplete information.
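The predict/update loop can be illustrated with a minimal sketch on a hypothetical 1D grid (5 cells, a made-up motion kernel, and made-up measurement likelihoods; none of these values come from the lab itself):

```python
import numpy as np

def predict(bel, motion):
    """Prediction step: push the belief through a noisy motion model.
    `motion` is a hypothetical kernel mapping cell offsets to probabilities."""
    bel_bar = np.zeros_like(bel)
    for x_prev, p_prev in enumerate(bel):
        for dx, p_move in motion.items():
            x = x_prev + dx
            if 0 <= x < len(bel):
                bel_bar[x] += p_prev * p_move
    return bel_bar / bel_bar.sum()

def update(bel_bar, likelihood):
    """Update step: weight each cell by the measurement likelihood, normalize."""
    bel = bel_bar * likelihood
    return bel / bel.sum()

bel = np.full(5, 0.2)                         # uniform prior over 5 cells
bel = predict(bel, {0: 0.2, 1: 0.8})          # noisy "move one cell right"
bel = update(bel, np.array([0.05, 0.1, 0.7, 0.1, 0.05]))  # sensor favors cell 2
```

After one iteration the belief already concentrates on the cell favored by both the motion command and the measurement.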

Pre-lab

I set up the simulator and completed two tasks: open-loop and closed-loop control.

Open Loop Control

I programmed the robot to follow a set of velocity commands and trace a square loop. The robot's odometry and ground truth path were plotted for comparison. As expected, open-loop control was inconsistent—small odometry errors accumulated, causing the robot to deviate from the intended shape.
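An open-loop square can be expressed as a fixed command list; the sketch below assumes a hypothetical `(linear m/s, angular rad/s, duration s)` command interface and made-up speeds and timings, since the exact values are not given above:

```python
import math

FWD_SPEED = 0.2                       # assumed forward speed, m/s
TURN_SPEED = 1.2                      # assumed turn rate, rad/s
SIDE_TIME = 2.0                       # drive each side for a fixed duration
TURN_TIME = (math.pi / 2) / TURN_SPEED  # time for a 90-degree turn in place

def square_commands(sides=4):
    """Open-loop command list tracing a square: drive a side, turn 90
    degrees, repeat. No feedback, so timing errors accumulate per segment."""
    cmds = []
    for _ in range(sides):
        cmds.append((FWD_SPEED, 0.0, SIDE_TIME))   # straight segment
        cmds.append((0.0, TURN_SPEED, TURN_TIME))  # 90-degree turn
    return cmds
```

Because no state is measured, any error in a turn or segment carries into every later one, which is exactly the drift seen in the plotted odometry.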

Odometry vs. ground truth plot for the open-loop square

Closed Loop Control

I implemented a simple controller for obstacle avoidance. The robot rotated in place at 1.2 rad/s when an object was detected within 0.4 meters, and otherwise moved forward at 0.2 m/s. While this worked well in open areas, it sometimes failed in corners or tight spaces due to having only one front-facing sensor. Adding a backup behavior or side sensors would improve reliability.
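The behavior described above amounts to a bang-bang controller on a single front range reading. A minimal sketch, using the thresholds and speeds stated above and returning a hypothetical `(linear m/s, angular rad/s)` command tuple:

```python
def avoid_obstacle(front_distance_m):
    """Closed-loop obstacle avoidance with one front-facing sensor:
    rotate in place at 1.2 rad/s if an object is within 0.4 m,
    otherwise drive forward at 0.2 m/s."""
    if front_distance_m < 0.4:
        return (0.0, 1.2)   # spin in place until the path clears
    return (0.2, 0.0)       # path clear: go straight
```

With only one sensor, a corner can trap the robot in a turn-detect-turn cycle, which is why side sensors or a reverse behavior would help.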

Closed-loop obstacle-avoidance controller diagram

Lab Tasks

The main task was to perform grid localization on the sample trajectory provided in the notebook using the Bayes filter framework.

compute_control

This function decomposes the motion between two poses into three components: delta_rot_1 (initial rotation toward the goal), delta_trans (straight-line translation), and delta_rot_2 (final rotation to the goal heading). These are returned in degrees and meters for use in the prediction step.
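A minimal sketch of this decomposition, assuming poses are `(x, y, yaw)` tuples with yaw in degrees (the angle-wrapping helper is an assumption, not taken from the lab code):

```python
import math

def normalize_angle(a):
    """Wrap an angle in degrees into [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    """Decompose the motion between two (x, y, yaw-deg) poses into
    rot1 (deg), trans (m), rot2 (deg) for the odometry motion model."""
    x0, y0, th0 = prev_pose
    x1, y1, th1 = cur_pose
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
    delta_rot_1 = normalize_angle(heading - th0)       # turn toward the goal
    delta_trans = math.hypot(x1 - x0, y1 - y0)         # straight-line travel
    delta_rot_2 = normalize_angle(th1 - th0 - delta_rot_1)  # final heading fix
    return delta_rot_1, delta_trans, delta_rot_2
```

For example, moving from `(0, 0, 0)` to `(0, 1, 90)` gives a 90-degree initial turn, 1 m of translation, and no final turn.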

compute_control function diagram

odom_motion_model

This function estimates how likely it is that the robot moved from a previous to a current pose, given a control input. It compares estimated and commanded motions using Gaussian distributions and returns the combined probability.
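A sketch of that comparison, evaluating each motion component under an independent Gaussian centered on the commanded value. The noise parameters `sigma_rot` and `sigma_trans` are hypothetical tuning values, and `compute_control` is repeated here so the block is self-contained:

```python
import math

def gaussian(x, mu, sigma):
    """Probability density of x under N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normalize_angle(a):
    return (a + 180.0) % 360.0 - 180.0

def compute_control(cur_pose, prev_pose):
    x0, y0, th0 = prev_pose
    x1, y1, th1 = cur_pose
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
    rot1 = normalize_angle(heading - th0)
    trans = math.hypot(x1 - x0, y1 - y0)
    rot2 = normalize_angle(th1 - th0 - rot1)
    return rot1, trans, rot2

def odom_motion_model(cur_pose, prev_pose, u, sigma_rot=15.0, sigma_trans=0.1):
    """P(cur_pose | prev_pose, u): compare the motion implied by the two
    poses with the control u = (rot1 deg, trans m, rot2 deg), treating
    the three components as independent Gaussians."""
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    return (gaussian(rot1, u[0], sigma_rot)
            * gaussian(trans, u[1], sigma_trans)
            * gaussian(rot2, u[2], sigma_rot))
```

A transition that matches the control exactly should score higher than one that undershoots the commanded translation.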

odom_motion_model function diagram

prediction_step

This step updates the prior belief using the odometry motion model. It considers transitions between every pair of grid states and uses motion probabilities to compute the new belief estimate, which is then normalized.
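The structure of that double loop over grid states can be sketched as follows, with the grid flattened into a list of poses and the motion model passed in as a function. Skipping cells whose prior is negligible is a common speed-up assumption, not necessarily part of the lab's implementation:

```python
import numpy as np

def prediction_step(bel, poses, u, motion_model):
    """For every (prev, cur) pair of grid states, propagate the prior
    belief through the motion model under control u, then normalize."""
    bel_bar = np.zeros_like(bel)
    for j, cur in enumerate(poses):
        for i, prev in enumerate(poses):
            if bel[i] > 1e-4:   # skip negligible priors (assumed speed-up)
                bel_bar[j] += bel[i] * motion_model(cur, prev, u)
    return bel_bar / bel_bar.sum()
```

With a full 3D grid this nested loop dominates the runtime, which is why pruning low-probability priors matters in practice.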

prediction_step diagram

sensor_model

This function calculates the likelihood of each of 18 individual sensor readings by comparing actual vs. expected values at a pose, using Gaussian noise. The result is an array of 18 likelihoods.
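A vectorized sketch of that per-reading comparison, assuming a hypothetical sensor noise standard deviation `sigma` in meters:

```python
import numpy as np

def sensor_model(obs, expected, sigma=0.1):
    """Likelihood of each sensor reading under Gaussian noise centered
    on the expected value at the pose. Returns one likelihood per
    reading (18 of them in this lab). sigma is an assumed noise level."""
    obs = np.asarray(obs, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.exp(-0.5 * ((obs - expected) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
```

Readings that match the map's expected ranges score high; readings far from the expected value are heavily penalized.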

sensor_model diagram

update_step

In this step, the predicted belief is refined using sensor data. Each grid cell’s likelihood is calculated using the sensor model and multiplied with its prior belief, then normalized to produce the updated belief.
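That multiply-and-normalize step can be sketched as below, where each cell carries the 18 per-reading likelihoods from the sensor model and the joint likelihood is their product (assuming independent readings):

```python
import numpy as np

def update_step(bel_bar, per_cell_likelihoods):
    """Scale each cell's predicted belief by the joint likelihood of its
    sensor readings (product over the 18 per-reading likelihoods),
    then normalize so the belief sums to 1."""
    joint = np.array([np.prod(l) for l in per_cell_likelihoods])
    bel = bel_bar * joint
    return bel / bel.sum()
```

One practical caveat: multiplying 18 likelihoods can underflow toward zero, so normalizing every step (or working in log space) keeps the belief numerically healthy.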

update_step diagram

Results

Video:

Plot:

Odometry, Belief, and Ground Truth paths

The odometry-only path (red) was highly inaccurate and drifted significantly. The belief (blue), however, closely followed the true path (green) for most of the run. Even when the belief deviated—especially during turns or at the center of the grid—it quickly corrected itself. This demonstrates the Bayes filter’s ability to localize accurately despite poor odometry.

Discussion

The Bayes filter outperformed odometry alone by quickly correcting drift using sensor data. Unlike a Kalman filter, which assumes a Gaussian belief and works best with linear models, the grid-based Bayes filter can represent arbitrary, multimodal distributions over discrete states, making it well suited to this nonlinear localization problem.


Back to Main Page