Today's Goal: As a robot moves through the world, it can keep track of its relative position by integrating its actions (motor commands) with its sensor readings (wheel encoders and accelerometer) using a Kalman filter. In this lab we will use a ROS library package, robot_pose_ekf, to keep track of the robot's position over time.
Start your repository
Part 1: Localization
First, look at the robot_pose_ekf.launch.xml file. This starts the EKF node and configures the default values. The package can use data from the wheel encoders (the odom_used parameter), the IMU (imu_used), and the camera (vo_used). By fusing position estimates from these different sources and modalities, the EKF can reduce the uncertainty of its pose estimates. Also check out navigation.launch.xml to see how the EKF launch file is included and can be re-configured. You will only have to run navigation.launch.xml; it starts the EKF node for you.
Next, let's look at localization.py in your repository's starter code to see how we can use robot_pose_ekf to determine the robot's position and orientation. We're starting from the Lab 1 square code and adding on.
In the node's constructor method, there are two important pieces we've added. First, we subscribe to the robot_pose_ekf/odom_combined topic, which gives us geometry_msgs/PoseWithCovarianceStamped messages to process. You can read more about the robot_pose_ekf package here. Notice that the pose is sent as a quaternion. In the process_ekf method, we convert this to Euler angles and ignore everything except the rotation in the floor plane (yaw), since the turtlebot only moves in 2D.
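Concretely, the constructor and callback look roughly like this (a minimal sketch; the class name, stored fields, and exact callback body are assumptions about the starter code, while the topic, message type, and euler_from_quaternion call are standard ROS):

```python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import euler_from_quaternion


class LocalizationNode(object):
    def __init__(self):
        self.x, self.y, self.yaw = 0.0, 0.0, 0.0
        # Subscribe to the EKF's fused pose estimate.
        rospy.Subscriber('robot_pose_ekf/odom_combined',
                         PoseWithCovarianceStamped, self.process_ekf)

    def process_ekf(self, msg):
        """Store the 2D pose (x, y, yaw) from the EKF estimate."""
        pose = msg.pose.pose
        self.x = pose.position.x
        self.y = pose.position.y
        # The orientation arrives as a quaternion; convert it to Euler angles
        # and keep only yaw (rotation in the floor plane), since the turtlebot
        # only moves in 2D.
        q = pose.orientation
        _, _, self.yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
```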
Second, we reset the odometry for the base. This sets the current position to (0, 0) and, importantly, resets the uncertainty for the EKF.
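How the reset is done depends on the base. A minimal sketch, assuming a Kobuki-style turtlebot base that accepts an empty message on its reset topic (check your robot's driver for the exact topic name):

```python
import rospy
from std_msgs.msg import Empty

# Publishing a single Empty message zeroes the wheel-encoder odometry,
# which lets the EKF start over from (0, 0) with low uncertainty.
reset_pub = rospy.Publisher('mobile_base/commands/reset_odometry',
                            Empty, queue_size=1)
rospy.sleep(1.0)          # give the publisher time to connect
reset_pub.publish(Empty())
```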
In a group of 4 with the other pair on your robot, run the code and observe what happens. The node will print its position and orientation to the console after each side of the square. Mark where the robot actually is each time it finishes a side. What happens to the estimates, and to the robot's actual position, over time?
Part 2: Homing
Now that we see how localization with EKF is working, let's use that information to make the robot go back to its home position (0, 0) when the bumper is pressed. To do this, the robot needs to (1) re-orient itself toward the home position, and (2) move straight home.
That means first figuring out the angle it needs to turn to head home. You can figure out the geometry of this from a right triangle, but we'll let you off the hook and tell you how to do that:
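A sketch of the computation: the vector from the robot's current position (x, y) back to home at (0, 0) is (-x, -y), and atan2 turns that vector into the heading the robot needs to face:

```python
import math

def heading_to_home(x, y):
    """Heading (radians, in the odometry frame) from (x, y) back to (0, 0)."""
    # The vector pointing from the robot to home is (0 - x, 0 - y).
    return math.atan2(-y, -x)
```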
The robot can keep turning until its orientation matches the desired heading (simple closed-loop control).
After orienting, move the right distance in a straight line to reach the origin. (We won't worry yet about making adjustments during this movement; see the sketch below.)
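Putting the two steps together, a homing method might look roughly like this (a sketch only: the velocity topic, speeds, and tolerances are assumptions, and self.x, self.y, self.yaw are assumed to be kept up to date by process_ekf):

```python
import math
import rospy
from geometry_msgs.msg import Twist


def angle_diff(target, current):
    """Smallest signed difference between two angles, in (-pi, pi]."""
    return math.atan2(math.sin(target - current), math.cos(target - current))


# Sketch of a method on the localization node class from Part 1.
def go_home(self):
    cmd_pub = rospy.Publisher('cmd_vel_mux/input/navi', Twist, queue_size=1)
    rate = rospy.Rate(10)

    # Step 1: turn in place until we face home (simple closed loop on yaw).
    target_yaw = math.atan2(-self.y, -self.x)
    while abs(angle_diff(target_yaw, self.yaw)) > 0.05 and not rospy.is_shutdown():
        turn = Twist()
        turn.angular.z = 0.5 if angle_diff(target_yaw, self.yaw) > 0 else -0.5
        cmd_pub.publish(turn)
        rate.sleep()

    # Step 2: drive the straight-line distance back to (0, 0), open loop.
    distance = math.sqrt(self.x ** 2 + self.y ** 2)
    drive = Twist()
    drive.linear.x = 0.2                                  # m/s (an assumption)
    end_time = rospy.Time.now() + rospy.Duration(distance / drive.linear.x)
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        cmd_pub.publish(drive)
        rate.sleep()
    cmd_pub.publish(Twist())                              # stop
```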
Part 3: Waypoint Navigation
If you're running out of time to implement this, look at waypoint_nav.py in the solution repository so you at least have a chance to compare it to the open-loop version.
As you saw in Part 1, your robot's movement can be pretty inaccurate. When heading home, if you simply set your robot's heading once and go, you'll probably miss the home position. That means you have to close the loop! As your robot orients itself and heads home, you'll need to keep adjusting its heading so it stays pointed at the goal. (Proportional control is a big benefit here.)
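For example, instead of turning once and driving blind, you can recompute the heading error on every control cycle and make the angular velocity proportional to it. A sketch (the gain, speed, and stopping distance below are assumptions, not tuned values):

```python
import math

def home_velocities(x, y, yaw, k_ang=1.5, lin_speed=0.2, stop_dist=0.05):
    """Return (linear, angular) commands that steer from (x, y, yaw) toward (0, 0).

    Closed-loop version: the heading error is recomputed each cycle and the
    angular command is proportional to it, so drift gets corrected en route.
    """
    distance = math.sqrt(x ** 2 + y ** 2)
    if distance < stop_dist:
        return 0.0, 0.0                       # close enough to home
    heading_error = math.atan2(-y, -x) - yaw
    # Wrap the error into (-pi, pi] so the robot always turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return lin_speed, k_ang * heading_error
```

Call this each loop iteration with the latest EKF pose, put the result into a Twist, and publish it.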
The turtlebot is now using the origin as a waypoint to navigate to. You can also drive the entire square with waypoints like this: just as we sent the robot "home" in Part 2, we can set each corner of the square as a waypoint for the robot to visit in sequence.
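For instance, a 1 m square can be expressed as a list of (x, y) waypoints that the robot visits in order (the coordinates and the go_to_waypoint helper are hypothetical placeholders for your own code):

```python
# Corners of a 1 m square in the odometry frame, ending back at home (0, 0).
waypoints = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]

for wx, wy in waypoints:
    # Reuse the Part 2 homing logic, but aim at (wx, wy) instead of the origin.
    go_to_waypoint(wx, wy)
```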
Compare the accuracy of waypoint navigation to the open loop squares in Part 1. How much does the localization drift over time? How does drift change when moving straight compared to turning? How does the accuracy of the estimates change if you include or exclude the different sensors used in the EKF (IMU, odometry, and camera)?