Today's Goal: The objective of this lab is to use the robot's camera. The Turtlebot has a special camera that provides both color AND depth images. We will be using the OpenCV image-processing library quite extensively.
Part 0: Getting started
We provide you with a launch file and two scripts (one of them is view_images.py). The first script will show you a depth image as a grayscale picture, and the second will show you a color ("BGR") image.
In order to run the scripts, you will need to run the launch file we provide, which starts the ROS nodes corresponding to the ASUS camera.
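For reference, launch files are started with the roslaunch command; substitute the package and launch-file names we give you (the placeholders below are not the real names):
roslaunch <<PACKAGE_NAME>> <<LAUNCH_FILE>>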
Part 1a: Using the depth camera
We provide you with a script that displays the depth image. Read through the code first.
Ask the TFs any questions you have about how the code works. Also consult the OpenCV documentation to see what the different functions do.
Run this script on the Turtlebot directly, together with the other team on your robot. When you run this script, you will see a visualization of the depth information. (Hint: the visualization is very slow over SSH, so for debugging you should run the script with visualization on the Turtlebot computer directly.)
Part 1b: Thresholding the Depth Image (relevant for Pset 2)
In Pset 2 you will be using the depth camera to avoid obstacles in the room and safely wander - similar to what you did in Pset 1, but using the depth camera rather than the bumpers. Here you will write some code to threshold the depth image so that only nearby obstacles remain. The places where you need to add and edit code are marked in the script.
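To give you an idea of what this looks like, here is a minimal sketch of thresholding a depth image with NumPy. It assumes the depth image has already been converted by cv_bridge into a float32 NumPy array in meters (the exact encoding depends on which depth topic the code subscribes to), and the 0.6 m cutoff and function name are just examples:

import numpy as np

def threshold_depth(depth_image, max_dist=0.6):
    # Treat NaNs (pixels with no reading) as 0 so the comparison below ignores them.
    depth = np.nan_to_num(depth_image)
    # Keep only pixels with a valid reading closer than max_dist.
    close = np.logical_and(depth > 0, depth < max_dist)
    # Return a 0/255 uint8 mask that cv2.imshow() can display directly.
    return close.astype(np.uint8) * 255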
Part 2: Using the color camera (relevant for Pset 3)
Read through the color image script first. Ask the TFs any questions you have about how the code works. Also consult the OpenCV documentation to see what the different functions do.
Run this script on the Turtlebot directly. When you run this script, you will see the color image that the robot sees. (Hint: the visualization is very slow over SSH, so for debugging it can be useful to run the script with visualization on the Turtlebot computer directly.)
Your assignment is to threshold this image to only show blue objects.
You will need to:
- Convert the BGR image to HSV colorspace (hue-saturation-value). You can use the function cvtColor() to convert the image's colorspace. This is not necessary, but it makes it easier to threshold based on color ("hue") without being affected by lighting. (NOTE: OpenCV uses BGR, not RGB, by default.)
- Apply a blur to make the colors more uniform. cv2 provides several blur functions you can use here (for example, GaussianBlur()).
- Use the cv2 function inRange() to threshold the image. You'll need to supply a lower and an upper bound; look up HSV values and see how to specify a range for blue. This will create a mask you can apply to your original image.
- To display only the blue objects, apply a bitwise AND between the original image and the newly created mask; the cv2 function to do so is called bitwise_and(). (A sketch combining all of these steps appears after this list.)
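Here is a minimal sketch that chains these four steps together. The HSV bounds for "blue" are rough guesses you will need to tune, and the blur kernel size is just an example:

import cv2
import numpy as np

def find_blue(bgr_image):
    # Smooth out pixel-level color noise before thresholding.
    blurred = cv2.GaussianBlur(bgr_image, (5, 5), 0)
    # Convert BGR -> HSV so we can threshold on hue.
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    # Approximate bounds for blue (OpenCV hue runs 0-179); tune for your lighting.
    lower_blue = np.array([100, 100, 50])
    upper_blue = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    # Keep only the pixels of the original image where the mask is set.
    blue_only = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
    return mask, blue_only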
Part 3: Optional Extensions (relevant for Pset 3)
For your third and fourth psets, you'll likely need to use bounding boxes to help you identify objects in a scene. Using the code you've written to threshold the image, you should now try to draw a bounding box around the largest blue object in the scene.
- Find all the contours in your thresholded image. You may use the function findContours() to do so.
- Find the largest contour. To find a contour's area, you may use the function contourArea().
- Draw a bounding rectangle around it. You may use the function boundingRect() to do so. (A sketch of these three steps appears below.)
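A minimal sketch of those three steps, continuing from the mask produced in Part 2. Note that findContours() returns (contours, hierarchy) in OpenCV 2.x and 4.x but (image, contours, hierarchy) in 3.x, so adjust the unpacking to your installed version:

import cv2

def draw_largest_box(bgr_image, mask):
    # Find the external contours of the thresholded regions.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return bgr_image  # nothing blue in view
    # Pick the contour with the largest area.
    biggest = max(contours, key=cv2.contourArea)
    # Compute and draw its axis-aligned bounding rectangle.
    x, y, w, h = cv2.boundingRect(biggest)
    cv2.rectangle(bgr_image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return bgr_image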
An analogous tutorial is available here: OpenCV Bounding Box Tutorial.
If you are done with the lab ahead of time, feel free to explore some additional OpenCV functions. In particular, it may be helpful to save BGR and depth images so you can work on your code away from the Turtlebots. To do so, the OpenCV functions imread() and imwrite() will be helpful.
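For example (the filenames are made up, and we use NumPy's save/load for the depth array since imwrite() does not preserve floating-point depth data directly):

import cv2
import numpy as np

def save_frames(cv_bgr_image, cv_depth_image):
    # Call this from your callbacks with the converted OpenCV arrays.
    cv2.imwrite("bgr_frame.png", cv_bgr_image)   # color image saves directly to PNG
    np.save("depth_frame.npy", cv_depth_image)   # raw depth array, kept lossless

def load_frames():
    # Later, away from the robot, read the saved frames back in.
    return cv2.imread("bgr_frame.png"), np.load("depth_frame.npy")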
Also note that the code we're providing imports some libraries which we don't use (e.g. Twist()). This is so you can use it as a template for jumping into Pset 2 if you like.
There's also some weirdness with cv_bridge; this converts the image from the ROS image format into the OpenCV format we want. Additionally, you'll notice the subscriber has more arguments than we used when subscribing to the Kobuki base sensors; the buff_size argument safeguards against the subscriber being overwhelmed by large image messages arriving all at once.
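For orientation, here is a minimal sketch of how CvBridge and the subscriber typically fit together; this is not the provided script itself, the topic name is just a common one for this camera, and the buff_size value is a generous example:

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS Image message into an OpenCV (NumPy) BGR array.
    cv_image = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("camera", cv_image)
    cv2.waitKey(1)

rospy.init_node("image_viewer")
# The large buff_size keeps the subscriber from choking on big image messages.
rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback,
                 queue_size=1, buff_size=2**24)
rospy.spin()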
Lastly, we've selected ROS topics to subscribe to that correspond to a BGR image and a depth image. However, this data is available in a number of different forms! You can see what other topics exist by running:
rostopic list
You can then check to see if the topics have data by running:
rostopic echo <<TOPIC_NAME>>
And, lastly, you can check the object type of a topic by running:
rostopic type <<TOPIC_NAME>>