Today's goal: This lab has two main objectives: (1) using AR tags, which are landmarks we can place in the environment, and (2) using sounds and text-to-speech so that your robot can talk back.
Part 1: Using AR Tags
OK, so robot vision is hard. Let's introduce some position markers into the world to help ourselves out.
We're going to be using the ar_track_alvar tag library. Read more about it here: http://wiki.ros.org/ar_track_alvar
Your robot should already have this library installed.
[[ If not, then one person/team per robot will need to install this library. From the turtlebot user, run: sudo apt-get install ros-indigo-ar-track-alvar ]]
Now if you run
roscd ar_track_alvar
your shell should drop you into the ar_track_alvar package directory, which confirms the package is installed.
Take a look at the content of the launch file in your starter code,
ar_tags.launch.xml. This launches 3dsensor.launch and sets up the AR tag tracking.
OK, now if you run:
and if you run:
you should see the ROS topic /ar_pose_marker appear.
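The exact commands depend on your starter package; a plausible sequence, assuming the starter code lives in a package named lab2 (substitute your own package name), is to launch the AR tag tracking and then list the topics:
roslaunch lab2 ar_tags.launch.xml
rostopic list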
Let's have a look at that topic. Run
rostopic echo /ar_pose_marker
and look at the output without any markers in the scene. Then, come grab one of the tags, and check out the output!
The topic gives you AlvarMarkers messages. Read the message documentation to understand the contents of each AlvarMarker inside it. In particular, notice that you can get the tag's ID (0-17 today) and its pose, which has the geometry_msgs/PoseStamped type.
We'll deal with orientation later (you'll need it for the final project). For now, we'll use the IDs and positions.
Your goals for this are to:
- When a tag is seen, calculate how far away it is from the robot.
- Save the tag IDs and distances for use outside of the process_ar_tags method.
- Remember the tag IDs even after you stop seeing them.
- Create a method that prints out these distances and all of the seen tag IDs in your main loop. (In the future, a method like this could do more with the tag information; a minimal sketch follows below.)
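Here is one way the pieces could fit together. This is a sketch only, not the official solution: it assumes the messages live in ar_track_alvar_msgs (on some installs they are under ar_track_alvar.msg instead), and the node and class names are placeholders.

#!/usr/bin/env python
import math
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

class ARTagTracker(object):
    def __init__(self):
        # Maps tag ID -> last seen distance (meters). Entries persist even
        # after the tag leaves the camera's view.
        self.tag_distances = {}
        rospy.Subscriber('/ar_pose_marker', AlvarMarkers, self.process_ar_tags)

    def process_ar_tags(self, msg):
        for marker in msg.markers:
            p = marker.pose.pose.position
            # Straight-line distance from the camera to the tag.
            self.tag_distances[marker.id] = math.sqrt(p.x ** 2 + p.y ** 2 + p.z ** 2)

    def print_tags(self):
        for tag_id, dist in sorted(self.tag_distances.items()):
            print('Tag %d last seen %.2f m away' % (tag_id, dist))

if __name__ == '__main__':
    rospy.init_node('ar_tag_tracker')
    tracker = ARTagTracker()
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        tracker.print_tags()
        rate.sleep()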
Things to consider:
- What makes the tags easier or harder to see?
- How do you handle false positives of AR tag identification?
Part 2: Making sounds and text-to-speech
Let's get these robots speaking!
First, let's make sure audio_common is installed. From the turtlebot user, run
sudo apt-get install ros-indigo-audio-common
(It might already be installed.)
For this lab, you can reuse your lab 1 directory and codebase. Next, launch the sound_play node alongside your other launches (you may also add it to any of your launch files!):
roslaunch sound_play soundplay_node.launch
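If you would rather fold it into one of your own launch files, a standard roslaunch include should work (this assumes the launch file sits in sound_play's launch directory, as the command above implies):
<include file="$(find sound_play)/launch/soundplay_node.launch"/>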
Add the following import statements:
from kobuki_msgs.msg import Sound
from sound_play.msg import SoundRequest
from sound_play.libsoundplay import SoundClient
Now, in your initialization, add:
self.soundhandle = SoundClient()
And create a publisher for the sound topic:
self.beep = rospy.Publisher('/mobile_base/commands/sound', Sound, queue_size=10)
You can now make sounds using
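For example, one option beeps through the Kobuki base and the other plays a built-in sound through sound_play. This is a sketch using the beep publisher and soundhandle defined above; the particular Sound and SoundRequest constants are just examples.
self.beep.publish(Sound(value=Sound.BUTTON))        # short beep from the Kobuki base itself
self.soundhandle.play(SoundRequest.NEEDS_PLUGGING)  # built-in sound played by the sound_play node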
Can you notice any difference between the two options?
Finally, we can also get the machines talking via the laptop's speakers, using:
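A minimal text-to-speech call, using the soundhandle from above (the phrase is just an example):
self.soundhandle.say('Hello, I am a turtlebot')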
Final goal: Take your lab 1 solution for driving in squares, make a sound before each turn, and have the robot use text-to-speech to announce each completed square.
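One possible shape for the main loop, as a sketch only: drive_edge and turn_90 stand in for whatever helpers your lab 1 code already has, and the sound constants are arbitrary choices.
for square in range(3):                               # however many squares you drive
    for edge in range(4):
        self.drive_edge()                             # your lab 1 straight segment
        self.beep.publish(Sound(value=Sound.BUTTON))  # sound before each turn
        self.turn_90()                                # your lab 1 90-degree turn
    self.soundhandle.say('Finished square number %d' % (square + 1))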
You can connect external speakers to your TurtleBot laptop via Bluetooth to make the robot louder.