Gazebo image topics
- Terminal 1 (Gazebo TurtleBot)
roslaunch turtlebot_gazebo turtlebot_world.launch
- Terminal 2
rostopic list
- depth camera
- rgb camera
- raw: raw image
- compressed: compressed image
- theora: compressed video stream
ROS sends images using the sensor_msgs/Image message type
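A minimal sketch of what a sensor_msgs/Image message carries, assuming the /camera/rgb/image_raw topic listed above (the node name image_info is just an example):
```python
#!/usr/bin/env python
# Sketch: print the metadata fields of incoming sensor_msgs/Image messages.
import rospy
from sensor_msgs.msg import Image

def print_image_info(msg):
    # width/height in pixels, encoding (e.g. "rgb8"), step = bytes per row
    rospy.loginfo("%dx%d %s, step=%d, data bytes=%d",
                  msg.width, msg.height, msg.encoding, msg.step, len(msg.data))

rospy.init_node("image_info")
rospy.Subscriber("/camera/rgb/image_raw", Image, print_image_info)
rospy.spin()
```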
Follower node
- subscribes to the /camera/rgb/image_raw topic and receives Image messages
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image

def image_callback(msg):
    # Called for every Image message received on the subscribed topic.
    pass

def main():
    rospy.init_node("follower")
    rospy.loginfo("start node")
    sub = rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback)
    rospy.spin()  # block until the node is shut down

if __name__ == "__main__":
    main()
- Set the node executable permission (chmod +x follower.py)
- Run it in a separate terminal
rosrun <package_name> follower.py
- Check that the node is running
- Get node info with
rosnode info /follower
- Check the message receive rate
rostopic hz /camera/rgb/image_raw
subscribed to [/camera/rgb/image_raw]
WARNING: may be using simulated time
average rate: 30.208
min: 0.010s max: 0.060s std dev: 0.01642s window: 30
average rate: 28.061
min: 0.010s max: 0.060s std dev: 0.01638s window: 56
Set up the follow-the-line project
Launch file and project structure
- launch file
- worlds
- gazebo resource
- materials
├── launch
│ └── follow.launch
├── materials
│ ├── course.material
│ └── course.png
├── models
├── scripts
│ └── follower.py
├── src
└── worlds
└── course.world
<?xml version="1.0"?>
<launch>
<env name="GAZEBO_RESOURCE_PATH" value="$(find follow_bot)/materials:$(optenv GAZEBO_RESOURCE_PATH)" />
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" value="$(find follow_bot)/worlds/course.world"/>
</include>
<include file="$(find turtlebot_gazebo)/launch/includes/kobuki.launch.xml">
<arg name="base" value="kobuki"/>
<arg name="stacks" value="hexagons"/>
<arg name="3d_sensor" value="kinect"/>
</include>
</launch>
Note: the launch file sets the GAZEBO_RESOURCE_PATH environment variable using the env tag
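With the launch file in place, the world and robot can be started with (the package name follow_bot is taken from the $(find follow_bot) references above):
roslaunch follow_bot follow.launch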
View image
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
import cv2
import cv_bridge

class Follower(object):
    def __init__(self):
        self._bridge = cv_bridge.CvBridge()
        self._sub = rospy.Subscriber("/camera/rgb/image_raw", Image, self.image_callback)

    def image_callback(self, msg):
        # Convert the ROS Image message to an OpenCV BGR image and display it.
        image = self._bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow("window", image)
        cv2.waitKey(3)  # let OpenCV process GUI events so the window stays responsive

def on_shutdown():
    rospy.loginfo("shutdown")
    cv2.destroyAllWindows()

def main():
    rospy.init_node("follower")
    rospy.loginfo("start node")
    rospy.on_shutdown(on_shutdown)
    follower = Follower()
    rospy.spin()

if __name__ == "__main__":
    main()
Note: when we used the cv2.namedWindow method the window froze; the code above avoids this by calling cv2.waitKey after cv2.imshow in the callback
Detecting the line
- Find the yellow line by thresholding the image (a complete masking sketch follows after this list)
lower_yellow = numpy.array([ 50, 50, 170])
upper_yellow = numpy.array([255, 255, 190])
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
- left: original image
- center: mask image (binary)
- right: masked image (only the line extracted from the original image)
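A minimal sketch of the masking step, assuming image is the bgr8 frame obtained from cv_bridge above; the threshold values are the ones listed, applied to the HSV-converted image:
```python
import cv2
import numpy

def mask_yellow_line(image):
    """Sketch: return the binary mask and the masked image shown above.
    'image' is the bgr8 frame obtained from cv_bridge."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)          # convert to HSV space
    lower_yellow = numpy.array([ 50,  50, 170])
    upper_yellow = numpy.array([255, 255, 190])
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)   # center image (binary)
    masked = cv2.bitwise_and(image, image, mask=mask)     # right image (line only)
    return mask, masked
```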
Find line center
- Mask the image and keep only the relevant part (the path region closest to the front of the robot); see the cropping sketch after the code below
- Find the line center with the cv2.moments method
- For debugging, draw a circle at the line centroid
M = cv2.moments(mask)
if M['m00'] > 0:
    cx = int(M['m10']/M['m00'])
    cy = int(M['m01']/M['m00'])
    cv2.circle(image, (cx, cy), 20, (0,0,255), -1)
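One way to keep only the part of the mask in front of the robot, as mentioned in the first bullet, is to zero out every row outside a horizontal band near the bottom of the image; a sketch, where the 3/4 offset and 20-pixel band height are arbitrary values to tune:
```python
def crop_to_band(mask):
    """Sketch: keep only a horizontal band of the mask close to the robot.
    The 3/4 offset and 20-pixel height are assumptions to tune."""
    h, w = mask.shape
    search_top = 3 * h // 4
    search_bot = search_top + 20
    mask[0:search_top, 0:w] = 0    # drop rows above the band
    mask[search_bot:h, 0:w] = 0    # drop rows below the band
    return mask
```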
- Publish a `Twist` message and calculate the angular z value
```python
# delta: error between the line center and the image center
# w is the image width (image.shape[1])
delta = cx - w/2
twist = Twist()
twist.linear.x = 0.2
twist.angular.z = -float(delta) / 100
self._cmd_vel_pub.publish(twist)
```
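The snippet uses self._cmd_vel_pub, which was not created in the earlier __init__; a minimal sketch of the missing publisher, assuming a TurtleBot-style velocity topic (the exact topic name varies, check rostopic list):
```python
import rospy
from geometry_msgs.msg import Twist

class Follower(object):
    def __init__(self):
        # Velocity publisher used by the Twist snippet above. The topic name is
        # an assumption; verify the real one with `rostopic list`
        # (common candidates: cmd_vel_mux/input/teleop, /cmd_vel).
        self._cmd_vel_pub = rospy.Publisher("cmd_vel_mux/input/teleop",
                                            Twist, queue_size=1)
```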