I want to rotate a point (x, y) with orientation alpha about a fixed point (the centre of a circle), so that I get a new point (x, y) with orientation beta. How can I do this in ROS?
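For reference, the underlying operation is an ordinary 2D rotation about the centre by the angle delta = beta - alpha; a minimal sketch of the math (plain Python, made-up numbers):

    import math

    def rotate_about(px, py, cx, cy, delta):
        """Rotate point (px, py) about centre (cx, cy) by delta radians."""
        dx, dy = px - cx, py - cy
        qx = cx + math.cos(delta) * dx - math.sin(delta) * dy
        qy = cy + math.sin(delta) * dx + math.cos(delta) * dy
        return qx, qy

    alpha, beta = 0.2, 1.0   # hypothetical orientations
    x_new, y_new = rotate_about(2.0, 0.0, 1.0, 0.0, beta - alpha)

The new orientation is simply beta; only the position needs the rotation.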
↧
How to rotate a point (x, y) about a fixed point
↧
viso2 tf_tree problem
Hello everybody.
I want to use the mono odometer from viso2 as the visual odometry (VO) source for robot_localization.
The first step, getting the VO itself to work, is already failing.
I always get the following error message:
> [ WARN] [1455205411.959462457]: The tf from 'base_link' to 'camera' does not seem to be available, will assume it as identity!
I have been looking for a solution for two days, but all I found is that I have to use a static transform, which I already did.
I use a static transform for odom->base_link.
My launch file looks like this:
>
I need the transform to orient the camera correctly; if I don't provide the tf and the identity is assumed, I can only use the VO with the camera pointing at the ceiling.
Maybe I'm missing something or not seeing something.
Thanks for any help.
↧
↧
How to transform between quaternions (or Euler angles)?
I know I can convert my quaternion to Euler angles (RPY), and it seems easier to work with Euler angles because they are more direct. For example, I have a vector X = (x, y, z) whose Euler angles are (0, 0, a), and another vector Y = (x', y', z') whose Euler angles are (0, 0, b). If I want to obtain Y as RX + T, can I get the rotation matrix R and the translation T using tf's functions? If that is not possible, how can I compute them? Thanks a lot!
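For concreteness, a sketch of the computation with tf.transformations (made-up numbers, yaw-only rotations as above):

    import numpy as np
    from tf.transformations import euler_matrix

    a, b = 0.3, 1.2                  # the two yaw angles
    X = np.array([1.0, 2.0, 0.0])    # made-up positions
    Y = np.array([2.5, 0.5, 0.0])

    R = euler_matrix(0.0, 0.0, b - a)[:3, :3]   # rotation by the yaw difference
    T = Y - R.dot(X)                            # translation so that Y = R X + T

    assert np.allclose(R.dot(X) + T, Y)

Here R is fixed by the difference of the two yaw angles, and T then follows from the two positions.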
↧
Confusing tf transforms
I have been working a lot with transformation matrices these days, and the tf nomenclature doesn't make things easier.
For tf_echo, the input frames are given as follows:

    tf_echo <source_frame> <target_frame>
When using Python and looking up a transform, it is as follows:

    lookupTransform(target_frame, source_frame, time) -> position, quaternion

But what I get are two transforms that are inverses of each other, even though I pass the same source_frame and target_frame to both. Why is that?
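A minimal example of the two look-ups in question (frame names assumed to exist in the tree); note that the Python API takes the *target* frame first, so swapping the two arguments yields the inverse transform:

    import rospy
    import tf

    rospy.init_node('frame_check')
    listener = tf.TransformListener()
    listener.waitForTransform('base_link', 'camera', rospy.Time(0), rospy.Duration(4.0))

    # Pose of 'camera' expressed in 'base_link':
    pos, quat = listener.lookupTransform('base_link', 'camera', rospy.Time(0))
    # Argument order swapped: the inverse transform.
    pos_inv, quat_inv = listener.lookupTransform('camera', 'base_link', rospy.Time(0))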
↧
ar_pose: interpreting the output of 'ar_pose_marker' && transform
Hello.
I want to track an object that is moving inside a room and colour the pixel in the camera image that the object has moved to. Instead of detecting colours etc., I want to do this with markers from ar_pose.
My setup is a simple webcam and one printed marker.
I can start tracking and I do get values on the topic "ar_pose_marker". My problem is that I do not understand what the values published on "ar_pose_marker" mean.
Yes, I did read that it is "The current pose of the marker relative to the camera." My interpretation is that the camera is located at (0, 0, 0) and the values are the offset from that origin (in world coordinates). Is that correct?
One example:
Let's say my setup is running with a 640x480 px image.
When I move the marker to the leftmost position: {ar_pose_marker}.x = -1.16
At the rightmost position: {ar_pose_marker}.x = -0.75
These values change when I increase/decrease the marker-to-camera distance. Why?
* What do these values tell me?
* How do I transform a detected marker position to a specific pixel position in the camera image (see the sketch below)?
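For the pixel question, something like this sketch with image_geometry may be the right direction (it assumes the marker pose has been expressed in the camera's optical frame and that a sensor_msgs/CameraInfo message is available):

    from image_geometry import PinholeCameraModel

    model = PinholeCameraModel()
    model.fromCameraInfo(camera_info_msg)      # CameraInfo received from the camera driver
    u, v = model.project3dToPixel((x, y, z))   # marker position in the optical frame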
Thanks for any hints.
↧
↧
Gmapping tf problem
Hi,
We are having problems with gmapping. We are using an OMD10M-R2000 range finder and get odometry from the robot via RS232. In RViz we get the following error: `No transform from [] to [map]`

When we start gmapping with the following launch file, we get this warning: `[ WARN] [1461833120.053747916]: MessageFilter [target=odom ]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_notifier] rosconsole logger to DEBUG for more information.`
launch file: [EDIT: old launch file](/upfiles/14623478332389917.png)
Our tf tree looks like this: `map->odom->base_link`
[EDIT: old tf tree](/upfiles/14618352031649242.png)

Our rqt_graph:
[EDIT: old rqt_graph](/upfiles/1461835474967964.png)

Greetings,
Stijn
[EDIT]
URDF file:
↧
Why does a transform have to be broadcast periodically?
Hello,
I am currently building a mobile robot with several sensors, and I am curious about using the sensors' relative positions in ROS.
In several tutorials, every transform (in my case, for the sensors) is broadcast periodically, even if it is static in the whole robot configuration, i.e. the relative positions of the base link and the sensors never change.
Is there a specific reason for this? I couldn't find any documentation or explanation.
It seems like a waste of resources.
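For what it's worth, tf2 seems to address exactly this: tf2_ros's StaticTransformBroadcaster publishes once on the latched /tf_static topic instead of re-broadcasting (classic tf listeners only buffer time-stamped transforms, which appears to be why tf1 tutorials re-send them). A sketch, with placeholder frame names and offsets:

    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('static_sensor_tf')
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'
    t.child_frame_id = 'laser'        # hypothetical sensor frame
    t.transform.translation.x = 0.2   # placeholder mounting offset
    t.transform.rotation.w = 1.0      # identity orientation
    broadcaster.sendTransform(t)      # sent once, latched

    rospy.spin()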
Thank you in advance.
↧
How is the orientation of tf /odom initialized?
What determines the orientation of the tf frame /odom when it is initialized?
On the page for GPS integration ([here](http://wiki.ros.org/robot_localization/Tutorials/GPS%20Integration)), it says:
"The odom and map frames are world-fixed frames and generally have their origins at the vehicle's start **position** *and* **orientation**."
However, I have personally found that while the positions of /odom and /base_link match at startup, the orientations don't. In the image below, all I have done is drive the robot straight forward, and you can see that its orientation (/base_link) does not match that of /odom.

So, if it isn't being initialized to match /base_link, how is it being initialized? Or is this a bug?
↧
rtabmap does not work (Xtion only) -- odometry: Could not get transform from camera_link to camera_rgb_optical_frame...
Hello. I am following the rtabmap_ros hand-held mapping tutorial, but the program doesn't work:
http://wiki.ros.org/rtabmap_ros/Tutorials/HandHeldMapping
I only use an Xtion (openni2); no odometry, no laser.
I run these commands:

    roscore
    roslaunch openni2_launch openni2.launch depth_registration:=true
    roslaunch rtabmap_ros rgbd_mapping.launch rtabmap_args:="--delete_db_on_start"

The output is below and the GUI stays black. I think this is the problem:
[ WARN] [1464615244.777627115]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615244.512744) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
I checked with:

    rosrun rqt_tf_tree rqt_tf_tree

    camera_link --> camera_rgb_frame --> camera_rgb_optical_frame
                --> camera_depth_frame --> camera_depth_optical_frame

I saw a similar question where they had odom --> base_link --> camera_link, so I added a static tf broadcaster node (base_link --> camera_link), but it didn't work. I also changed the launch file (frame_id camera_link --> base_link).
http://official-rtab-map-forum.67519.x6.nabble.com/RTAB-Map-Viz-does-not-display-anything-td651.html
$ rosrun tf tf_monitor
Frames:
Frame: /camera_depth_frame published by unknown_publisher Average Delay: -0.0993342 Max Delay: 0
Frame: /camera_depth_optical_frame published by unknown_publisher Average Delay: -0.0989423 Max Delay: 0
Frame: /camera_rgb_frame published by unknown_publisher Average Delay: -0.0991952 Max Delay: 0
Frame: /camera_rgb_optical_frame published by unknown_publisher Average Delay: -0.0985974 Max Delay: 0
Full roslaunch output:
* /rosdistro: indigo
* /rosversion: 1.11.10
* /rtabmap/rgbd_odometry/Odom/FillInfoData: true
* /rtabmap/rgbd_odometry/frame_id: camera_link
* /rtabmap/rgbd_odometry/wait_for_transform_duration: 0.2
* /rtabmap/rtabmap/Mem/IncrementalMemory: true
* /rtabmap/rtabmap/Mem/InitWMWithAllNodes: false
* /rtabmap/rtabmap/Mem/SaveDepth16Format: true
* /rtabmap/rtabmap/RGBD/OptimizeFromGraphEnd: false
* /rtabmap/rtabmap/Rtabmap/TimeThr: 0
* /rtabmap/rtabmap/database_path: ~/.ros/rtabmap.db
* /rtabmap/rtabmap/frame_id: camera_link
* /rtabmap/rtabmap/subscribe_depth: True
* /rtabmap/rtabmap/subscribe_scan: False
* /rtabmap/rtabmap/subscribe_scan_cloud: False
* /rtabmap/rtabmap/wait_for_transform_duration: 0.2
* /rtabmap/rtabmapviz/frame_id: camera_link
* /rtabmap/rtabmapviz/subscribe_depth: True
* /rtabmap/rtabmapviz/subscribe_odom_info: True
* /rtabmap/rtabmapviz/subscribe_scan: False
* /rtabmap/rtabmapviz/subscribe_scan_cloud: False
* /rtabmap/rtabmapviz/wait_for_transform_duration: 0.2
NODES
/rtabmap/
rgbd_odometry (rtabmap_ros/rgbd_odometry)
rtabmap (rtabmap_ros/rtabmap)
rtabmapviz (rtabmap_ros/rtabmapviz)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
process[rtabmap/rgbd_odometry-1]: started with pid [34941]
process[rtabmap/rtabmap-2]: started with pid [34942]
process[rtabmap/rtabmapviz-3]: started with pid [34943]
[ INFO] [1464615240.954724732]: Starting node...
[ INFO] [1464615241.074584475]: Setting odometry parameter "Odom/FillInfoData"="true"
[ INFO] [1464615241.281704904]: Starting node...
[ INFO] [1464615241.384123291]: rtabmap: frame_id = camera_link
[ INFO] [1464615241.384244674]: rtabmap: map_frame_id = map
[ INFO] [1464615241.384285855]: rtabmap: queue_size = 10
[ INFO] [1464615241.384331464]: rtabmap: tf_delay = 0.050000
[ INFO] [1464615241.384361071]: rtabmap: depth_cameras = 1
[ INFO] [1464615241.636235974]: rtabmapviz: Using configuration from "/opt/ros/indigo/share/rtabmap_ros/launch/config/rgbd_gui.ini"
[ INFO] [1464615242.353012125]: Setting RTAB-Map parameter "Mem/IncrementalMemory"="true"
[ INFO] [1464615242.355861361]: Setting RTAB-Map parameter "Mem/InitWMWithAllNodes"="false"
[ INFO] [1464615242.539713960]: Setting RTAB-Map parameter "Mem/RehearsalSimilarity"="0.45"
[ INFO] [1464615242.564056420]: Setting RTAB-Map parameter "Mem/SaveDepth16Format"="true"
[ INFO] [1464615243.052447762]: Setting RTAB-Map parameter "RGBD/AngularUpdate"="0.01"
[ INFO] [1464615243.077966419]: Setting RTAB-Map parameter "RGBD/LinearUpdate"="0.01"
[ INFO] [1464615243.225300360]: Setting RTAB-Map parameter "RGBD/OptimizeFromGraphEnd"="false"
[ INFO] [1464615243.283571878]: Reading parameters from the ROS server...
[ INFO] [1464615243.754540264]:
/rtabmap/rgbd_odometry subscribed to:
/camera/rgb/image_rect_color,
/camera/depth_registered/image_raw,
/camera/rgb/camera_info
[ INFO] [1464615243.876189639]: Setting RTAB-Map parameter "Rtabmap/TimeThr"="0"
[ INFO] [1464615244.215550331]: Parameters read = 0
[ WARN] [1464615244.777627115]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615244.512744) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ WARN] [1464615244.997129823]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615244.732423) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ WARN] [1464615245.209822131]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615244.951686) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ WARN] [1464615245.422359651]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615245.165515) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ INFO] [1464615245.468579608]:
/rtabmap/rtabmapviz subscribed to:
/camera/rgb/image_rect_color,
/camera/depth_registered/image_raw,
/camera/rgb/camera_info,
/rtabmap/odom,
/rtabmap/odom_info
[ INFO] [1464615245.614625798]: rtabmapviz started.
[ WARN] [1464615245.631537825]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615245.386058) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ INFO] [1464615245.812814503]: RTAB-Map detection rate = 1.000000 Hz
[ INFO] [1464615245.813110989]: rtabmap: Deleted database "/home/jun/.ros/rtabmap.db" (--delete_db_on_start is set).
[ INFO] [1464615245.813173078]: rtabmap: Using database from "/home/jun/.ros/rtabmap.db".
[ WARN] [1464615245.842446177]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615245.599687) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ INFO] [1464615246.053931765]: rtabmap: Database version = "0.11.5".
[ WARN] [1464615246.065416532]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615245.822337) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ WARN] [1464615246.298395339]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615246.034815) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ WARN] [1464615246.516898130]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615246.267376) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
[ WARN] [1464615246.730979917]: odometry: Could not get transform from camera_link to camera_rgb_optical_frame (stamp=1464615246.467575) after 0.200000 seconds ("wait_for_transform_duration"=0.200000)!
↧
↧
rtabmap_ros: how to set up odometry and the camera?
Hello,
I am trying to use an Xtion with a Kobuki (a 2-wheel mobile robot).
I read http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot
and applied it to my launch file.
I modified [rgbd_mapping.launch](https://github.com/introlab/rtabmap_ros/blob/master/launch/rgbd_mapping.launch), and the program works normally,
but the result is strange.
First, my Xtion is mounted at (0.11 m, 0, 0.16 m) relative to the robot, so I added a tf broadcaster.
The result is at the link below:
https://plus.google.com/110617758145783258429/posts/RCTNymgamWp
1. The point cloud and the map cloud are different. Is this result normal?
2. The map is bad when the robot turns around and returns to the origin:
https://plus.google.com/110617758145783258429/posts/KcxDVQqxNyg
Full launch file:
↧
How to change the coordinates of the Velodyne point cloud data?
Hi,
I have three Velodynes and I launch all three using a group. I get the point clouds, but all of them have the same frame_id, 'velodyne', so when I visualize them in RViz all the point clouds share the same origin.
I tried using the tf package, but only the axes change, not the point clouds.
Can someone please tell me what I am doing wrong and suggest a solution?
Thanks in advance.
↧
more than one odom->base_link transform
Hi,
I have two third-party nodes, both of which publish an odom->base_link transform. I want to combine these two transforms and publish my own odom->base_link transform, since our navigation node needs it. Given that I can't modify the source code of any node except my own fusion node, how can I make the whole system work in ROS, i.e. combine the two input odometries and publish a single odom->base_link transform for the navigation node? My current idea is sketched after the link below.
I noticed there was a discussion about a similar topic, but it didn't provide an answer:
http://answers.ros.org/question/198041/more-than-one-odom-base_link-transform/
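A sketch of the fusion-node idea (topic names and the trivial "fusion" are placeholders; it assumes each third-party node's tf topic is remapped in the launch file, e.g. `<remap from="tf" to="tf_in_1"/>`, so that only the fused transform reaches /tf):

    import rospy
    import tf2_ros
    from tf2_msgs.msg import TFMessage

    rospy.init_node('odom_fusion')
    broadcaster = tf2_ros.TransformBroadcaster()
    latest = {}

    def make_cb(source):
        def cb(msg):
            for t in msg.transforms:
                if t.header.frame_id == 'odom' and t.child_frame_id == 'base_link':
                    latest[source] = t
            if len(latest) == 2:
                fused = latest['a']            # placeholder: real fusion goes here
                fused.header.stamp = rospy.Time.now()
                broadcaster.sendTransform(fused)
        return cb

    rospy.Subscriber('tf_in_1', TFMessage, make_cb('a'))
    rospy.Subscriber('tf_in_2', TFMessage, make_cb('b'))
    rospy.spin()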
Thanks a lot.
Yanming
↧
I need your advice, please (planar laser to 3D using encoder)
Hello everyone,
I am using a Neato XV-11 lidar with the getSurreal Lidar Controller v1.2 (Teensy 2.0). I have an Intel Edison (flashed with Emutex's Ubilinux), and I am trying (unsuccessfully, but that's not the issue here, though tips are appreciated) to install ros_comm using
http://wiki.jigsawrenaissance.org/ROS_on_RaspberryPi
I want to have the laser spinning (2D), have a motor spin the laser about a second axis to get 3D readings, and use an encoder to read the angle of that second axis.
I want to connect the encoder directly to the Teensy (or should it go directly to the Edison? my concern is latency), use this Encoder library https://www.pjrc.com/teensy/td_libs_Encoder.html to read it, and then implement odometry using tf in order to create a point cloud.
I already have laser_assembler working: laser scans are buffered for 5 seconds and then a point cloud is published for RViz to read (done via a tutorial; I don't have the link, but you might know it). But, of course, this just shows a point cloud of the "same" map stacked onto itself several times, because I publish no rotation (terminology problem?).
What do I need in order to get the information from the second axis of rotation into the point cloud? I envision the laser taking about 5 seconds to make a full turn about the second axis, with the 3D point cloud published every 5 seconds. What I have in mind is sketched below.
I am sure there are a million holes in my plan, but that's why I come to the experts here at ROS. I'm new (~4 weeks) to ROS but have been studying ~40 hours a week to get this going. I have done the tf, catkin, and other ROS tutorials, so I have a beginner-to-intermediate understanding of what's going on. I appreciate any tips or suggestions about any part of this plan. Thank you in advance.
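The tf part I have in mind looks roughly like this sketch (read_encoder_angle(), the frame names, and the offset are placeholders):

    import rospy
    import tf
    from tf.transformations import quaternion_from_euler

    rospy.init_node('tilt_tf_broadcaster')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(60)

    while not rospy.is_shutdown():
        angle = read_encoder_angle()                    # hypothetical: radians from the encoder
        quat = quaternion_from_euler(0.0, angle, 0.0)   # rotation about the tilt (pitch) axis
        broadcaster.sendTransform((0.0, 0.0, 0.1),      # placeholder lidar offset
                                  quat, rospy.Time.now(),
                                  'laser', 'base_link')
        rate.sleep()

With that transform in the tree, laser_assembler should be able to place each 2D scan at its tilt angle.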
↧
↧
tf setup for a sensor on a gimbal on a robot
I looked through the [tf setup](http://wiki.ros.org/navigation/Tutorials/RobotSetup/TF) tutorial, but it only shows updating tf for a fixed system at an arbitrary 1 Hz interval.
Anyway, I plan on using sensor information to navigate a map, so my tf tree will be something like this:
/world --> /map --> /odom --> /base_link --> /gimbal --> /sensor
(with /world === /map?)
I have a node that grabs the gimbal and odometry information from the robot platform 60 times a second and publishes it to the respective topics. Should I broadcast the tf information in the same node?
(Note: the code is just a rough outline of what I'm trying to do. My actual code will probably be organized into classes so I can have tracking variables.)
    #include <ros/ros.h>
    #include <nav_msgs/Odometry.h>
    #include <tf/transform_broadcaster.h>
    #include <my_msg/Gimbal.h>

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "robot");
      ros::NodeHandle nh;
      ros::NodeHandle nh_private("~");
      MyRobotClass robot;  // just some class for my robot interface
      nav_msgs::Odometry odometry;
      my_msg::Gimbal gimbal;  // simple gimbal message with header, yaw, roll, and pitch
      tf::TransformBroadcaster odom_broadcaster;
      tf::TransformBroadcaster gimbal_broadcaster;
      ros::Publisher gimbal_publisher = nh.advertise<my_msg::Gimbal>("gimbal", 10);
      ros::Publisher odometry_publisher = nh.advertise<nav_msgs::Odometry>("odometry", 10);
      ros::Rate rate(60);
      while (ros::ok())
      {
        ros::Time current_time = ros::Time::now();
        // update and publish odometry
        odometry.header.frame_id = "/world";
        odometry.header.stamp = current_time;
        odometry.pose.pose.position.x = robot.position.x;
        odometry.pose.pose.position.y = robot.position.y;
        odometry.pose.pose.position.z = robot.position.z;
        odometry.pose.pose.orientation.w = robot.quaternion.q0;
        odometry.pose.pose.orientation.x = robot.quaternion.q1;
        odometry.pose.pose.orientation.y = robot.quaternion.q2;
        odometry.pose.pose.orientation.z = robot.quaternion.q3;
        odometry.twist.twist.angular.x = robot.velocity.wx;
        odometry.twist.twist.angular.y = robot.velocity.wy;
        odometry.twist.twist.angular.z = robot.velocity.wz;
        odometry.twist.twist.linear.x = robot.velocity.vx;
        odometry.twist.twist.linear.y = robot.velocity.vy;
        odometry.twist.twist.linear.z = robot.velocity.vz;
        odometry_publisher.publish(odometry);
        // update and publish gimbal
        gimbal.header.frame_id = "/gimbal";
        gimbal.header.stamp = current_time;
        gimbal.roll = robot.gimbal.roll;
        gimbal.pitch = robot.gimbal.pitch;
        gimbal.yaw = robot.gimbal.yaw;
        gimbal_publisher.publish(gimbal);
        // update tf of odom here????
        tf::Vector3 vectorOdom(robot.position.x, robot.position.y, robot.position.z);
        tf::Quaternion quatOdom(robot.quaternion.q1, robot.quaternion.q2, robot.quaternion.q3, robot.quaternion.q0);
        tf::Transform transformOdom(quatOdom, vectorOdom);
        odom_broadcaster.sendTransform(tf::StampedTransform(transformOdom, current_time, "/map", "/base_link"));
        // update tf of gimbal here?
        tf::Vector3 vectorGimbal(0.0, 0.0, 0.0);
        tf::Quaternion quatGimbal;
        quatGimbal.setEuler(robot.gimbal.yaw, robot.gimbal.pitch, robot.gimbal.roll);
        tf::Transform transformGimbal(quatGimbal, vectorGimbal);
        gimbal_broadcaster.sendTransform(tf::StampedTransform(transformGimbal, current_time, "/base_link", "/gimbal"));
        rate.sleep();
      }
      return 0;
    }
Then I have a node that uses the camera to obtain the position of an object relative to the camera:
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <geometry_msgs/PoseStamped.h>
    #include <tf/transform_broadcaster.h>

    ros::Publisher pose_pub;
    tf::TransformBroadcaster* camera_broadcaster;  // constructed after ros::init

    void imageCb(const sensor_msgs::ImageConstPtr& msg, const sensor_msgs::CameraInfoConstPtr& cam_info)
    {
      geometry_msgs::PoseStamped tagPose;
      // code that generates tag pose in reference to the camera in POSE_TAG_DATA
      tagPose.header.frame_id = "/camera";
      tagPose.header.stamp = msg->header.stamp;
      tagPose.header.seq = msg->header.seq;
      tagPose.pose = POSE_TAG_DATA;  // pose of the tag
      pose_pub.publish(tagPose);
      // Need to flip axes from the camera frame to match the gimbal frame? (x, y, z) maps to (y, -z, x)
      tf::Vector3 vectorTag(tagPose.pose.position.y, -tagPose.pose.position.z, tagPose.pose.position.x);
      // For now I just need position, but maybe orientation in the future?
      tf::Quaternion quatTag(0.0, 0.0, 0.0, 1.0);
      tf::Transform transformTag(quatTag, vectorTag);
      camera_broadcaster->sendTransform(tf::StampedTransform(transformTag, msg->header.stamp, "/gimbal", "/camera"));
    }

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "robot");
      ros::NodeHandle nh;
      ros::NodeHandle nh_private("~");
      image_transport::ImageTransport it(nh);
      tf::TransformBroadcaster broadcaster;
      camera_broadcaster = &broadcaster;
      pose_pub = nh.advertise<geometry_msgs::PoseStamped>("tagPose", 10);
      image_transport::CameraSubscriber image_sub = it.subscribeCamera("image_raw", 1, imageCb);
      ros::spin();
      return 0;
    }
I guess my question is: does this tf setup look correct?
Finally, I need to write a node that subscribes to "tagPose" and sets it as a navigation goal. How would the tf tree help me transform the tagPose pose from the "/camera" frame to the "/map" frame? What would the basic structure of the callback that subscribes to "tagPose" and listens to tf for "/camera" to "/map" look like? (My current guess is sketched below.)
Is [Using Stamped datatypes with tf::MessageFilter](http://wiki.ros.org/tf/Tutorials/Using%20Stamped%20datatypes%20with%20tf%3A%3AMessageFilter) the tutorial I should follow to set up using sensor data and tf frames together with stamps?
My goal is to have the robot travel to the tag, but processing the tag pose has a large delay of around 20-100 ms. So I need to make sure the tag pose is transformed into world coordinates using the odom and gimbal states at the time the camera image was taken (i.e. 20-100 ms in the past). My understanding is that tf should handle this with the correct setup?
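For reference, a sketch of such a callback in Python (the same pattern exists in C++ with tf::TransformListener; topic and frame names follow the code above). Because the PoseStamped carries the image timestamp, tf can look up the /camera -> /map transform as it was when the image was taken, which is what absorbs the 20-100 ms delay:

    import rospy
    import tf
    from geometry_msgs.msg import PoseStamped

    rospy.init_node('tag_goal')
    listener = tf.TransformListener()

    def tag_cb(tag_pose):
        try:
            # Wait for the transform at the *image* timestamp, not "now".
            listener.waitForTransform('/map', tag_pose.header.frame_id,
                                      tag_pose.header.stamp, rospy.Duration(0.5))
            goal = listener.transformPose('/map', tag_pose)
            # ... hand `goal` to the navigation stack here ...
        except tf.Exception as e:
            rospy.logwarn('tf lookup failed: %s', e)

    rospy.Subscriber('tagPose', PoseStamped, tag_cb)
    rospy.spin()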
↧
navigation transform information problem
Hello !
I try to run the navigation stack with a real robot (move_base + gmapping).
My problem is that when I launch my launch file, which is supposed to run the move_base node and the slam_gmapping node (and load my YAML configuration files), I randomly get warnings that disturb the behaviour of my robot. When I send no goal, I randomly get this warning:
`Costmap2DROS transform timeout ... could not get robot pose, cancelling pose reconfiguration`
And when I send a goal, I randomly get this warning:
`Unable to get starting pose of robot, unable to create global plan`
When the first warning appears, the robot starts behaving hesitantly, with very slow speeds.
I haven't found a real solution to this problem anywhere.
If anyone could help me, I would be very grateful,
lfr
↧
How is tf data handled by velodyne_driver?
Hello,
I am going to use a Velodyne VLP-16 along with velodyne_driver to publish a PointCloud2 message to ROS.
I have seen that velodyne_node takes the Velodyne's frame_id as a parameter, and I wanted to know whether this is used just to fill in the frame_id of the header, or whether any transformation is performed while the packets are read from the raw data provided by the actual Velodyne.
Let me explain: if the Velodyne turns at 10 Hz, it takes 0.1 s to perform a full revolution. If the Velodyne moves at just 20 km/h (~5.6 m/s), by the time it finishes the revolution it will have moved about 0.56 m from where it started.
So, if points are passed to ROS via the velodyne_points topic after a completed 360° sweep, what odometry information should I use? The sensor's position at the beginning of the revolution, at the end, in the middle?
That's why I asked whether any transformation is performed by velodyne_driver to compensate for its own displacement.
I think it just reads the relative coordinates and publishes them converted to PointCloud2, but isn't the movement of the Velodyne introducing a big error?
↧
Find coordinates of point in a different coordinate frame
Hi,
I would like to know the easiest ROS way (tf, geometry_msgs, or something of the sort) to get the coordinates of a point in another coordinate frame defined by a pose.
Let's say I have a point with coordinates x, y, z in a global frame, and I have the pose of a local coordinate frame w.r.t. the same global frame in the form of a geometry_msgs/Pose. How do I find x1, y1, z1, the coordinates of that point in the local coordinate frame (given by that pose)? A sketch of what I think the math is follows below.
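Sketched with tf.transformations (treating the Pose as the local frame's pose expressed in the global frame):

    import numpy as np
    from tf.transformations import quaternion_matrix, inverse_matrix

    def point_in_local_frame(pose, point):
        """pose: geometry_msgs/Pose of the local frame in the global frame;
           point: (x, y, z) in the global frame."""
        M = quaternion_matrix([pose.orientation.x, pose.orientation.y,
                               pose.orientation.z, pose.orientation.w])
        M[:3, 3] = [pose.position.x, pose.position.y, pose.position.z]
        p = np.append(np.asarray(point, dtype=float), 1.0)  # homogeneous coordinates
        return inverse_matrix(M).dot(p)[:3]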
Thank you.
↧
↧
Transformation in quaternion space for geometry_msgs/PoseStamped messages
Hi,
I am new to ROS and RTT.
Given geometry_msgs/PoseStamped messages from two topics, how do I calculate the transform T (relative pose) at time t1:

    T^t1 = Relative_position(Pose_1^t1, Pose_2^t1)

so that it can be used to locate the position of one frame from the other in the same environment at time stamp t2:

    Pose_1^t2 = Inverse_relative_position(T^t1, Pose_2^t2)

Precisely: the calculation of a transformation equivalent to a homogeneous transformation, and the use of this transformation in quaternion space to calculate the position of a point relative to a base frame.
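A sketch of the quaternion-space computation with tf.transformations (which pose is expressed in which frame is an assumed convention):

    import numpy as np
    from tf.transformations import quaternion_multiply, quaternion_inverse

    def quat_rotate(q, v):
        """Rotate vector v by quaternion q = (x, y, z, w)."""
        vq = np.append(np.asarray(v, dtype=float), 0.0)
        return quaternion_multiply(quaternion_multiply(q, vq),
                                   quaternion_inverse(q))[:3]

    def relative_pose(p1, q1, p2, q2):
        """Pose 1 expressed in the frame of pose 2 (both given in a common frame)."""
        q2_inv = quaternion_inverse(q2)
        q_rel = quaternion_multiply(q2_inv, q1)
        t_rel = quat_rotate(q2_inv, np.asarray(p1) - np.asarray(p2))
        return t_rel, q_rel

    # At t2, invert the relation: q1_t2 = q2_t2 * q_rel and
    # p1_t2 = p2_t2 + rotate(q2_t2, t_rel).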
↧
Extract robot position from rosbag
I recorded tf messages (while driving the robot) into a bag file and wrote a ROS node that should extract the positions (base_link frame) from that bag file.
My idea was to use a tf::Transformer and feed it all the transforms stored in the tf messages:
    rosbag::Bag bag;
    bag.open("tf.bag", rosbag::bagmode::Read);

    std::vector<std::string> topics;
    topics.push_back("/tf");
    rosbag::View view(bag, rosbag::TopicQuery(topics));

    tf::Transformer transformer;
    BOOST_FOREACH(rosbag::MessageInstance const m, view)
    {
      // instantiate bag message
      tf::tfMessage::ConstPtr tfPtr = m.instantiate<tf::tfMessage>();
      BOOST_FOREACH(geometry_msgs::TransformStamped const tfs, tfPtr->transforms)
      {
        tf::StampedTransform stampedTf;
        tf::transformStampedMsgToTF(tfs, stampedTf);
        // setTransform returns true!
        transformer.setTransform(stampedTf);
        ...
      }
    }
The method setTransform(...) always returns true, so I thought that it works...
Each time I get a transform with child_frame_id == /robot_1/base_link, I want to look up the map->base_link transform at the time of that last transform. But the transformer returns false:
    if (transformer.canTransform(baseLink, mapFrame, stampedTf.stamp_))
    {
      // lookup transform
      transformer.lookupTransform(baseLink, mapFrame, stampedTf.stamp_, temp);
    }
A few additional facts:
The transformer's latest common time is always 0:

    transformer.getLatestCommonTime(baseLink, mapFrame, time, NULL)

And printing all tf messages in the bag shows that the tf tree should be consistent (I'm using Stage and used view_frames):
/map -> /robot_1/odom -> /robot_1/base_footprint -> /robot_1/base_link -> /robot_1/base_laser_link
Is something wrong with my code or idea?
Any suggestions for a better solution?
Thanks in advance!
↧
transform composition in tf with python
Hello,
In C++ it is possible to compose transforms to determine the relative transform between two transforms, like this:

    tf::Transform relative_transform = init_transform.inverse() * current_transform;
    distance_moved = relative_transform.getOrigin().length();

However, I could not find a similar alternative in Python.
Is there any way in Python to do a relative transformation like this C++ code?
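A possible equivalent sketched with tf.transformations, which composes 4x4 homogeneous matrices (the two transforms here are placeholders):

    import numpy as np
    from tf.transformations import (concatenate_matrices, inverse_matrix,
                                    translation_matrix, translation_from_matrix)

    init_transform = translation_matrix([1.0, 0.0, 0.0])     # placeholder transforms
    current_transform = translation_matrix([4.0, 4.0, 0.0])

    relative_transform = concatenate_matrices(inverse_matrix(init_transform),
                                              current_transform)
    distance_moved = np.linalg.norm(translation_from_matrix(relative_transform))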
Thank you
Anis
↧