For participants only. Not for public distribution.

Note #52
LIDAR, GPS, and AHRS geometry transforms

John Nagle
Last revised August 17, 2005.


An overview of how LIDAR data is converted to world coordinates.

The data sources and how they work

GPS coordinate systems

The Novatel ProPak LB GPS returns coordinates in "earth centered, earth fixed" (ECEF) coordinates. This is a coordinate system centered at the center of the earth, with the Z axis through the North Pole and the X and Y axes in the plane of the equator. The GPSINS program transforms ECEF coordinates into coordinates relative to a plane tangent to the earth's surface at a specified point. Currently, in "gpsins", the tangent point is fixed, and it is centered on a point in the driveway of the Overbot shop. In "fusednav", which will replace "gpsins" next week, the first waypoint is used to determine the tangent point. The tangent point needs to be within a few hundred kilometers of the vehicle position, since the tangent plane assumes a locally flat earth.
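The tangent-plane conversion amounts to rotating an ECEF offset vector into north, east, and down components at the tangent point. A minimal sketch, assuming the tangent point's latitude and longitude are already known (this is an illustration, not the ECEF_dist code):

```cpp
#include <array>
#include <cmath>

// Rotate an ECEF offset (meters from the tangent point) into NED coordinates.
// lat and lon are the tangent point's geodetic latitude and longitude, in radians.
std::array<double,3> ecefToNED(const std::array<double,3>& d, double lat, double lon)
{
    const double sp = std::sin(lat), cp = std::cos(lat);
    const double sl = std::sin(lon), cl = std::cos(lon);
    return {
        -sp*cl*d[0] - sp*sl*d[1] + cp*d[2],   // north
        -sl*d[0]    + cl*d[1],                // east
        -cp*cl*d[0] - cp*sl*d[1] - sp*d[2]    // down
    };
}
```

At the equator and prime meridian, for example, the ECEF Z axis maps to north, and the ECEF X axis (pointing away from the earth's center) maps to negative down, i.e. up.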

This transformation is done with a standard function (ECEF_dist in gc/src/qnx/common/lib) from an autopilot software library, and yields "north, east, down" (NED) coordinates. The "north, east, down" coordinates are placed in GPSINS messages by the GPSINS program, and appear in the pos field of the GPSINS_MSG records (defined in gpsins_messaging.h from /gc/src/qnx/common/include). Those are the position values in both GPS logs and messages from the GPSINS server.

Note that these positions are measured from the GPS antenna on the back bed of the vehicle.

This data appears, at present, to be generally valid. GPS position is accurate to about 15 cm in OMNISTAR_HP mode, to about 1m in OMNISTAR mode, and to tens of meters in other modes.

With "fusednav", the position in pos is adjusted using dead-reckoning and filtering, and, rather than looking at the posStat and posType fields to determine the GPS mode, the "uncertainty" field (unc) is checked. The position reported should be within the circle defined by max(unc[0], unc[1]) almost all the time.
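In code, the validity test described above reduces to a one-liner. A sketch, with the field layout following GPSINS_MSG loosely and the acceptance radius chosen by the caller (an assumption, not a project constant):

```cpp
#include <algorithm>
#include <array>

// Sketch of the mode-independent validity test: rather than inspecting
// posStat/posType, treat a fix as usable when its reported uncertainty
// circle is small enough. unc holds the per-axis uncertainties in meters.
bool fixUsable(const std::array<float,3>& unc, float maxRadius)
{
    // The reported position should lie within the circle of radius
    // max(unc[0], unc[1]) almost all the time.
    return std::max(unc[0], unc[1]) <= maxRadius;
}
```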

AHRS coordinate systems

The Crossbow Attitude and Heading Reference System (AHRS 400) provides heading and attitude information. It consists of three accelerometers, three rate gyros, a flux-gate compass, and a digital signal processor to process the data and output it on a serial port. It outputs records providing roll, pitch, and yaw, expressed as signed 16-bit values in the range -32768 .. 32767. The GPSINS program (in gc/src/qnx/common/nav/gpsins) converts these to degrees and puts them in the rpy (roll, pitch, yaw) field of the GPSINS records. Yaw is corrected using the compass and inertial data. Pitch and roll are entirely from inertial and gravity data.

The coordinate system of the AHRS unit is defined by Crossbow as follows:

3.1 AHRS Coordinate System

The AHRS will have a label on one face illustrating the DMU coordinate system. With the connector facing you, and the mounting plate down, the axes are defined as:

X-axis from face with connector through the DMU
Y-axis along the face with connector from left to right
Z-axis along the face with the connector from top to bottom

The axes form an orthogonal right-handed coordinate system. An acceleration is positive when it is oriented towards the positive side of the coordinate axis. For example, with the AHRS sitting on a level table, it will measure zero g along the x- and y-axes and +1 g along the z-axis. Gravitational acceleration is directed downward, and this is defined as positive for the DMU z-axis. The angular rate sensors are aligned with these same axes.

The rate sensors measure angular rotation rate around a given axis. The rate measurements are labeled by the appropriate axis. The direction of a positive rotation is defined by the right-hand rule. With the thumb of your right hand pointing along the axis in a positive direction, your fingers curl around in the positive rotation direction. For example, if the AHRS is sitting on a level surface and you rotate it clockwise on that surface, this will be a positive rotation around the z-axis. The x- and y-axis rate sensors would measure zero angular rates, and the z-axis sensor would measure a positive angular rate. The magnetic sensors are aligned with the same axes definitions and sign as the linear accelerometers. Pitch is defined positive for a positive rotation around the y-axis (pitch up). Roll is defined as positive for a positive rotation around the x-axis (roll right). Yaw is defined as positive for a positive rotation around the z-axis (turn right). The angles are defined as standard Euler angles using a 3-2-1 system. To rotate from the body frame to an earth-level frame, roll first, then pitch, and then yaw.

Crossbow AHRS axes

(from Crossbow AHRS 400 manual)

The unit is installed in the Overbot with the connector facing the rear, so roll, pitch, and yaw have their usual meanings.

Note that roll, pitch, and yaw are all zero when the vehicle is flat on the ground and pointed north.

The roll, pitch, and yaw values from the AHRS appear to be generally correct at this point.

GPSINS data is updated 20 times per second. AHRS data is updated 50 times per second. There may be trouble if the GPS timestamp is being updated on AHRS updates.

LIDAR data

LIDAR data is scanned at 75 scan lines per second, synchronized with tilt head position, and transmitted to the MAP program with a tilt angle that should be valid for the scan line. Tilt is in radians, with 0 pointing down and positive values pointing forward.
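A minimal sketch of this tilt convention in vehicle coordinates (X forward, Z up; this is an illustration, not code from the LIDAR server):

```cpp
#include <array>
#include <cmath>

// Direction of the center of the scan plane for a given tilt angle.
// tilt is in radians: 0 points straight down, positive tilts forward.
std::array<double,3> tiltDirection(double tilt)
{
    return { std::sin(tilt), 0.0, -std::cos(tilt) };  // tilt 0 -> straight down
}
```

Tilt 0 yields (0, 0, -1), straight down; tilt pi/2 yields (1, 0, 0), straight ahead.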

How this data is transformed

The MAP program queries the GPSINS (or fusednav) server and receives GPSINS messages as defined in gpsins_messaging.h. It must then synchronize this information with the LIDAR scan lines and tilt data.

The GPSINS messages are converted to 4x4 transformation matrices in posefromfix in gc/src/qnx/nav/map. (All paths here refer to CVS.) This is a likely source of trouble. The key code reads as follows.

// posefromfix -- gets a pose matrix from a GPS/INS fix
void posefromfix(const GPSINSMsgRep& fix, mat4& vehpose)
{
    // fix is NED (North, East, Down) in meters
    // position is X (East), Y (North), Z (Up)
    vec3 position(XYZfromNED(fix.pos));                 // from NED coords
    // Get pose at fix
    float zrot = (M_PI*0.5)-deg2radians(fix.rpy[2]);    // z rotation; aimed in +X is zero
    const EulerAngles angs(deg2radians(fix.rpy[0]),
        deg2radians(fix.rpy[1]), zrot);                 // roll, pitch, yaw
    // This is the correct axis order per the Crossbow AHRS 400 manual.
    // (The Eul_ToHMatrix call that builds the rotation part of vehpose from angs goes here.)
    vehpose = translation3D(position)*vehpose;          // apply translation
}


The call to Eul_ToHMatrix converts a set of Euler angles to a 4x4 transformation matrix. There are 24 standard conventions for doing this, depending on the order in which the axes are composed. The constant EulOrdXYZr selects one of them; see the Euler angle conversion code in the same directory. The order, XYZ, matches the Crossbow manual. The "r" suffix means that the axes are "rotating" (each rotation is about the already-rotated axes), as opposed to "static" (each rotation is about the fixed world axes). This may be wrong.
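To make the rotating/static distinction concrete, here is a small illustration (not the Eul_ToHMatrix code itself) composing the same three angles both ways. The two products differ for general angles, which is why selecting the wrong EulOrd constant silently skews the pose:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double,3>,3>;

Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 r{};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                r[i][j] += a[i][k]*b[k][j];
    return r;
}
Mat3 rotX(double a){ double c=std::cos(a), s=std::sin(a); return {{{1,0,0},{0,c,-s},{0,s,c}}}; }
Mat3 rotY(double a){ double c=std::cos(a), s=std::sin(a); return {{{c,0,s},{0,1,0},{-s,0,c}}}; }
Mat3 rotZ(double a){ double c=std::cos(a), s=std::sin(a); return {{{c,-s,0},{s,c,0},{0,0,1}}}; }

// "Rotating" (intrinsic) XYZ: each rotation is about the already-rotated axes.
Mat3 eulerXYZr(double ax, double ay, double az)
{ return mul(rotX(ax), mul(rotY(ay), rotZ(az))); }

// "Static" (extrinsic) XYZ: each rotation is about the fixed world axes.
Mat3 eulerXYZs(double ax, double ay, double az)
{ return mul(rotZ(az), mul(rotY(ay), rotX(ax))); }
```

For matrices acting on column vectors, intrinsic XYZ with angles (a, b, c) is the product Rx(a)*Ry(b)*Rz(c), while extrinsic XYZ is Rz(c)*Ry(b)*Rx(a).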

All the general-purpose matrix math, including the operations on the vec3 and mat4 types, is in algebra3.h in gc/src/qnx/common/include.

All this gets us the vehicle position and orientation, usually referred to in the code as "vehpose". The "vehpose" matrix should transform a point in vehicle coordinates to world coordinates.

GPSINS records are timestamped, and are normally 50 ms apart. We need a valid pose for each LIDAR scan line, so we must do some interpolation. The actual interpolation is in interpolateposedumb, which converts pose matrices to angles, interpolates the angles, and converts them back. This uses more trig functions than necessary and has a singularity at attitudes the vehicle can never reach (i.e. pointed straight up), which is why it is called "dumb". It should be looked at. The proper solution uses quaternions and an algorithm called SLERP (spherical linear interpolation).
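The quaternion approach can be sketched briefly. This is a generic SLERP illustration, not project code; the Quat type and the near-parallel fallback are assumptions:

```cpp
#include <cmath>

// Unit quaternion (w + xi + yj + zk) representing an orientation.
struct Quat { double w, x, y, z; };

// Spherical linear interpolation between orientations a and b, t in [0, 1].
Quat slerp(Quat a, Quat b, double t)
{
    double dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (dot < 0) { b = {-b.w, -b.x, -b.y, -b.z}; dot = -dot; }   // take the shorter arc
    if (dot > 0.9995)
    {   // nearly parallel: plain lerp plus renormalization is numerically safer
        Quat r{a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
               a.y + t*(b.y - a.y), a.z + t*(b.z - a.z)};
        double n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
        return {r.w/n, r.x/n, r.y/n, r.z/n};
    }
    double theta = std::acos(dot);
    double s = std::sin(theta);
    double wa = std::sin((1 - t)*theta)/s, wb = std::sin(t*theta)/s;
    return {wa*a.w + wb*b.w, wa*a.x + wb*b.x,
            wa*a.y + wb*b.y, wa*a.z + wb*b.z};
}
```

SLERP moves along the great circle between the two orientations at constant angular rate, with no gimbal-lock singularity.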

Vehicle pose management, which keeps the last few timestamped vehicle poses and will interpolate a pose for any recent time, calls interpolateposedumb.

LIDAR data is synchronized with GPS data in LMShandleLidarData, which obtains an interpolated vehpose matrix matching the timestamp of the scan line.

The synchronized vehicle pose matrix and scan line are passed to LMShandlePosedLidarData. At this point the tilt data has not yet been combined with the vehicle position data. The "tilt corrector" is then called with the scan line data and the tilt information, and has an opportunity to modify the tilt. (As of August 17th, tilt correction is not turned on.)

LMStiltPose is called to convert the tilt angle to a 4x4 transformation matrix.

const mat4 scannerpose(vehpose*LMStiltPose(tilt));

This computes the position and orientation of the LIDAR scan head in world space. Note that two transformation matrices have been combined by multiplying them. This seems to be correct, but note that this multiplication is not commutative.
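The order sensitivity is easy to demonstrate. A toy example in 2-D homogeneous coordinates, unrelated to the actual matrices: translating then rotating differs from rotating then translating.

```cpp
#include <array>
#include <cmath>

using M3 = std::array<std::array<double,3>,3>;  // 2-D homogeneous transform

M3 mul(const M3& a, const M3& b)
{
    M3 r{};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                r[i][j] += a[i][k]*b[k][j];
    return r;
}
M3 translate(double x, double y) { return {{{1,0,x},{0,1,y},{0,0,1}}}; }
M3 rotate(double a)
{
    double c = std::cos(a), s = std::sin(a);
    return {{{c,-s,0},{s,c,0},{0,0,1}}};
}
```

mul(translate(1,0), rotate(M_PI/2)) leaves the translation at (1, 0), while mul(rotate(M_PI/2), translate(1,0)) rotates it to (0, 1).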

LMStiltPose is worth a look.

const vec3 k_scanneroffset(1.65,0,2.08); // offset between GPS antenna and LMS scanner, meters

...this constant is passed into LMSmapUpdater's constructor, where the scanner offset transform is precomputed.

m_scanneroffsetransform = translation3D(m_scanneroffset); // construct matrix for scanner offset from GPS pos


// LMStiltPose -- transformation from vehicle coords to scanner coords
// Remember, Z is upward.
mat4 LMSmapUpdater::LMStiltPose(float tilt)
{
    ////float tiltdeg = tilt*(180/M_PI) - 90;  // tilt in degrees, 0 is horizontal, up is +
    ////float tiltdeg = tilt*(-180/M_PI) - 90; // tilt in degrees, 0 is horizontal, up is +
    float tiltdeg = tilt*(-180/M_PI) + 90;     // tilt in degrees, 0 is horizontal, up is +
    if (getVerboseLevel() >= 3)
    {   printf("Tilt below horizontal, degrees: %6.3f\n", tiltdeg);
        dump(rotation3D(k_scannertiltaxis, tiltdeg), "tilt pose");
    }
    // compute scanner transform
    return(m_scanneroffsetransform*rotation3D(k_scannertiltaxis, tiltdeg));
}


The tilt pose matrix should transform a point in scanner coordinates to vehicle coordinates. And scannerpose should transform a point in scanner coordinates to world coordinates.

The actual scan data consists of arrays of distances from the scanner, in integer centimeters. Ranges are first converted to floating-point meters in LMSupdateScanlinePair.

// Get relevant ranges and convert to meters.
// Invalid values become zero.
float r1a = LMSrangeConvert(lp1.m_range[center+i-1]);
float r1b = LMSrangeConvert(lp1.m_range[center+i]);
float r2a = LMSrangeConvert(lp2.m_range[center+i-1]);
float r2b = LMSrangeConvert(lp2.m_range[center+i]);
// Calculate minimum range for each triangle.
// This is used only so that near updates override far ones.
float minrange1 = std::min(std::min(r1a,r1b),r2a);
float minrange2 = std::min(std::min(r1b,r2a),r2b);
if (minrange1 <= 0 || minrange2 <= 0)
{   // ***Check for "pollution" value.
}
// Compute the three points of each triangle.
// Note that this may take place with zero (bogus) ranges.
// These points are in scanner space.
vec3 p1a(k_scan_table.scanVector(i-1, lp1odd)*r1a);
vec3 p1b(k_scan_table.scanVector(i, lp1odd)*r1b);
vec3 p2a(k_scan_table.scanVector(i-1, lp2odd)*r2a);
vec3 p2b(k_scan_table.scanVector(i, lp2odd)*r2b);
if (getVerboseLevel() >= 4)                    // really, really verbose
{   printf("Pos %3d: ranges %6.2f %6.2f %6.2f %6.2f\n", i, r1a, r1b, r2a, r2b);
    dump(p1a, "p1a");                          // ***TEMP*** very verbose
}
// Single point check for threatening points.
// We only look at one point of each group, so we examine each point exactly once.
float threatrange = p1a*vec3(1,0,0);           // range from scanner, forward, in scan plane
// Transform points into world space
p1a = scannerpose1*p1a;
p1b = scannerpose1*p1b;
p2a = scannerpose2*p2a;
p2b = scannerpose2*p2b;

The computations involving k_scan_table convert LIDAR ranges into points in the plane of the scan, using a precomputed table of radial vectors. X is ahead and Y is to the right.
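A sketch of what one scan-table entry might compute (the real k_scan_table is precomputed in the MAP program, and the exact angular layout here is an assumption). For a 180-degree scanner swept from the right (+Y) through forward (+X) to the left (-Y):

```cpp
#include <array>
#include <cmath>

// Unit direction of beam i in the scan plane (X ahead, Y to the right).
std::array<double,3> scanVector(int i, int nBeams)
{
    double theta = M_PI * i / (nBeams - 1);   // 0 = right, pi/2 = ahead, pi = left
    return { std::sin(theta), std::cos(theta), 0.0 };
}
```

A point in scanner space is then the beam's direction scaled by its range, which is what the p1a..p2b computations above do.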

Those last multiplications transform points in scanner space into points in world space, which is where everything comes together.

Somewhere this is wrong when the vehicle is not on flat terrain. Where?