Tag Archives: IMU

Body/spine and improved leg kinematics

The Python test program has been updated to include the additional spine joints, transformations between robot and world coordinates, and leg targets which take orientation into account.

This test script is used in anticipation of controlling the actual robot’s servos.

 


Spine joints

The “spine” consists of two joints which will allow the front of the robot to pitch and yaw independently of the rear. This will give it more flexibility when turning and handling uneven terrain, as well as other tasks such as aiming its sensors at the world.

Since the spine joints are quite simple, I don’t think there is any need to create IK for this section.
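Since there is no IK, the spine’s contribution is just a forward transform composed from its two joint angles. Below is a minimal sketch of the idea, written in C++ with the ROS tf types that appear in the code later on this page (the actual test program is Python); the joint axis assignment and link offsets are placeholder assumptions, not the real Quadbot geometry.

// Minimal sketch: forward transform from the front body section to the rear
// one, composed from the two spine servo joints. The axis assignment
// (joint 1 = yaw about z, joint 2 = pitch about y) and the link offsets are
// placeholder assumptions for illustration only.
#include <tf/transform_datatypes.h>

tf::Transform spineTransform(double spineJoint1, double spineJoint2)
{
    // Joint 1: yaw, swinging the rear section left/right relative to the front
    tf::Transform yawJoint(tf::createQuaternionFromRPY(0.0, 0.0, spineJoint1),
                           tf::Vector3(-0.05, 0.0, 0.0));  // placeholder offset (m)

    // Joint 2: pitch, tilting the rear section up/down
    tf::Transform pitchJoint(tf::createQuaternionFromRPY(0.0, spineJoint2, 0.0),
                             tf::Vector3(-0.05, 0.0, 0.0));

    return yawJoint * pitchJoint;  // front_section -> rear_section
}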

 

Main Assembly v114 Spine

The “spine” joints separate the body of the quadruped into mostly similar front and rear sections.

 


Body and spine orientation

The robot body now takes into account that it has to be oriented w.r.t. the “world”. On the physical robot this will be achieved using the information acquired from an IMU sensor. If the robot is tilted forwards, the targets for the legs will have to be adjusted so that the robot maintains its balance.

I have defined the kinematics such that if the robot were to rotate w.r.t. the world, the whole body rotates (this can be tested by moving the Roll/Pitch/Yaw sliders). However, if the servo joints of the spine are moved (test joint 1 / joint 2 sliders), the rear section of the robot moves w.r.t. the world, and the rear legs move with it, while the front section doesn’t change w.r.t. the world.

In order to achieve this, the leg IK had to be updated so that the base frames of the front legs are now linked to the front section of the robot, and the base frames of the rear legs to the rear section.
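To make that concrete, here is a rough sketch of how the transform chains could be composed (again in C++ with tf types; the frame names, the spineTransform() helper from the sketch above and the fixed mounting offsets are all illustrative assumptions, not the actual script’s structure).

// Sketch: world -> leg-base transforms. Front leg bases follow the body
// (world) orientation directly, while rear leg bases also pick up the spine
// joint motion. All names and offsets are placeholders.
#include <tf/transform_datatypes.h>

tf::Transform legBaseInWorld(const tf::Transform& worldToBody,       // from IMU / test RPY sliders
                             const tf::Transform& bodyToRear,        // from the two spine joints
                             const tf::Transform& sectionToLegBase,  // fixed mounting offset
                             bool isRearLeg)
{
    if (isRearLeg)
        return worldToBody * bodyToRear * sectionToLegBase;  // rear legs move with the spine
    return worldToBody * sectionToLegBase;                   // front legs stay with the front section
}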

You might notice that while orientation will be defined by an IMU, pure translation (movement in XYZ) in the world is not considered for now, as it is meaningless without some sort of localisation capability in place. This could be achieved by a sensor (see below), but that is an entirely separate challenge for a long way down the line (hint: SLAM).

Quadbot 17 Quad Kinematics_004

New leg targets: Foot roll/pitch can be attained (within limits). In addition, the robot base can be positioned with respect to an outside world frame.

Quadbot 17 Quad Kinematics_002

Original leg targets: The feet always point perpendicular to the ground.

 


Target roll and yaw

Initially, the leg target was simply a position in 3D for the foot link, and the foot was always pointing perpendicular to the ground, which made the inverse kinematics fairly easy. In version 2, the target orientation is now also taken into account. In fact, pitch and roll can be targeted, but yaw cannot, simply because of the mechanics of the legs. Yawing, or turning, can be done by changing the walking gait pattern alone, but the idea is that the spine bend will also aid in steering the robot (how exactly I don’t know yet, but that will come later!).

Getting the kinematics to work was a bit trickier than in the original version, mainly because the “pitching” orientation of the leg can only be achieved by the positioning of joint 4, whereas the “rolling” orientation can only be achieved by the positioning of joint 5. The available workspace of the foot is also somewhat limited, in part due to that missing yaw capability. In particular, at positions where the leg has to stretch sideways (laterally), certain roll/pitch combinations are impossible to reach. Nevertheless, this implementation gives the feet enough freedom to be placed on fairly uneven surfaces, rather than being constrained to a flat plane as before.

The next challenge that follows from this is: how can realistic target positions and orientations be generated (beyond predetermined, fixed walking gaits) to match what the robot sees of the world?

To answer this, first I need to decide how the robot sees the world: Primarily it will be by means of some 3D scanner, such as the ones I’ve looked into in the past, or maybe the Intel RealSense ZR300 which has recently caught my attention. But this alone might not be sufficient, and some form of contact sensors on the feet may be required.

The big question is, should I get a RealSense sensor for this robot ??? 🙂

 


 

Updated code can be found on GitHub (the single-file test script is starting to get long; it might be time to split it up into class files!).

 

 

Balancing ideas

I had briefly tried a simple balancing function in the past, using a PID controller which aims to balance the Bioloid with the ankle motors by trying to keep the IMU (accelerometer/gyro) vertical. The results were mixed, but it was only an early test. I am considering revisiting this balancing test, this time using a number of PID controllers to control multiple groups of leg motors (e.g. hips, knees and ankles), while also using the GUI to make testing faster and easier.
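As a reminder of the structure such a controller takes, here is a minimal sketch of one PID loop: the input is the pitch (or roll) error from the IMU, the output is an offset applied to a group of joints such as the ankles. The gains, loop time and joint mapping are placeholders, not values tuned for the Bioloid.

// Minimal PID sketch for the balancing idea (gains and dt are placeholders).
struct Pid
{
    double kp, ki, kd;
    double integral;
    double prevError;

    Pid(double p, double i, double d) : kp(p), ki(i), kd(d), integral(0.0), prevError(0.0) {}

    double update(double error, double dt)
    {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Example use in a fixed-rate control loop:
//   Pid anklePid(2.0, 0.0, 0.1);
//   double offset = anklePid.update(0.0 - imuPitch, 0.01);  // target pitch = 0 rad
//   // ...add 'offset' to the goal positions of both ankle servos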

 

On another note, I recently came across the Nengo Neural Simulator, a framework for building networks of leaky integrate-and-fire (LIF) neurons in order to create complex computational models. It has been used to create Spaun, the world’s largest functional brain model, which is able to perform a number of functions such as vision, memory and counting, as well as drawing what it sees by controlling a simple arm.

What stands out is how easy it is to use the Nengo GUI to build neural networks. The interface runs in the browser and visualisations of neuron spiking activity and other metrics are easily shown for each graphical object. There is also support for scripting in Python. Installing it and trying it out for yourself is pretty simple, just follow the Getting Started guide here.

 

It would be really interesting to see if some form of PID controller could be built using Nengo, and then used to control the Bioloid’s balancing!

Sensor grapher

A sensor plotting window has now been added to the GUI, which shows all data from the IMU and pressure sensors. The accelerometer, magnetometer, gyroscope, heading data and Force Sensing Resistors’ (FSRs) data are all published as ROS messages as shown in this post, so reading them in the Qt GUI is fairly straightforward, in a similar way to how the joint states are being read. The graphs are made using a third party library for Qt called QCustomPlot.

Each graph shows a scrolling 10-second window of buffered data, which can be paused/played. With QCustomPlot it’s easy to enable user interaction with graphs (drag axis ranges with the mouse, zoom with the mouse wheel, etc.), so I enabled this option whenever a graph is in the paused state.
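For reference, the scrolling behaviour boils down to a few QCustomPlot calls each time a new sample arrives. The snippet below is a generic sketch against the QCustomPlot 1.x API; the widget and variable names are made up, not the GUI’s actual code.

#include "qcustomplot.h"
#include <QDateTime>

// Sketch: append a sample to graph 0 and keep a scrolling 10-second window.
void appendSample(QCustomPlot* plot, double value)
{
    double key = QDateTime::currentMSecsSinceEpoch() / 1000.0;  // time in seconds

    plot->graph(0)->addData(key, value);
    plot->graph(0)->removeDataBefore(key - 10.0);      // drop samples older than the buffer
    plot->xAxis->setRange(key, 10.0, Qt::AlignRight);  // scroll the visible window
    plot->replot();
}

// When a graph is paused, skip the setRange()/replot() scrolling and enable
// user interaction instead:
//   plot->setInteractions(QCP::iRangeDrag | QCP::iRangeZoom);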

The y-axes currently show raw data, which I will probably update to show values in standard units.

Screenshot from 2016-04-03 19-46-07.png

A useful thing I found in Qt with QDockWidget, which is used to create dockable/floating sub-windows, is that these widgets can also be tabbed to save space on the screen. How can this be done in code, you may ask? The useful functions I found were: setDockNestingEnabled() (or setDockOptions()) and tabifyDockWidget() on QMainWindow, and raise() to select the tabbed dock widget you want displayed by default.
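Putting those together, a minimal sketch looks like this (the dock widget names are placeholders, not the GUI’s actual objects):

#include <QMainWindow>
#include <QDockWidget>

// Sketch: dock two widgets into the same area and stack them as tabs.
void tabifySensorDocks(QMainWindow* mainWindow, QDockWidget* imuDock, QDockWidget* fsrDock)
{
    mainWindow->setDockNestingEnabled(true);                       // allow nested/tabbed dock layouts
    mainWindow->addDockWidget(Qt::BottomDockWidgetArea, imuDock);
    mainWindow->addDockWidget(Qt::BottomDockWidgetArea, fsrDock);
    mainWindow->tabifyDockWidget(imuDock, fsrDock);                // stack the two docks as tabs
    imuDock->raise();                                              // select this tab by default
}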

That’s pretty much all there is to the sensor grapher. I might add more features to it in the future, but for now it does the basics!

ROS interconnections

In this post I will cover the current state of my ROS setup for controlling the Bioloid’s motors and custom hardware.

The rqt_graph package provides a way of visualising the interconnections between all running ROS nodes. The following graphs show the current setup:

Here is a breakdown of the running nodes:

  • bioloid_sensors (MCU Arduino code): The MCU is running an Arduino program using the rosserial_arduino library, and publishes the IMU’s data as ROS messages over serial-to-USB. The rosserial package was useful as an easy way of sending ROS messages from the MCU to the controlling PC.
  • serial_node: This is a node included in the rosserial_python package, which reads serial data from the MCU.
  • imu_tf_broadcaster: This is a custom node which uses the IMU’s data to calculate the robot’s orientation, as detailed in this previous post.
  • dummy_odom_frame_broadcaster: This is a static_transform_publisher node from tf, which transforms from the map frame to the odom frame without any actual change in orientation or position. This is simply to keep in line with the ROS conventions, as explained in this previous post (a minimal equivalent node is sketched after this list).
  • ax_joint_controller: This is another custom node for communicating with the AX-12 servos. This is a work-in-progress, but currently acts as a ROS wrapper around the USB2Dynamixel/USB2AX library. It publishes the servo positions (in radians) as a ROS sensor_msgs JointState message, on the ax_joint_states topic. It also runs a number of ROS services for controlling the servos from other ROS nodes. These are either higher level commands (such as for setting all motors to home position), or low level commands for directly reading from or writing to the AX-12 Control Table’s addresses.
  • joint_state_publisher: The joint_state_publisher is a tool for setting and publishing joint state values for a given URDF file. The node here reads the values from the ax_joint_states topic (by setting the source_list parameter) and publishes on the joint_states topic. I suppose this node could currently be bypassed entirely; however, it does provide a nice GUI window for visualising the joint positions as sliders, and is also useful for moving the robot’s joints in RViz when ax_joint_controller is offline.
  • robot_state_publisher: The robot_state_publisher works in tandem with the joint_state_publisher and publishes the state of the robot to tf. The robot_state_publisher reads the configuration of the kinematic chain from the URDF file (set by the robot_description parameter). The set up of the URDF file has been covered in a previous post. The virtual model of the robot is then displayed in RViz, with joint positions reflecting those of the physical robot.
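For illustration only, the dummy map → odom transform could equivalently be broadcast from a tiny C++ node like the sketch below; the actual setup simply uses tf’s static_transform_publisher, so this is just to show what that node is doing.

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

// Sketch: broadcast an identity map -> odom transform, equivalent to what the
// static_transform_publisher node provides in the setup above.
int main(int argc, char** argv)
{
    ros::init(argc, argv, "dummy_odom_frame_broadcaster");
    ros::NodeHandle n;
    tf::TransformBroadcaster br;
    ros::Rate rate(10);  // Hz

    tf::Transform identity(tf::Quaternion(0.0, 0.0, 0.0, 1.0), tf::Vector3(0.0, 0.0, 0.0));

    while (n.ok())
    {
        br.sendTransform(tf::StampedTransform(identity, ros::Time::now(), "map", "odom"));
        rate.sleep();
    }
    return 0;
}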

That is about it for the current setup. I have some work-in-progress nodes which interface with the above framework and test some of the robot’s motions, so in a following post I will discuss these and show some new videos of the physical robot in action!

Using quaternions to create a better IMU complementary filter

A better alternative to the RPY approach

After realising in my previous post that solving the gimbal lock problem for the complementary filter requires fiddly and inelegant fixes, I decided to dive into the world of quaternions. Luckily, the theory behind quaternions doesn’t need to be fully understood in order to use them to represent 3D rotations. Particularly useful are the conversions between quaternion and rotation matrix representations, and the axis-angle representation.

I initially tried to make the Arduino MCU (A-Star) perform the filtering process and pass the IMU frame transform to the PC; however, this presented a couple of issues. The first is that there is no standard vector and matrix library for Arduino. Second, although I could write a very simple library for vector cross and dot products etc., the MCU’s 32 KB of flash memory was already almost full with all the rosserial and IMU libraries, leaving little space for anything else.
Hence I opted for letting the MCU simply pass a number of ROS message streams over to the PC, which can then do the required transformations using the ROS tf library. The Arduino code is listed at the end of this post (bioloid_sensors.ino).

The MCU reads the MinIMU-9 v3 IMU data and publishes it as ROS messages on the following topics:

  • LSM303 accelerometer x, y, z values (“accel”)
  • LSM303 magnetometer x, y, z values (“magnet”)
  • L3G gyroscope x, y, z values (“gyro”)
  • LSM303 magnetometer heading value (“heading”), derived from the LSM303’s library, as the angular difference in the horizontal plane between the x axis and magnetic North

ROS has a very handy plotting tool for topics, called rqt_plot. Below is an example dump of all the IMU data.

IMU messages plotted in rqt_plot



Quaternion-based filter

Having spent too much time on the RPY approach already, I wanted to find a simple way to achieve a relatively stable orientation from the IMU readings. I ended up implementing the method from this AHRS maths tutorial from the Firetail UAV system. The method is explained very well in that link, so there is no need to go into the details. The only change I have made is in the calculation of the filter coefficient, based on a set time constant as was done previously. My version using a ROS subscriber on the PC side is again listed at the end of this post (imu_tf_broadcaster.cpp and .h files).


Videos

Here are two videos comparing the original RPY approach against the improved quaternion approach. The resulting IMU transform is published by tf, which is used by the robot model you see in RViz, the ROS visualisation tool. The model is generated using a URDF file; I will explain this in detail in a following post.

Although in both cases the rotation is fairly smooth, you can see the problems that the RPY filtering encounters when it nears the gimbal lock position (when the robot lies horizontally). For my purposes of orienting the robot, I think the current quaternion approach is perfectly suited. I doubt I will need to play around with Kalman filters and the like, as I currently don’t need the precision that UAVs etc. may require!

So that’s it for IMUs and orientation for the time being. In my next post I will start detailing the current progress with the virtual Bioloid in ROS, which was seen in the above videos.


Code:


// Hardware:
// Pololu A-Star 32U4 Mini SV
// Pololu MinIMU-9 v3 (L3GD20H and LSM303D)
// Interlink FSR 400 Short (x6)

// Important! Define this before #include <ros.h>
#define USE_USBCON

#include <Wire.h>
#include <ros.h>
#include <std_msgs/Int16MultiArray.h>
#include <geometry_msgs/Vector3.h>
#include <std_msgs/Float32.h>
#include <AStar32U4Prime.h>
#include <LSM303.h>
#include <L3G.h>

// Set up the ros node and publishers
ros::NodeHandle nh;
std_msgs::Int16MultiArray msg_fsrs;
std_msgs::MultiArrayDimension fsrsDim;
ros::Publisher pub_fsrs("fsrs", &msg_fsrs);
geometry_msgs::Vector3 msg_accel;
ros::Publisher pub_accel("accel", &msg_accel);
geometry_msgs::Vector3 msg_magnet;
ros::Publisher pub_magnet("magnet", &msg_magnet);
std_msgs::Float32 msg_heading;
ros::Publisher pub_heading("heading", &msg_heading);
geometry_msgs::Vector3 msg_gyro;
ros::Publisher pub_gyro("gyro", &msg_gyro);
unsigned long pubTimer = 0;

const int numOfFSRs = 6;
const int FSRPins[] = {A0, A1, A2, A3, A4, A5};
int FSRValue = 0;
LSM303 compass;
L3G gyro;
const int yellowLEDPin = IO_C7;  // 13
int LEDBright = 0;
int LEDDim = 5;
unsigned long LEDTimer = 0;

void setup()
{
    // Array for FSRs
    msg_fsrs.layout.dim = &fsrsDim;
    msg_fsrs.layout.dim[0].label = "fsrs";
    msg_fsrs.layout.dim[0].size = numOfFSRs;
    msg_fsrs.layout.dim[0].stride = 1*numOfFSRs;
    msg_fsrs.layout.dim_length = 1;
    msg_fsrs.layout.data_offset = 0;
    msg_fsrs.data_length = numOfFSRs;
    msg_fsrs.data = (int16_t *)malloc(sizeof(int16_t)*numOfFSRs);

    nh.initNode();
    nh.advertise(pub_fsrs);
    nh.advertise(pub_accel);
    nh.advertise(pub_magnet);
    nh.advertise(pub_heading);
    nh.advertise(pub_gyro);

    // Wait until connected
    while (!nh.connected())
        nh.spinOnce();
    nh.loginfo("ROS startup complete");

    Wire.begin();

    // Enable pullup resistors
    for (int i=0; i<numOfFSRs; ++i)
        pinMode(FSRPins[i], INPUT_PULLUP);

    if (!compass.init())
    {
        nh.logerror("Failed to autodetect compass type!");
    }
    compass.enableDefault();

    // Compass calibration values
    compass.m_min = (LSM303::vector<int16_t>){-3441, -3292, -2594};
    compass.m_max = (LSM303::vector<int16_t>){+2371, +2361, +2328};

    if (!gyro.init())
    {
        nh.logerror("Failed to autodetect gyro type!");
    }
    gyro.enableDefault();

    pubTimer = millis();
}

void loop()
{
    if (millis() > pubTimer)
    {
        for (int i=0; i<numOfFSRs; ++i)
        {
            FSRValue = analogRead(FSRPins[i]);
            msg_fsrs.data[i] = FSRValue;
            delay(2);  // Delay to allow ADC VRef to settle
        }

        compass.read();
        gyro.read();

        // Compass - accelerometer:
        // 16-bit, default range +-2 g, sensitivity 0.061 mg/digit
        // 1 g = 9.80665 m/s/s
        // e.g. value for z axis in m/s/s will be: compass.a.z * 0.061 / 1000.0 * 9.80665
        //      value for z axis in g will be: compass.a.z * 0.061 / 1000.0
        // Gravity is measured as an upward acceleration:
        // Stationary accel. shows +1 g value on axis facing directly "upwards"
        // Convert values to g
        msg_accel.x = (float)(compass.a.x)*0.061/1000.0;
        msg_accel.y = (float)(compass.a.y)*0.061/1000.0;
        msg_accel.z = (float)(compass.a.z)*0.061/1000.0;

        // Compass - magnetometer:
        // 16-bit, default range +-2 gauss, sensitivity 0.080 mgauss/digit
        msg_magnet.x = (float)(compass.m.x);
        msg_magnet.y = (float)(compass.m.y);
        msg_magnet.z = (float)(compass.m.z);
        // Heading from the LSM303D library is the angular difference in
        // the horizontal plane between the x axis and North, in degrees.
        // Convert value to rads, and change range to +-pi
        msg_heading.data = ( (float)(compass.heading())*M_PI/180.0 );

        // Gyro:
        // 16-bit, default range +-245 dps (deg/sec), sensitivity 8.75 mdps/digit
        // Convert values to rads/sec
        msg_gyro.x = (float)(gyro.g.x)*0.00875*M_PI/180.0;
        msg_gyro.y = (float)(gyro.g.y)*0.00875*M_PI/180.0;
        msg_gyro.z = (float)(gyro.g.z)*0.00875*M_PI/180.0;

        pub_fsrs.publish(&msg_fsrs);
        pub_accel.publish(&msg_accel);
        pub_magnet.publish(&msg_magnet);
        pub_heading.publish(&msg_heading);
        pub_gyro.publish(&msg_gyro);

        pubTimer = millis() + 10;  // wait at least 10 msecs between publishing
    }

    // Pulse the LED
    if (millis() > LEDTimer)
    {
        LEDBright += LEDDim;
        analogWrite(yellowLEDPin, LEDBright);
        if (LEDBright == 0 || LEDBright == 255)
            LEDDim = -LEDDim;

        // 50 msec increments, 2 sec wait after each full cycle
        if (LEDBright != 0)
            LEDTimer = millis() + 50;
        else
            LEDTimer = millis() + 2000;
    }

    nh.spinOnce();
}


#include "imu_tf_broadcaster.h"

int main(int argc, char **argv)
{
    ros::init(argc, argv, "imu_tf_broadcaster");
    ros::NodeHandle n;
    ros::Rate loop_rate(1000);  // Hz

    Broadcaster broadcaster;

    ros::Subscriber accelSub   = n.subscribe("accel",   1000, &Broadcaster::accelCallback,   &broadcaster);
    ros::Subscriber magnetSub  = n.subscribe("magnet",  1000, &Broadcaster::magnetCallback,  &broadcaster);
    ros::Subscriber headingSub = n.subscribe("heading", 1000, &Broadcaster::headingCallback, &broadcaster);
    ros::Subscriber gyroSub    = n.subscribe("gyro",    1000, &Broadcaster::gyroCallback,    &broadcaster);

    while(n.ok())
    {
        //broadcaster.updateTimers();
        //broadcaster.updateRotation();
        broadcaster.tfBroadcaster->sendTransform(
            tf::StampedTransform(
                tf::Transform(broadcaster.getQ(), tf::Vector3(0.0, 0.0, 0.0)),
                ros::Time::now(),"odom", "imu_link"));
        loop_rate.sleep();
        ros::spinOnce();

//        std::cout << "dt: " << broadcaster.getDt() << std::endl;
//        std::cout << "q.x: " << broadcaster.getQ().x() << std::endl;
//        std::cout << "q.y: " << broadcaster.getQ().y() << std::endl;
//        std::cout << "q.z: " << broadcaster.getQ().z() << std::endl;
//        std::cout << "q.w: " << broadcaster.getQ().w() << std::endl;
//        std::cout << "----" << std::endl;
    }

    return 0;
}

Broadcaster::Broadcaster() :
    prevt(ros::Time::now().toSec()),
    dt(0.0),
    timeConst(1.0),
    filterCoeff(0.0)
{
    q.setRPY(0.0, 0.0, 0.0);
    tfBroadcaster = new tf::TransformBroadcaster();
}

Broadcaster::~Broadcaster()
{

}

void Broadcaster::accelCallback(const geometry_msgs::Vector3::ConstPtr& msg)
{
    accel.setX(msg->x);
    accel.setY(msg->y);
    accel.setZ(msg->z);
}

void Broadcaster::magnetCallback(const geometry_msgs::Vector3::ConstPtr& msg)
{
    magnet.setX(msg->x);
    magnet.setY(msg->y);
    magnet.setZ(msg->z);
}

void Broadcaster::headingCallback(const std_msgs::Float32::ConstPtr& msg)
{
    heading = msg->data;
}

void Broadcaster::gyroCallback(const geometry_msgs::Vector3::ConstPtr& msg)
{
    dt = ros::Time::now().toSec() - prevt;

    gyro.setX(msg->x);
    gyro.setY(msg->y);
    gyro.setZ(msg->z);

    angularVel.setX(gyro.x());
    angularVel.setY(gyro.y());
    angularVel.setZ(gyro.z());

    filterCoeff = timeConst / (timeConst + dt);

    // Use accelerometer and magnetometer data to correct gyro drift
    correctOrientation();

    updateRotation();

    prevt = ros::Time::now().toSec();

//    std::cout << "angularVel x: " << angularVel.x() << std::endl;
//    std::cout << "angularVel y: " << angularVel.y() << std::endl;
//    std::cout << "angularVel z: " << angularVel.z() << std::endl;
//    std::cout << "----" << std::endl;
}

void Broadcaster::updateRotation()
{
    // New quaternion, from axis-angle notation for gyro
    tf::Quaternion qNew(angularVel.normalized(), angularVel.length()*dt);

    // Update previous value
    q *= qNew;
    q.normalize();
}

void Broadcaster::correctOrientation()
{
    // Use acceleration data only if vector is close to 1 g
    if ( fabs(accel.length() - 1) <= 0.1 )
    {
        // These vectors have to be perpendicular.
        // As there is no guarantee that North is perpendicular to Down,
        // set North to the cross product of East and Down.
        // Gravity is measured as an upward acceleration.
        tf::Vector3 Down(accel);  // Should be -accel, but not sure why this produces inverted axes!?
        tf::Vector3 E = Down.cross(magnet);
        tf::Vector3 N = E.cross(Down);

        Down.normalize();
        E.normalize();
        N.normalize();

        // The rows of the rotation matrix represent the coordinates in the original
        // space of unit vectors along the coordinate axes of the rotated space
        tf::Matrix3x3 gyroRotMat = tf::Matrix3x3(q);

        // Correction vector
        tf::Vector3 cv = ( N.cross(gyroRotMat.getRow(0)) +
                           E.cross(gyroRotMat.getRow(1)) +
                           Down.cross(gyroRotMat.getRow(2)) ) * filterCoeff;

        angularVel += cv;
    }
}


#ifndef IMU_TF_BROADCASTER_H
#define IMU_TF_BROADCASTER_H

#include "ros/ros.h"
#include "geometry_msgs/Vector3.h"
#include "std_msgs/Float32.h"
#include "tf/transform_broadcaster.h"

class Broadcaster
{
public:
    Broadcaster();
    virtual ~Broadcaster();
    void accelCallback(const geometry_msgs::Vector3::ConstPtr& msg);
    void magnetCallback(const geometry_msgs::Vector3::ConstPtr& msg);
    void headingCallback(const std_msgs::Float32::ConstPtr& msg);
    void gyroCallback(const geometry_msgs::Vector3::ConstPtr& msg);
    //void updateTimers();
    void updateRotation();
    void correctOrientation();
    tf::TransformBroadcaster* tfBroadcaster;
    tf::Quaternion getQ() const {return q;}
    double getPrevt() const {return prevt;}
    double getDt() const {return dt;}
    float getTimeConst() const {return timeConst;}
    void setTimeConst(float value) {timeConst = value;}

private:
    tf::Vector3 accel;
    tf::Vector3 magnet;
    float heading;
    tf::Vector3 gyro;

    tf::Vector3 angularVel;
    tf::Quaternion q;
    double prevt;
    double dt;
    float timeConst;
    float filterCoeff;
};

#endif // IMU_TF_BROADCASTER_H

Using the Arduino to read sensor data

Force Sensing Resistors and Inertial Measurement Unit

The first hardware parts to be tested were the inertial measurement unit (IMU) and the force sensing resistors (FSRs). The Arduino IDE was used for programming. The FSRs are straightforward analogue inputs, so I will concentrate on the IMU.

In order to be in line with the ROS coordinate frame conventions, the robot’s base frame will be in this orientation (right-hand rule):

  • x axis: forward
  • y axis: left
  • z axis: up

For now I am assuming the IMU is in the same orientation, but as it is not yet mounted to the robot, this might change later on. Again, in keeping with the ROS conventions on relationships between coordinate frames, the following transforms and their relationships have to be defined:

\text{map} \rightarrow \text{odom} \rightarrow \text{base\_link}

All these frames and their relationships can be easily managed using the ROS tf package.
I have added the imu_link to this chain, which represents the IMU’s orientation with respect to the odom frame. An additional link is also added to transform the standard base_link to the robot’s torso orientation. So the transform chain actually looks like this:
\text{map} \rightarrow \text{odom} \rightarrow \text{imu\_link} \rightarrow \text{base\_link} \rightarrow \text{torso\_link}


Calculating roll/pitch/yaw

I initially decided to let the A-Star (Arduino-compatible) board perform the IMU calculations and pass the roll/pitch/yaw (RPY) values to the PC as a ROS message. A ROS node will then read these values and broadcast the transform from odom to imu_link using tf.

The IMU I am using, the MinIMU-9 v3 from Pololu, consists of an LSM303D accelerometer-magnetometer chip and an L3GD20H gyro chip in a single package. The accelerometer, magnetometer and gyro are all three-axis types, providing 9 degrees of freedom and allowing for a full representation of orientation in space. The data can be easily read using the provided Arduino libraries.

Rotations in Cartesian space will be represented in terms of intrinsic Euler angles, around the three orthogonal axes x, y and z of the IMU. As the order in which rotations are applied is important, I decided to use the standard “aerospace rotation sequence”, which has an easy mnemonic: An aeroplane yaws along the runway, then pitches during takeoff, and finally rolls to make a turn once it is airborne. The rotation sequence is thus:

  • Rotation about z: yaw (±180°)
  • Rotation about y: pitch (±90°)
  • Rotation about x: roll (±180°)

Pitch has to be restricted to ±90° in order to represent any rotation with a single set of RPY values. The accelerometer alone can extract roll and pitch estimates, but not yaw, as the measured gravity vector doesn’t change when the IMU is rotated about the vertical (gravity) axis. This is where the magnetometer (compass) comes into use.
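For reference, the basic roll and pitch estimates follow directly from the measured gravity vector, as shown below (the code at the end of this post additionally weighs a small μ-term into the roll calculation to avoid a singularity when a_z approaches zero), while yaw is taken from the magnetometer heading converted to radians:

Roll_{accel} = \text{atan2}(a_y, a_z)
Pitch_{accel} = \text{atan2}(-a_x, \sqrt{a_y^2 + a_z^2})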

Hence, using these accelerometer and magnetometer values, the IMU’s orientation can be fully described in space. However, these measurements are very noisy and should only be used as long-term estimates of orientation. Also, any accelerations due to movement, and not just gravity, add disturbances to the estimate. To compensate for this, the gyro readings can be used as a short-term and more accurate representation of orientation, as the gyro readings are more precise and not affected by external forces.


The complementary filter

Gyro measurements alone suffer from drift, so they have to be combined with the accelerometer/magnetometer readings to get a clean signal. A fairly simple way to do this is using what is known as a complementary filter. It can be implemented in code fairly simply, in the following way:

Roll_{curr} = C_f \times (Roll_{prev} + X_{gyro} \times dt) + (1 - C_f) \times Roll_{accel}
Pitch_{curr} = C_f \times (Pitch_{prev} + Y_{gyro} \times dt) + (1 - C_f) \times Pitch_{accel}
Yaw_{curr} = C_f \times (Yaw_{prev} + Z_{gyro} \times dt) + (1 - C_f) \times Yaw_{magnet}

On every loop, the previous RPY values are updated with the new IMU readings. The complementary filter is essentially a high-pass filter on the gyro readings, and a low-pass filter on the accelerometer/magnetometer readings. C_f is a filtering coefficient, which determines the time boundary between trusting the gyroscope vs trusting the accelerometer/magnetometer, defined as:
C_f = \frac{ t_{const} }{ (t_{const} + dt) }
dt is the loop time, and t_{const} is a time constant which determines the length of recent history for which the gyro data takes precedence over the accelerometer/magnetometer data when calculating the new values. I have set this to 1 second, so accelerometer data older than about a second will take precedence and correct any gyro drift. For example, with t_{const} = 1 s and a 10 ms loop time, C_f \approx 0.99, so each update relies almost entirely on the integrated gyro, while the accelerometer/magnetometer slowly pull the estimate back. A good source of information on the complementary filter can be found here.


Issues with filtering roll/pitch/yaw values

However, when using RPY calculations, applying this filter is not so straightforward. Firstly, when the roll/yaw values cross over the ±180° point, the jump has to be taken into account, otherwise it causes large erroneous jumps in the averaged value. This can be corrected fairly simply, but another problem is gimbal lock, which in this case occurs when the pitch crosses the ±π/2 points (i.e. when looking directly up or down). This causes the roll/yaw values to jump abruptly, and as a result the IMU estimation “jumps” around the problem points. My attempt at solving this issue by resetting the estimated values whenever the IMU was near the gimbal lock position was not very effective.

After a lot of frustration, I decided that learning how to reliably estimate orientation using quaternions was the way to go, even though they may at first seem daunting. So at this point I abandoned any further experimentation with RPY calculations; however, the work was a good introduction to the basics of the A-Star board and to representing orientations using an IMU. The Arduino code is posted here for reference. Links to tutorials on using accelerometer and gyro measurements can be found in the comments.


Code:


// Hardware:
// Pololu A-Star 32U4 Mini SV
// Pololu MinIMU-9 v3 (L3GD20H and LSM303D)
// Interlink FSR 400 Short (x6)

// Important! Define this before #include <ros.h>
#define USE_USBCON

#include <Wire.h>
#include <ros.h>
#include <std_msgs/Int16MultiArray.h>
#include <geometry_msgs/Vector3.h>
#include <AStar32U4Prime.h>
#include <LSM303.h>
#include <L3G.h>
#include <math.h>

// Set up the ros node and publishers
ros::NodeHandle nh;
std_msgs::Int16MultiArray msg_fsrs;
std_msgs::MultiArrayDimension fsrsDim;
ros::Publisher pub_fsrs("fsrs", &msg_fsrs);
geometry_msgs::Vector3 msg_rpy;
ros::Publisher pub_rpy("rpy", &msg_rpy);
unsigned long pubTimer = 0;

const int numOfFSRs = 6;
const int FSRPins[] = {A0, A1, A2, A3, A4, A5};
int FSRValue = 0;
LSM303 compass;
L3G gyro;
const int yellowLEDPin = IO_C7;  // 13
int LEDBright = 0;
int LEDDim = 5;
unsigned long LEDTimer = 0;

float aX = 0.0;
float aY = 0.0;
float aZ = 0.0;
float gX = 0.0;
float gY = 0.0;
float gZ = 0.0;
float accelRoll = 0.0;
float accelPitch = 0.0;
float magnetYaw = 0.0;
float roll = 0.0;
float pitch = 0.0;
float yaw = 0.0;
float accelNorm = 0.0;
int signOfz = 1;
int prevSignOfz = 1;
float dt = 0.0;
float prevt = 0.0;
float filterCoeff = 1.0;
float mu = 0.01;
float timeConst = 1.0;

void setup()
{
    // Array for FSRs
    msg_fsrs.layout.dim = &fsrsDim;
    msg_fsrs.layout.dim[0].label = "fsrs";
    msg_fsrs.layout.dim[0].size = numOfFSRs;
    msg_fsrs.layout.dim[0].stride = 1*numOfFSRs;
    msg_fsrs.layout.dim_length = 1;
    msg_fsrs.layout.data_offset = 0;
    msg_fsrs.data_length = numOfFSRs;
    msg_fsrs.data = (int16_t *)malloc(sizeof(int16_t)*numOfFSRs);

    nh.initNode();
    nh.advertise(pub_fsrs);
    nh.advertise(pub_rpy);

    // Wait until connected
    while (!nh.connected())
        nh.spinOnce();
    nh.loginfo("ROS startup complete");

    Wire.begin();

    // Enable pullup resistors
    for (int i=0; i<numOfFSRs; ++i)
        pinMode(FSRPins[i], INPUT_PULLUP);

    if (!compass.init())
    {
        nh.logerror("Failed to autodetect compass type!");
    }
    compass.enableDefault();

    // Compass calibration values
    compass.m_min = (LSM303::vector<int16_t>){-3441, -3292, -2594};
    compass.m_max = (LSM303::vector<int16_t>){+2371, +2361, +2328};

    if (!gyro.init())
    {
        nh.logerror("Failed to autodetect gyro type!");
    }
    gyro.enableDefault();

    pubTimer = millis();
    prevt = millis();
}

void loop()
{
    if (millis() > pubTimer)
    {
        for (int i=0; i<numOfFSRs; ++i)
        {
            FSRValue = analogRead(FSRPins[i]);
            msg_fsrs.data[i] = FSRValue;
            delay(2);  // Delay to allow ADC VRef to settle
        }

        compass.read();
        gyro.read();

        if (compass.a.z < 0)
            signOfz = -1;
        else
            signOfz = 1;

        // Compass - accelerometer:
        // 16-bit, default range +-2 g, sensitivity 0.061 mg/digit
        // 1 g = 9.80665 m/s/s
        // e.g. value for z axis in m/s/s will be: compass.a.z * 0.061 / 1000.0 * 9.80665
        //      value for z axis in g will be: compass.a.z * 0.061 / 1000.0
        //
        // http://www.analog.com/media/en/technical-documentation/application-notes/AN-1057.pdf
        //accelRoll = atan2( compass.a.y, sqrt(compass.a.x*compass.a.x + compass.a.z*compass.a.z) ) ;
        //accelPitch = atan2( compass.a.x, sqrt(compass.a.y*compass.a.y + compass.a.z*compass.a.z) );
        // Angle between gravity vector and z axis:
        //float accelFi = atan2( sqrt(compass.a.x*compass.a.x + compass.a.y*compass.a.y), compass.a.z );
        //
        // Alternative way:
        // http://www.freescale.com/files/sensors/doc/app_note/AN3461.pdf
        // Angles - Intrinsic rotations (rotating coordinate system)
        // fi: roll about x
        // theta: pitch about y
        // psi: yaw about z
        //
        // "Aerospace rotation sequence"
        // yaw, then pitch, then roll (matrices multiplied from right-to-left)
        // R_xyz = R_x(fi) * R_y(theta) * R_z(psi)
        //
        // Convert values to g
        aX = (float)(compass.a.x)*0.061/1000.0;
        aY = (float)(compass.a.y)*0.061/1000.0;
        aZ = (float)(compass.a.z)*0.061/1000.0;
        //accelRoll = atan2( aY, aZ );
        // Singularity workaround for roll:
        accelRoll  = atan2( aY, signOfz*sqrt(aZ*aZ + mu*aX*aX) );  // +-pi
        accelPitch = atan2( -aX, sqrt(aY*aY + aZ*aZ) );            // +-pi/2

        // Compass - magnetometer:
        // 16-bit, default range +-2 gauss, sensitivity 0.080 mgauss/digit
        // Heading from the LSM303D library is the angular difference in
        // the horizontal plane between the x axis and North, in degrees.
        // Convert value to rads, and change range to +-pi
        magnetYaw = - ( (float)(compass.heading())*M_PI/180.0 - M_PI );

        // Gyro:
        // 16-bit, default range +-245 dps (deg/sec), sensitivity 8.75 mdps/digit
        // Convert values to rads/sec
        gX = (float)(gyro.g.x)*0.00875*M_PI/180.0;
        gY = (float)(gyro.g.y)*0.00875*M_PI/180.0 * signOfz;
        gZ = (float)(gyro.g.z)*0.00875*M_PI/180.0;

        dt = (millis() - prevt)/1000.0;

        filterCoeff = 1.0;
        // If accel. vector magnitude seems correct, e.g. between 0.5 and 2 g, use the complementary filter
        // Else, only use gyro readings
        accelNorm = sqrt(aX*aX + aY*aY + aZ*aZ);
        if ( (0.5 < accelNorm) && (accelNorm < 2) )
            filterCoeff = timeConst / (timeConst + dt);

        // Gimbal lock: If pitch crosses +-pi/2 points (z axis crosses the horizontal plane), reset filter
        // Doesn't seem to work reliably!
        if ( (fabs(pitch - M_PI/2.0) < 0.1) && (signOfz != prevSignOfz) )
        {
            roll += signOfz*M_PI;
            yaw += signOfz*M_PI;
        }
        else
        {
            // Update roll/yaw values to deal with +-pi crossover point:
            if ( (roll - accelRoll) > M_PI )
                roll -= 2*M_PI;
            else if ( (roll - accelRoll) < - M_PI )
                roll += 2*M_PI;
            if ( (yaw - magnetYaw) > M_PI )
                yaw -= 2*M_PI;
            else if ( (yaw - magnetYaw) < - M_PI )
                yaw += 2*M_PI;

            // Complementary filter
            // https://scholar.google.co.uk/scholar?cluster=2292558248310865290
            roll  = filterCoeff*(roll  + gX*dt) + (1 - filterCoeff)*accelRoll;
            pitch = filterCoeff*(pitch + gY*dt) + (1 - filterCoeff)*accelPitch;
            yaw   = filterCoeff*(yaw   + gZ*dt) + (1 - filterCoeff)*magnetYaw;
        }

        roll  = fmod(roll,  2*M_PI);
        pitch = fmod(pitch,   M_PI);
        yaw   = fmod(yaw,   2*M_PI);

        msg_rpy.x = roll;
        msg_rpy.y = pitch;
        msg_rpy.z = yaw;

        pub_fsrs.publish(&msg_fsrs);
        pub_rpy.publish(&msg_rpy);

        prevSignOfz = signOfz;

        prevt = millis();

        pubTimer = millis() + 10;  // wait at least 10 msecs between publishing
    }

    // Pulse the LED
    if (millis() > LEDTimer)
    {
        LEDBright += LEDDim;
        analogWrite(yellowLEDPin, LEDBright);
        if (LEDBright == 0 || LEDBright == 255)
            LEDDim = -LEDDim;

        // 50 msec increments, 2 sec wait after each full cycle
        if (LEDBright != 0)
            LEDTimer = millis() + 50;
        else
            LEDTimer = millis() + 2000;
    }

    nh.spinOnce();
}