Tag Archives: Bioloid

Depth sensor

I have finally made a decision on the sensor and ordered the Intel RealSense D435 Depth Camera! I also found high-quality CAD models of the whole D400 series here: Production CAD Files for Intel RealSense D400 Series.

After importing into Fusion 360 and adding a touch of texturing, I added it to the robot model to show a sense of scale. It needs a 3D printed mount to be properly attached to the top of the front body; or better yet, a re-design of the front bumper to integrate the camera inside it, giving an unobstructed forward-facing view.

The camera will be controlled by a PC/laptop for now, but will later need a beefier on-board controller.

Sensor mount

The mount will be made with some standard Bioloid parts and 2 additional AX-12 servo motors. Here is the current concept:

Balancing ideas

Some time ago I briefly tried a simple balancing function using a PID controller, which aims to balance the Bioloid with the ankle motors by trying to keep the IMU (accelerometer/gyro) vertical. The results were mixed, but it was only an early test. I am considering revisiting this balancing test, but this time using a number of PID controllers to control multiple groups of leg motors (e.g. hips, knees and ankles), while also using the GUI to make testing faster and easier.
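To jog my memory, here is a minimal sketch of the kind of PID loop I have in mind, with placeholder gains and names (the real version would run one controller per motor group):

// Minimal PID controller sketch (gains and names are placeholders)
class PID
{
public:
    PID(double kp, double ki, double kd)
        : kp_(kp), ki_(ki), kd_(kd), integral_(0.0), prevError_(0.0) {}

    double update(double error, double dt)
    {
        integral_ += error * dt;
        double derivative = (dt > 0.0) ? (error - prevError_) / dt : 0.0;
        prevError_ = error;
        return kp_ * error + ki_ * integral_ + kd_ * derivative;
    }

private:
    double kp_, ki_, kd_;
    double integral_, prevError_;
};

// Called each control cycle with the filtered IMU pitch estimate (rad);
// the returned value would offset the ankle pitch goal positions
double balanceStep(PID& pid, double imuPitch, double dt)
{
    return pid.update(0.0 - imuPitch, dt);  // setpoint: torso vertical (0 rad)
}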


On another note, I recently came across the Nengo Neural Simulator, which is a framework for building networks of leaky integrate-and-fire (LIF) neurons to create complex computational models. It has been used to create Spaun, the world’s largest functional brain model, which is able to perform a number of functions such as vision, memory and counting, as well as drawing what it sees by controlling a simple arm.

What stands out is how easy it is to use the Nengo GUI to build neural networks. The interface runs in the browser, and visualisations of neuron spiking activity and other metrics are easily shown for each graphical object. There is also support for scripting in Python. Installing it and trying it out for yourself is pretty simple; just follow the Getting Started guide here.


It would be really interesting to see if some form of PID controller could be built using Nengo, and then used to control the Bioloid’s balancing!

Motor dials updated

I have made some updates to the motor dials which control the motor positions. They can now change mode and control motor speed and load. Also, the GUI is regularly updated with some important feedback from each motor: motor voltage and temperature, LED and torque on/off state, and feedback on all the alarm states.

At this point I’m starting to think that an internal model of all the motor control table data would be useful! Rather than classes making direct requests to the motor controller to receive motor information, all the state data could be kept by the controller and updated regularly; classes would then simply get the latest data from this model when needed. This is partially how the controller works already, as it has a model of the ROS-style joint states, which holds present positions, present speeds and present torques (loads), as well as goal positions, moving speeds and torque limits. The joint states are published continuously by a ROS publisher.

Present and goal positions are the most important data, as the AX-12 by default only performs position control. Moving speed is simply the speed that the motor will use to move between positions, so it cannot be used for e.g. a velocity feedback loop. “Torque” is a bit of a misleading term here, as there is no torque sensing in the motors; torque sensors are only found in much more expensive motors than these. The load values reported by the AX-12 are related to the motor current, but cannot be read while the motor is actually moving. Two notable sources with more detailed information on this somewhat unclear measurement can be found here on the RoboSavvy forum and here.
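As a rough sketch of what I mean (all names here are hypothetical, not existing code), the controller could hold a byte-level copy of each motor’s control table, refreshed by its main read loop and queried by everything else:

#include <ros/ros.h>
#include <map>
#include <vector>
#include <stdint.h>

// Sketch of a cached control table model (names are hypothetical)
struct MotorState
{
    std::vector<uint8_t> controlTable;  // raw copy of the AX-12 control table (50 bytes)
    ros::Time lastUpdated;              // when this motor's data was last refreshed
};

class MotorModel
{
public:
    // Called by the controller's main loop after each bus read
    void update(int motorId, int address, uint8_t value)
    {
        MotorState& m = motors_[motorId];
        if (m.controlTable.size() < 50)
            m.controlTable.resize(50, 0);
        m.controlTable[address] = value;
        m.lastUpdated = ros::Time::now();
    }

    // Called by other classes instead of triggering a read on the bus
    uint8_t get(int motorId, int address) const
    {
        std::map<int, MotorState>::const_iterator it = motors_.find(motorId);
        if (it == motors_.end() || address >= (int)it->second.controlTable.size())
            return 0;
        return it->second.controlTable[address];
    }

private:
    std::map<int, MotorState> motors_;
};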


I think I’m done with updating these graphical widgets for the time being, as it is detracting from the main goals of exploring MoveIt! and getting the robot walking.

Sensor grapher

A sensor plotting window has now been added to the GUI, which shows all data from the IMU and pressure sensors. The accelerometer, magnetometer, gyroscope, heading data and Force Sensing Resistors’ (FSRs) data are all published as ROS messages as shown in this post, so reading them in the Qt GUI is fairly straightforward, in a similar way to how the joint states are being read. The graphs are made using a third party library for Qt called QCustomPlot.

Each graph shows a scrolling 10-second window of buffered data, which can be paused/played. With QCustomPlot it’s easy to enable user interactions with graphs (dragging axis ranges with the mouse, zooming with the mouse wheel, etc.), so I enabled this option whenever a graph is in the paused state.
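In QCustomPlot terms, the scrolling and pausing logic boils down to something like this sketch (assuming a class with a QCustomPlot* plot member and a paused flag; not the exact GUI code):

// Sketch: append a sample and keep a scrolling 10-second window
void SensorGraph::onNewSample(double key, double value)   // key = time in seconds
{
    plot->graph(0)->addData(key, value);
    if (!paused)
    {
        plot->xAxis->setRange(key, 10.0, Qt::AlignRight);  // show the last 10 seconds
        plot->replot();
    }
}

// Only allow dragging/zooming of the axes while the graph is paused
void SensorGraph::setPaused(bool pause)
{
    paused = pause;
    if (pause)
        plot->setInteractions(QCP::iRangeDrag | QCP::iRangeZoom);  // drag/zoom enabled
    else
        plot->setInteractions(QCP::Interactions());                // view locked while playing
}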

The y-axes currently show raw data values, which I will probably update to standard units.


A useful thing I found in Qt with QDockWidget, which is used to create dockable/floating sub-windows, is that these widgets can also be tabbed to save space on the screen. How can this be done in code, you may ask? The useful functions I found were: setDockNestingEnabled() (or setDockOptions()) and tabifyDockWidget() for QMainWindow, and raise() to select the default tabbed dock widget you want displayed.
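A minimal example of the calls involved, from within a QMainWindow subclass (the dock widget names are just illustrative):

// Tab two dock widgets on top of each other to save screen space
setDockNestingEnabled(true);                        // or setDockOptions(...)
addDockWidget(Qt::RightDockWidgetArea, sensorDock);
addDockWidget(Qt::RightDockWidgetArea, motorDock);
tabifyDockWidget(sensorDock, motorDock);            // stack them as tabs
sensorDock->raise();                                // select the tab shown by default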

That’s pretty much all there is to the sensor grapher. I might add more features to it in the future, but for now it does the basics!

List of ideas

As I have many ideas floating about on what to work on next, I made a list to try and see which ones are worth prioritising:

  • GUI updates
    • Graph for visualising sensor data
    • Improved motion editing
  • Static balancing
  • Implement MoveIt! trajectory following via GUI
  • Walking routines
  • Advanced movement: trajectories generated from MoveIt!, combined with active balancing as the robot moves

This is just an initial list, which I hope to expand on in the future.

My immediate plans right now are to continue tidying up some GUI graphical details, implement a simple sensor data plotter (I know rqt_plot does a fairly good job of this already, but I’d like to have sensor data integrated in my GUI) and get back to trajectory generation tests using MoveIt!

More software updates

With the new PC’s development tools up and running, I’ve made a number of updates to the GUI side of things, as well as the background joint controller node. Here are some of the most important from this month.


Joint controller updates

An issue I was having with the joint controller node – a main function of which is to act as a ROS wrapper class around the Dynamixel API – was that I seemed to be getting erroneous values when executing a simple one-off read from the motors. I suspected this was related to the fact that the node is also constantly reading the motors’ feedback values as fast as possible in its main loop. Even though the Dynamixel library seems to have checks for a busy comms bus, there was likely some issue with multiple threads trying to access the bus. This was easily fixed by using ros::spinOnce() in the main loop instead of using an AsyncSpinner which starts up a separate thread.
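The main loop now looks roughly like this simplified outline (the actual node does much more per iteration):

#include <ros/ros.h>

// Simplified outline: with ros::spinOnce(), service callbacks run in the same
// thread as the continuous feedback reads, so access to the bus is serialised
int main(int argc, char** argv)
{
    ros::init(argc, argv, "ax_joint_controller");
    ros::NodeHandle nh;
    // ... set up Dynamixel comms, advertise services and publishers ...

    ros::Rate rate(100);  // placeholder loop rate
    while (ros::ok())
    {
        // readAndPublishMotorFeedback();  // continuous reads on the bus
        ros::spinOnce();                   // process pending callbacks here, not in a separate thread
        rate.sleep();
    }
    return 0;
}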

There is still some problem where positional values for some of the motors seem to be getting constantly corrupted, but only when all the torques are enabled and the motors are struggling to achieve all their goal positions (making the typical whining noise that servos have). I’ve yet to narrow down whether this is a software issue or not.

Update to read/write services

The ROS read/write services have been simplified now, and two of the most useful commands are the ones that perform a sync read and write across a number of motors. As these are ROS services they can also be called from a terminal command line. For example, to receive the current position, current speed and current torque of motors 1, 3 and 5 all at once, you can run:

rosservice call /ReceiveSyncFromAX '[1, 3, 5]' 36 3

36 is the start address for the low byte of the present position, and 3 indicates the number of control table values, including the start address, to read for each motor. This is calling the sync_read function that the USB2AX offers, via a ROS service.

In a similar way you can write goal position (100), goal speed (300) and max torque (512) to motors 1, 3 and 5 all at once by running:

rosservice call /SendSyncToAX '[1, 3, 5]' 30 '[100, 300, 512, 100, 300, 512, 100, 300, 512]'

One restriction is that the sync read and write functions can only read/write consecutive control table addresses, as explained in the USB2AX link above, as well as here in the Robotis documentation.


New motor control table value editor

On the GUI side of things, I’ve made a new “motor address editor” as an improvement on the previous “motor value editor”. The new editor allows reading/writing of all control table addresses of the AX-12. You don’t have to worry about writing low/high bytes to the two-byte values, as that is handled by the editor; simply write any valid value to the parameter of interest.
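For reference, the AX-12’s two-byte values are stored little-endian (low byte at the lower address), so the split and recombine logic the editor performs amounts to:

#include <stdint.h>

uint16_t goalPosition = 512;                // e.g. centre position
uint8_t low  = goalPosition & 0xFF;         // written to address 30 (goal position L)
uint8_t high = (goalPosition >> 8) & 0xFF;  // written to address 31 (goal position H)

// ... and when reading back:
uint16_t value = (uint16_t)(low | (high << 8));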

I have kept the old editor for two useful features: it lets me easily write the same value to all motors, as well as send position/speed/torque in “standard” units (rad, rad/s and torque %) instead of raw values.


Recording and executing poses

I added a new test function which moves the robot through a sequence of queued poses that are saved in the GUI. The robot pauses between poses based on each pose’s specified delay time.

Because the function’s execution has to pause and wait for the specified dwell time, I added the code to a separate thread. Originally I did this by subclassing QThread, but then updated it to use a “worker object” which is moved to a thread using moveToThread(), as this seems to be the preferred approach in Qt.
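The pattern looks roughly like this (simplified from the actual code, following the usual Qt worker-object recipe):

#include <QObject>
#include <QThread>

// Worker that plays back the queued poses off the GUI thread
class PoseWorker : public QObject
{
    Q_OBJECT
public slots:
    void runSequence()
    {
        // For each queued pose: send the goal positions, then sleep for the
        // pose's dwell time without blocking the GUI thread, e.g.:
        // QThread::msleep(dwellTimeMs);
        emit finished();
    }
signals:
    void finished();
};

// In the GUI class, when playback is started:
void startPlayback()
{
    QThread* thread = new QThread;
    PoseWorker* worker = new PoseWorker;  // no parent, so it can be moved
    worker->moveToThread(thread);
    QObject::connect(thread, SIGNAL(started()), worker, SLOT(runSequence()));
    QObject::connect(worker, SIGNAL(finished()), thread, SLOT(quit()));
    QObject::connect(worker, SIGNAL(finished()), worker, SLOT(deleteLater()));
    QObject::connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
    thread->start();
}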

Although this is very simplistic motion editor functionality which has been done many times before by other tools, I thought it was worth adding to my GUI, as it could prove useful in the future as a complement to, or even a replacement of, the MoveIt! ideas if they don’t work out. This GUI is now essentially a one-stop shop for easily testing various ROS functionality and other ideas as I keep developing along the way!

Fitting the foot force sensors

I finally got round to installing the force sensing resistors on the underside of the Bioloid’s feet. I am using the FSR 400 Short model with male end connectors, which are small and flat and didn’t fit securely in the square female header pins that often come at the end of breadboard wires. I didn’t want to solder directly to the sensors, as I’ve damaged them that way in the past, so I followed a suggestion on the Adafruit website and ended up using these PCB terminal blocks.

The force sensing element of the FSR fits nicely in the pre-existing cut-out on the underside of the Bioloid foot plate, and the pins are fed through a hole in the plate so they can be connected on the topside. However, in the standard Bioloid humanoid configuration, the foot plates are attached at an offset from the ankle angle brackets, which means there is not enough space between the footplate and the bracket to connect the pins. To fix this, I moved the footplates so that the ankle brackets sit along their centres. There is still some free space between the footplates when the Bioloid sits in its resting position, so hopefully the feet will not collide with each other after this change.

For completeness, I also updated the CAD model and added FSR CAD models to the feet, although that is just a cosmetic addition (on a related side note, I also found a CAD model of the USB2AX, which adds to the realism)! The robot’s URDF files and CAD models have been updated on GitHub.

Here are some pictures of the results:

Backpack printed

The backpack print from Shapeways has arrived!

Overall I am very pleased with the quality of the print. The walls are only 1.22 mm thick, which makes the design somewhat delicate in the standard plastic material, so if I had to print it again I would thicken them. The dimensions perfectly match the screws, battery and back of the robot’s chest, so the backpack fits without any problems. I should have also added a few more holes around the sides, as the wiring is fairly cramped. The battery cabling is fairly awkward; maybe I should have gone with an even smaller battery than originally planned!

Along with the battery, the backpack also houses the SMPS2Dynamixel servo power adaptor and a bunch of wiring. The power adaptor has a tricky shape with servo connectors at a right angle to the DC jack, but I was able to use a smaller plug with its plastic casing removed in order to fit it inside the limited space.

Hopefully the battery is close enough to the centre of mass, and not too high, so it remains to be seen if the servos are powerful enough to keep the robot standing and moving upright. Now it is back to software, to get the robot to finally do some interesting moves!

New Bioloid backpack

Adding to the endless stream of distractions from the main programming, I decided that the current mess of wiring and components on the back of the Bioloid needed tidying up properly. So, I have made a 3D model for a backpack, and I’m waiting for it to be 3D printed at Shapeways. The dimensions are similar to those of the original case which held the CM-5. The two large holes on the sides are for slotting in the new 2200 mAh battery, as it was too big to fit within the backpack.

Playing around with Gazebo

I thought it would be interesting to see if I could get my current Bioloid URDF file into Gazebo and try out some simulations. It turned out not to be too difficult; in order to load the model in Gazebo, all I had to add to the URDF were inertial elements for each of the links.

I set up a couple of tables and beer cans in the scene, and dropped the poor Bioloid from a height!

If you already have a URDF description of your robot, Gazebo is fairly easy to set up and has great potential for any simulation-based project. However, as I am focusing more on the physical robot, I probably won’t be using it much for the time being. That, plus the fact that my laptop can barely handle the program without overheating!

Heads up!

The Raspberry Pi has now been fitted to the Bioloid using some spare brackets and sponge padding.

A GUI for the Pi’s screen has also been made using Python, Tkinter and the rospy library. This is actually my first Python program from scratch, so it’s far from perfect but it’s simple and does the job for now!

The program visualises the sensor data from the force sensors and IMU, which the Arduino is publishing on ROS topics. The slider values show the unfiltered data, the same data that is used by another ROS node to fuse the IMU readings and provide a good estimate of the torso’s 3D orientation. The roll/pitch/yaw widgets visualise a simple transformation of the accelerometer/magnetometer data, as was performed previously here. The force sensing resistor (FSR) widget is not active, as the resistors have not been wired in yet, but eventually each FSR’s value will be visualised with a coloured square box ranging from yellow (un-pressed) to red (fully pressed).

ROS interconnections

In this post I will cover the current state of my ROS setup for controlling the Bioloid’s motors and custom hardware.

The rqt_graph package provides a way of visualising the interconnections between all running ROS nodes. The following graphs show the current setup:

Here is a breakdown of the running nodes:

  • bioloid_sensors (MCU Arduino code): The MCU is running an Arduino program using the rosserial_arduino library, and publishes the IMU’s data as ROS messages over serial-to-USB. The rosserial package was useful as an easy way of sending ROS messages from the MCU to the controlling PC.
  • serial_node: This is a node included in the rosserial_python package, which reads serial data from the MCU.
  • imu_tf_broadcaster: This is a custom node which uses the IMU’s data to calculate the robot’s orientation, as detailed in this previous post.
  • dummy_odom_frame_broadcaster: This is a static_transform_publisher node from tf, which transforms from the map to the odom frame without any actual change in orientation or position. This is simply to keep in line with the ROS conventions, as explained in this previous post.
  • ax_joint_controller: This is another custom node for communicating with the AX-12 servos. This is a work-in-progress, but currently acts as a ROS wrapper around the USB2Dynamixel/USB2AX library. It publishes the servo positions (in radians) as a ROS sensor_msgs JointState message, on the ax_joint_states topic. It also runs a number of ROS services for controlling the servos from other ROS nodes. These are either higher level commands (such as for setting all motors to home position), or low level commands for directly reading from or writing to the AX-12 Control Table’s addresses.
  • joint_state_publisher: The joint_state_publisher is a tool for setting and publishing joint state values for a given URDF file. The node here reads the values from the ax_joint_states topic (by setting the source_list parameter) and publishes on the joint_states topic. I suppose this node could currently be bypassed entirely, however it does provide a nice GUI window for visualising the joint positions as sliders, and is also useful for moving the robot’s joints in RViz when ax_joint_controller is offline.
  • robot_state_publisher: The robot_state_publisher works in tandem with the joint_state_publisher and publishes the state of the robot to tf. The robot_state_publisher reads the configuration of the kinematic chain from the URDF file (set by the robot_description parameter). The set up of the URDF file has been covered in a previous post. The virtual model of the robot is then displayed in RViz, with joint positions reflecting those of the physical robot.

That is about it for the current setup. I have some work-in-progress nodes which interface with the above framework and test some of the robot’s motions, so in a following post I will discuss these and show some new videos of the physical robot in action!

Wiring up the electronics

It’s been a while since the last update! I have mostly been working on the communication between ROS and the servos, and now have a working read-write ROS servo interface, but I will post about this later.

In the meantime, I added a bar display made up of 10 green LEDs for information feedback, as well as 4 blue decorative LEDs. I then moved the breadboarded electronics to a more permanent prototype board. I was initially hoping to make a box-shaped board which could fit nicely inside the Bioloid’s hollow “groin”, with the LED bar display at the front. However, the USB cable connector protruded too much, so the LED bar will have to sit on one side of the board. I also managed to squeeze the IMU onto the underside of the board. There is still some room above the board, so I may be able to fit even more of the wiring/electronics in there.

Once the board was ready, I temporarily put all the hardware parts in roughly the right locations, to gauge how much space will be taken up and how long the wiring should be. I have removed the AX-S1 sensor, as I probably won’t have much use for it. I think the Raspberry Pi will sit much better on top of the torso rather than on the back, where the old CM-5 used to be. It’s very tempting to turn the screen into a VR face! As you can see, it will be a struggle to mount the massive 5500 mAh battery on the back, so I am planning on downsizing.

Here are some work-in-progress pictures plus a video:

Representing the robot in ROS

Building a virtual robot

In this post, I’ll be describing the first stage in getting a working representation of the as-yet-unnamed robot into the ROS world. I will also briefly explore the MoveIt! library, as this might be a useful tool for the future.

The virtual representation of the Bioloid will be built using CAD models of all the individual servos and support brackets. The robot will be loaded into RViz, where its links and joints can be manipulated. Inverse kinematics will hopefully be calculated by the MoveIt! package later on. But first, a definition is needed of how the joints and links are all connected, and how to move between their frames of reference. The individual CAD components will then be oriented correctly onto these frames.

On a side note, I had the chance to photograph the Bioloid in its current state, and also had a Raspberry Pi 2 delivered!

After some background reading, I found out that the best way to create this representation in ROS is by using a standardised XML format file called the Unified Robot Description Format (URDF).


CAD models

In the past I had made a start on drawing a CAD model of the Robotis servo, but since then Robotis has released CAD drawings of all their robot kits online here (old link).

I tidied up the .stp files using FreeCAD, by removing some placeholder parts or sub-parts which would be of no use (e.g. screws, gears and electronics on the inside of the servos and CM-5). I then converted the files into .dae (Collada) format, and imported them into Blender to add some colour textures. For some reason, saving again as .dae in Blender shrank the model dimensions by a factor of 100. I haven’t worried too much about the actual component size at the moment, as only their relative scales have to be correct for now. There is also a bug in RViz which replaces the ambient colour of Collada materials with light grey if at least one component of the specified ambient colour is 0. To fix this, I manually edited the file and replaced all 0’s with 0.1 in the “ambient” tags. Below are some of the main components, as rendered in Blender.


The Unified Robot Description Format

The URDF file itself is written in plain XML. As the definitions of the various links are written more conveniently with some basic maths, and because many bits of code will inevitably be repeated or recycled (e.g. the mirroring of the left-hand side based on the right-hand side), ROS has a macro language called xacro, which makes it much easier to create and maintain URDF files. A URDF is generated from a xacro file using:

rosrun xacro xacro.py model.xacro > model.urdf

Creating the file for the Bioloid took a fair while, as I created all the translations by eye without knowing the actual distance measurements between the various links, but rather by relying on the CAD components as each one was placed in the chain. I started with the right side of the model, first with the easier arms, then with the legs.

The validity of the URDF file can be checked with the check_urdf tool. Another great tool for visualising the final result is urdf_to_graphiz, which generates a diagram of the joint and link tree. The tree of my model is shown below. Each joint (blue circles) is positioned with respect to its parent link (black rectangles), and the following/child link shares the same origin as the joint, as shown here. The xyz and rpy labels next to the arrows show the translation and rotation required to get from the parent frame to the child frame; in other words, they represent the child’s frame with respect to the parent’s frame. You will also notice the addition of the IMU link, as well as an additional camera link, although the latter is just a placeholder for a possible future camera and won’t be used at the moment.

Graphviz diagram of URDF file


The robot in RViz

The current state of the robot is shown in the screenshots below. The robot is displayed in RViz with the help of the ROS joint_state_publisher and robot_state_publisher. It is fully articulated, and the individual joints can be moved with the help of the GUI which joint_state_publisher provides. The joint states will later be published from the joint values read by the real Bioloid servos. In addition to the kinematic model, I created bounding boxes around the components for the collision boundaries, which are shown in red. This was after I realised that without them, the MoveIt! plugin would use the full CAD geometry in its collision detection routines, which made it almost grind to a halt!


Integration with MoveIt!

I have only played around with MoveIt! briefly so far, but the results seem very promising. The library has a useful graphical setup assistant, which essentially enhances the URDF with a Semantic Robot Description Format (SRDF) file. The URDF only contains information about how the joints and links are arranged, as well as some other information such as joint limits, and the visual and collision data. The SRDF doesn’t replace the URDF, but exists separately and contains other information, such as further self-collision checking, auxiliary joints, groups of joints, links and kinematic chains, end effectors and poses. So far I haven’t found any need to edit the SRDF directly, as it can be generated and edited by the setup assistant.

The assistant generates a new ROS package with various templates for path planning and visualisation, which is done via an RViz plugin. So far I have managed to interact with the virtual Bioloid’s arms and legs, in a similar way shown in this PR2 robot tutorial. The aim will be to later on create some poses and walking gaits which I can try out on the real robot, but that is all for now!



Using the Arduino to read sensor data

Force Sensing Resistors and Inertial Measurement Unit

The first hardware parts to be tested were the inertial measurement unit (IMU) and the force sensing resistors (FSRs). The Arduino IDE was used for programming. The FSRs are straightforward analogue inputs, so I will concentrate on the IMU.

In order to be in line with the ROS coordinate frame conventions, the robot’s base frame will be in this orientation (right-hand rule):

  • x axis: forward
  • y axis: left
  • z axis: up

For now I am assuming the IMU is in the same orientation, but as it is not yet mounted to the robot, it might change later on. Again, in keeping with the ROS conventions on relationships between coordinate frames, the following transforms and their relationships have to be defined:

\text{map} \rightarrow \text{odom} \rightarrow \text{base\_link}

All these frames and their relationships can be easily managed using the ROS tf package.
I have added the imu_link to this chain, which represents the IMU’s orientation with respect to the odom frame. An additional link is also added to transform the standard base_link to the robot’s torso orientation. So the transform chain actually looks like this:
\text{map} \rightarrow \text{odom} \rightarrow \text{imu\_link} \rightarrow \text{base\_link} \rightarrow \text{torso\_link}
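For illustration, broadcasting one of these transforms with the tf package looks roughly like this sketch (in the real node, the rotation comes from the IMU orientation estimate):

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

// Sketch: broadcast the odom -> imu_link transform from the IMU's estimated
// orientation (rotation only; the IMU provides no position information)
void broadcastImuTf(const tf::Quaternion& orientation)
{
    static tf::TransformBroadcaster br;
    tf::Transform transform;
    transform.setOrigin(tf::Vector3(0.0, 0.0, 0.0));
    transform.setRotation(orientation);
    br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                          "odom", "imu_link"));
}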


Calculating roll/pitch/yaw

I initially decided to let the A-Star (Arduino-compatible) board perform the IMU calculations and pass the roll/pitch/yaw (RPY) values as a ROS message to the PC. A ROS node will read these values and broadcast the transform from odom to imu_link using tf.

The IMU I am using, the MinIMU-9 v3 from Pololu, consists of an LSM303D accelerometer-magnetometer chip and an L3GD20H gyro chip in a single package. The accelerometer, magnetometer and gyro are all three-axis types, providing 9 degrees of freedom and allowing for a full representation of orientation in space. The data can be easily read using the provided Arduino libraries.

Rotations in Cartesian space will be represented in terms of intrinsic Euler angles, around the three orthogonal axes x, y and z of the IMU. As the order in which rotations are applied is important, I decided to use the standard “aerospace rotation sequence”, which has an easy mnemonic: An aeroplane yaws along the runway, then pitches during takeoff, and finally rolls to make a turn once it is airborne. The rotation sequence is thus:

  • Rotation about z: yaw (±180°)
  • Rotation about y: pitch (±90°)
  • Rotation about x: roll (±180°)
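In matrix form, with the rotations applied from right to left, this sequence is:

R_{xyz} = R_x(\phi) \, R_y(\theta) \, R_z(\psi)

where \psi is yaw, \theta is pitch and \phi is roll, matching the order above.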

Pitch has to be restricted to ±90° in order to represent any rotation with a single set of RPY values. The accelerometer alone can extract roll and pitch estimates, but not yaw, as the gravity vector doesn’t change when the IMU is rotated about the z axis. This is where the magnetometer (compass) comes into use.
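For reference, the accelerometer-only estimates (the same formulas used in the code at the end of this post, before the singularity workaround) are:

Roll_{accel} = \text{atan2}(a_y, a_z)
Pitch_{accel} = \text{atan2}(-a_x, \sqrt{a_y^2 + a_z^2})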

Hence, using these accelerometer and magnetometer values, the IMU’s orientation can be fully described in space. However, these measurements are very noisy and should only be used as long-term estimates of orientation. Also, any accelerations due to movement, and not just gravity, add disturbances to the estimate. To compensate for this, the gyro readings can be used as a short-term and more accurate representation of orientation, as the gyro readings are more precise and not affected by external forces.


The complementary filter

Gyro measurements alone suffer from drift, so they have to be combined with the accelerometer/magnetometer readings to get a clean signal. A fairly simple way to do this is using what is known as a complementary filter. It can be implemented in code fairly simply, in the following way:

Roll_{curr} = C_f \times (Roll_{prev} + X_{gyro} \times dt) + (1 - C_f) \times Roll_{accel}
Pitch_{curr} = C_f \times (Pitch_{prev} + Y_{gyro} \times dt) + (1 - C_f) \times Pitch_{accel}
Yaw_{curr} = C_f \times (Yaw_{prev} + Z_{gyro} \times dt) + (1 - C_f) \times Yaw_{magnet}

On every loop, the previous RPY values are updated with the new IMU readings. The complementary filter is essentially a high-pass filter on the gyro readings, and a low-pass filter on the accelerometer/magnetometer readings. C_f is a filtering coefficient, which determines the time boundary between trusting the gyroscope vs trusting the accelerometer/magnetometer, defined as:
C_f = \frac{ t_{const} }{ (t_{const} + dt) }
dt is the loop time, and t_{const} is a time constant which determines the length of recent history for which the gyro data takes precedence over the accelerometer/magnetometer data when calculating the new values. I have set this to 1 second, so the gyro dominates on timescales shorter than a second, while the accelerometer/magnetometer data corrects any gyro drift over longer timescales. A good source of information on the complementary filter can be found here.
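As a quick sanity check of the numbers: the main loop publishes roughly every 10 ms, so with t_{const} set to 1 second:

C_f = \frac{1.0}{1.0 + 0.01} \approx 0.99

That is, each iteration keeps about 99% of the gyro-integrated estimate and nudges it about 1% towards the accelerometer/magnetometer reading.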


Issues with filtering roll/pitch/yaw values

However, when using RPY calculations, applying this filter is not so straightforward. Firstly, when the roll/yaw values cross over the ±180° point, the jump has to be taken into account, otherwise it causes large erroneous jumps in the averaged value. This can be corrected fairly simply; however, another problem is gimbal lock, which in this case is caused when the pitch crosses the ±90° points (looking directly up or down). This causes the roll/yaw values to jump abruptly, and as a result the IMU estimation “jumps” around the problem points. My attempt at solving this issue by resetting the estimated values whenever the IMU was near the gimbal lock position was not very effective.

After a lot of frustration, I decided that learning how to reliably estimate orientation using quaternions was the way to go, even though they may at first seem daunting. So at this point I abandoned any further experimentation with RPY calculations; however, the work was a good introduction to the basics of the A-Star board and representing orientations using an IMU. The Arduino code is posted here for reference. Links to tutorials on using accelerometer and gyro measurements can be found in the comments.


Code:


// Hardware:
// Pololu A-Star 32U4 Mini SV
// Pololu MinIMU-9 v3 (L3GD20H and LSM303D)
// Interlink FSR 400 Short (x6)

// Important! Define this before #include <ros.h>
#define USE_USBCON

#include <Wire.h>
#include <ros.h>
#include <std_msgs/Int16MultiArray.h>
#include <geometry_msgs/Vector3.h>
#include <AStar32U4Prime.h>
#include <LSM303.h>
#include <L3G.h>
#include <math.h>

// Set up the ros node and publishers
ros::NodeHandle nh;
std_msgs::Int16MultiArray msg_fsrs;
std_msgs::MultiArrayDimension fsrsDim;
ros::Publisher pub_fsrs("fsrs", &msg_fsrs);
geometry_msgs::Vector3 msg_rpy;
ros::Publisher pub_rpy("rpy", &msg_rpy);
unsigned long pubTimer = 0;

const int numOfFSRs = 6;
const int FSRPins[] = {A0, A1, A2, A3, A4, A5};
int FSRValue = 0;
LSM303 compass;
L3G gyro;
const int yellowLEDPin = IO_C7;  // 13
int LEDBright = 0;
int LEDDim = 5;
unsigned long LEDTimer = 0;

float aX = 0.0;
float aY = 0.0;
float aZ = 0.0;
float gX = 0.0;
float gY = 0.0;
float gZ = 0.0;
float accelRoll = 0.0;
float accelPitch = 0.0;
float magnetYaw = 0.0;
float roll = 0.0;
float pitch = 0.0;
float yaw = 0.0;
float accelNorm = 0.0;
int signOfz = 1;
int prevSignOfz = 1;
float dt = 0.0;
float prevt = 0.0;
float filterCoeff = 1.0;
float mu = 0.01;
float timeConst = 1.0;

void setup()
{
    // Array for FSRs
    msg_fsrs.layout.dim = &fsrsDim;
    msg_fsrs.layout.dim[0].label = "fsrs";
    msg_fsrs.layout.dim[0].size = numOfFSRs;
    msg_fsrs.layout.dim[0].stride = 1*numOfFSRs;
    msg_fsrs.layout.dim_length = 1;
    msg_fsrs.layout.data_offset = 0;
    msg_fsrs.data_length = numOfFSRs;
    msg_fsrs.data = (int16_t *)malloc(sizeof(int16_t)*numOfFSRs);

    nh.initNode();
    nh.advertise(pub_fsrs);
    nh.advertise(pub_rpy);

    // Wait until connected
    while (!nh.connected())
        nh.spinOnce();
    nh.loginfo("ROS startup complete");

    Wire.begin();

    // Enable pullup resistors
    for (int i=0; i<numOfFSRs; ++i)
        pinMode(FSRPins[i], INPUT_PULLUP);

    if (!compass.init())
    {
        nh.logerror("Failed to autodetect compass type!");
    }
    compass.enableDefault();

    // Compass calibration values
    compass.m_min = (LSM303::vector<int16_t>){-3441, -3292, -2594};
    compass.m_max = (LSM303::vector<int16_t>){+2371, +2361, +2328};

    if (!gyro.init())
    {
        nh.logerror("Failed to autodetect gyro type!");
    }
    gyro.enableDefault();

    pubTimer = millis();
    prevt = millis();
}

void loop()
{
    if (millis() > pubTimer)
    {
        for (int i=0; i<numOfFSRs; ++i)
        {
            FSRValue = analogRead(FSRPins[i]);
            msg_fsrs.data[i] = FSRValue;
            delay(2);  // Delay to allow ADC VRef to settle
        }

        compass.read();
        gyro.read();

        if (compass.a.z < 0)
            signOfz = -1;
        else
            signOfz = 1;

        // Compass - accelerometer:
        // 16-bit, default range +-2 g, sensitivity 0.061 mg/digit
        // 1 g = 9.80665 m/s/s
        // e.g. value for z axis in m/s/s will be: compass.a.z * 0.061 / 1000.0 * 9.80665
        //      value for z axis in g will be: compass.a.z * 0.061 / 1000.0
        //
        // http://www.analog.com/media/en/technical-documentation/application-notes/AN-1057.pdf
        //accelRoll = atan2( compass.a.y, sqrt(compass.a.x*compass.a.x + compass.a.z*compass.a.z) ) ;
        //accelPitch = atan2( compass.a.x, sqrt(compass.a.y*compass.a.y + compass.a.z*compass.a.z) );
        // Angle between gravity vector and z axis:
        //float accelFi = atan2( sqrt(compass.a.x*compass.a.x + compass.a.y*compass.a.y), compass.a.z );
        //
        // Alternative way:
        // http://www.freescale.com/files/sensors/doc/app_note/AN3461.pdf
        // Angles - Intrinsic rotations (rotating coordinate system)
        // fi: roll about x
        // theta: pitch about y
        // psi: yaw about z
        //
        // "Aerospace rotation sequence"
        // yaw, then pitch, then roll (matrices multiplied from right-to-left)
        // R_xyz = R_x(fi) * R_y(theta) * R_z(psi)
        //
        // Convert values to g
        aX = (float)(compass.a.x)*0.061/1000.0;
        aY = (float)(compass.a.y)*0.061/1000.0;
        aZ = (float)(compass.a.z)*0.061/1000.0;
        //accelRoll = atan2( aY, aZ );
        // Singularity workaround for roll:
        accelRoll  = atan2( aY, signOfz*sqrt(aZ*aZ + mu*aX*aX) );  // +-pi
        accelPitch = atan2( -aX, sqrt(aY*aY + aZ*aZ) );            // +-pi/2

        // Compass - magnetometer:
        // 16-bit, default range +-2 gauss, sensitivity 0.080 mgauss/digit
        // Heading from the LSM303D library is the angular difference in
        // the horizontal plane between the x axis and North, in degrees.
        // Convert value to rads, and change range to +-pi
        magnetYaw = - ( (float)(compass.heading())*M_PI/180.0 - M_PI );

        // Gyro:
        // 16-bit, default range +-245 dps (deg/sec), sensitivity 8.75 mdps/digit
        // Convert values to rads/sec
        gX = (float)(gyro.g.x)*0.00875*M_PI/180.0;
        gY = (float)(gyro.g.y)*0.00875*M_PI/180.0 * signOfz;
        gZ = (float)(gyro.g.z)*0.00875*M_PI/180.0;

        dt = (millis() - prevt)/1000.0;

        filterCoeff = 1.0;
        // If accel. vector magnitude seems correct, e.g. between 0.5 and 2 g, use the complementary filter
        // Else, only use gyro readings
        accelNorm = sqrt(aX*aX + aY*aY + aZ*aZ);
        if ( (0.5 < accelNorm) && (accelNorm < 2) )
            filterCoeff = timeConst / (timeConst + dt);

        // Gimbal lock: If pitch crosses +-pi/2 points (z axis crosses the horizontal plane), reset filter
        // Doesn't seem to work reliably!
        if ( (fabs(fabs(pitch) - M_PI/2.0) < 0.1) && (signOfz != prevSignOfz) )
        {
            roll += signOfz*M_PI;
            yaw += signOfz*M_PI;
        }
        else
        {
            // Update roll/yaw values to deal with +-pi crossover point:
            if ( (roll - accelRoll) > M_PI )
                roll -= 2*M_PI;
            else if ( (roll - accelRoll) < - M_PI )
                roll += 2*M_PI;
            if ( (yaw - magnetYaw) > M_PI )
                yaw -= 2*M_PI;
            else if ( (yaw - magnetYaw) < - M_PI )
                yaw += 2*M_PI;

            // Complementary filter
            // https://scholar.google.co.uk/scholar?cluster=2292558248310865290
            roll  = filterCoeff*(roll  + gX*dt) + (1 - filterCoeff)*accelRoll;
            pitch = filterCoeff*(pitch + gY*dt) + (1 - filterCoeff)*accelPitch;
            yaw   = filterCoeff*(yaw   + gZ*dt) + (1 - filterCoeff)*magnetYaw;
        }

        roll  = fmod(roll,  2*M_PI);
        pitch = fmod(pitch,   M_PI);
        yaw   = fmod(yaw,   2*M_PI);

        msg_rpy.x = roll;
        msg_rpy.y = pitch;
        msg_rpy.z = yaw;

        pub_fsrs.publish(&msg_fsrs);
        pub_rpy.publish(&msg_rpy);

        prevSignOfz = signOfz;

        prevt = millis();

        pubTimer = millis() + 10;  // wait at least 10 msecs between publishing
    }

    // Pulse the LED
    if (millis() > LEDTimer)
    {
        LEDBright += LEDDim;
        analogWrite(yellowLEDPin, LEDBright);
        if (LEDBright == 0 || LEDBright == 255)
            LEDDim = -LEDDim;

        // 50 msec increments, 2 sec wait after each full cycle
        if (LEDBright != 0)
            LEDTimer = millis() + 50;
        else
            LEDTimer = millis() + 2000;
    }

    nh.spinOnce();
}


Robot Revival

The Bioloid story

Many moons ago, I purchased my first humanoid robot, an 18-servo Bioloid Comprehensive Kit. At the time, humanoid robotics for hobbyists was in its early stages, and I chose the Bioloid after much deliberation and comparison with its then main competitors, the Hitec Robonova and Kondo KHR-2HV. I went for the Bioloid mainly because of the generous parts list, which doesn’t limit the design to just a humanoid robot, as well as the powerful AX-12+ Dynamixel servos. These have a number of advantages over more traditional hobby servos, such as multiple feedback options (position, temperature, load, input voltage), high torque, upgradeable firmware, internal PID control, a continuous rotation option, a comms bus that enables the servos to be daisy-chained … and the list goes on!

After building some of the standard variants, trying out the demos, attempting a custom walker, and playing around with Embedded C on its CM-5 controller board, I never got around to doing the kit any real justice. I have finally decided to explore the potential of this impressive robot, and make all that money worthwhile!

This post begins one of hopefully many, in which I will detail my progress with the Bioloid robot.


Initial hardware ideas

The general plan for hardware is to extend the base platform with various components, avoiding the need for custom electronic boards as far as possible, as I want my main focus to be on software.
The Bioloid’s servos are powered and controlled by the CM-5 controller, which has an AVR ATMega128 at its core. I have played around with downloading custom Embedded C to the CM-5, but in terms of what I have in mind, it is much more convenient to be able to control the servos directly from a PC. The standard solution is the USB2Dynamixel; however, much of this chunky adaptor is taken up by an unnecessary serial port, so I went for a functionally identical alternative, the USB2AX by a company called Xevelabs. The PC/laptop control will hopefully be replaced by a Raspberry Pi 2 Model B (on back order!) for a more mobile solution. I have not thought about mobile power yet!

Despite the impressive servos, the stock Bioloid offers little in terms of sensors. The provided AX-S1 sensor module has IR sensors/receiver, a microphone and a buzzer, all built into a single package which resembles one of the servos and acts as the Bioloid’s head. Although updated controllers from Robotis have emerged over the years, the original CM-5 had no way of directly integrating sensors, and was limited to the AX-S1.

Since a bunch of servos without any real-world feedback does not really make a robot, I am going to add a number of sensors to the base robot. The current plan is to use a MinIMU-9 v3 for tilt/orientation sensing, and a number of Interlink FSR 400 Short force sensing resistors on the feet. Very conveniently, the undersides of the Bioloid’s feet have indents in their four corners which perfectly match the shape of the FSRs! A Pololu A-Star 32U4 Mini SV (essentially an Arduino board) will perform the data collection and pass it over to the PC via serial-to-USB.

That is as far as my current considerations go in terms of hardware. At some point I will look into vision, which may be as simple as a normal webcam. I originally thought that an Xbox Kinect would be ruled out because of size, but apparently it can be done!


Initial software ideas

I plan on using ROS (Robot Operating System) as the main development platform, with code in C++. I have played around with ROS in the past, and its popularity across a large number of robotics projects, along with its wide range of libraries, makes it a very appealing development platform. Also, I recently discovered the MoveIt! package, which would be great to try out for the Bioloid’s walking gait.

For simplicity, the A-Star will be programmed using the Arduino IDE. I was pleasantly surprised that I wouldn’t have to write any serial comms code to get the sensor data into the ROS environment, as a serial library for the Arduino already exists. ROS is already looking like a good choice!

The A-Star will initially just serve ROS messages to the PC. It may potentially perform other functions if it has the processing power to spare, but for now there is no need. A ROS service running on the PC will be needed to interface with the Dynamixel servos, instructing the servos to move, reading their feedback and publishing the robot’s joint states to various other ROS nodes.

My next post will focus on the new sensor hardware. Until then, please let me know your thoughts and suggestions, all feedback is welcome!