
Chip-E Remix Challenge entry

Work on the Quadbot has taken a backseat as I have been working on an entry for the Chip-E Remix Challenge on Thingiverse.

My main motivation was the 18 or so servos on the hexapod prize, which would be exactly what I need to continue with the build of the Quadbot! All the robots are interesting though, so any prize would be a win!

My entry to the challenge is … Chip-E-lata! Hope you enjoy.

 

5DOF leg test rig build and gamepad control

The hardware for the test rig has arrived, and the first leg has been set up!

The metal brackets are from Trossen Robotics and largely match the dimensions of their equivalent plastic parts from Robotis. Some parts do not have metal counterparts, so I used spare plastic parts from my Bioloid kit. I also ordered an ArbotiX-M controller (Arduino-based), which communicates with the PC via a spare SparkFun FT232RL I had. The test rig frame is made out of MakerBeam XL aluminium profiles.
Unfortunately, I thought I had more 3-pin cables than I actually did, so I can’t control all the motors just yet. However, I’ve got an Xbox One controller controlling the IK’s foot target position and a simple serial program on the ArbotiX-M which receives the resulting position commands for each motor.

 


The code for both the Python applet and the Arduino can be found on GitHub here.


Some of the more useful snippets of code are shown below:

The following code handles gamepad inputs and converts them to a natural-feeling movement, based on equations of motion. The gamepad input gets converted into a virtual force, which is opposed by a drag proportional to the current velocity. The angles resulting from the IK are sent to the controller via serial port:


import math
import threading

from inputs import DeviceManager, get_gamepad  # gamepad input library


class GamepadHandler(threading.Thread):
    def __init__(self, master):
        self.master = master
        # Threading vars
        threading.Thread.__init__(self)
        self.daemon = True  # OK for main to exit even if instance is still running
        self.paused = False
        self.triggerPolling = True
        self.cond = threading.Condition()
        # Input vars
        devices = DeviceManager()
        self.target = targetHome[:]
        self.speed = [0, 0, 0]
        self.inputLJSX = 0
        self.inputLJSY = 0
        self.inputRJSX = 0
        self.inputRJSY = 0
        self.inputLJSXNormed = 0
        self.inputLJSYNormed = 0
        self.inputRJSXNormed = 0
        self.inputRJSYNormed = 0
        self.dt = 0.005

    def run(self):
        while 1:
            with self.cond:
                if self.paused:
                    self.cond.wait()  # Block until notified
                    self.triggerPolling = True
                else:
                    if self.triggerPolling:
                        self.pollInputs()
                        self.pollIK()
                        self.pollSerial()
                        self.triggerPolling = False
            # Get joystick input
            try:
                events = get_gamepad()
                for event in events:
                    self.processEvent(event)
            except:
                pass

    def pause(self):
        with self.cond:
            self.paused = True

    def resume(self):
        with self.cond:
            self.paused = False
            self.cond.notify()  # Unblock self if waiting

    def processEvent(self, event):
        #print(event.ev_type, event.code, event.state)
        if event.code == 'ABS_X':
            self.inputLJSX = event.state
        elif event.code == 'ABS_Y':
            self.inputLJSY = event.state
        elif event.code == 'ABS_RX':
            self.inputRJSX = event.state
        elif event.code == 'ABS_RY':
            self.inputRJSY = event.state

    def pollInputs(self):
        # World X
        self.inputLJSYNormed = self.filterInput(-self.inputLJSY)
        self.target[0], self.speed[0] = self.updateMotion(self.inputLJSYNormed, self.target[0], self.speed[0])
        # World Y
        self.inputLJSXNormed = self.filterInput(-self.inputLJSX)
        self.target[1], self.speed[1] = self.updateMotion(self.inputLJSXNormed, self.target[1], self.speed[1])
        # World Z
        self.inputRJSYNormed = self.filterInput(-self.inputRJSY)
        self.target[2], self.speed[2] = self.updateMotion(self.inputRJSYNormed, self.target[2], self.speed[2])
        with self.cond:
            if not self.paused:
                self.master.after(int(self.dt*1000), self.pollInputs)

    def pollIK(self):
        global target
        target = self.target[:]
        runIK(target)
        with self.cond:
            if not self.paused:
                self.master.after(int(self.dt*1000), self.pollIK)

    def pollSerial(self):
        if 'ser' in globals():
            global ser
            global angles
            writeStr = ""
            for i in range(len(angles)):
                x = int( rescale(angles[i], -180.0, 180.0, 0, 1023) )
                writeStr += str(i+1) + "," + str(x)
                if i < (len(angles) - 1):
                    writeStr += ","
                else:
                    writeStr += "\n"
            ser.write(writeStr)
        with self.cond:
            if not self.paused:
                self.master.after(int(self.dt*1000), self.pollSerial)

    def filterInput(self, i):
        # Dead zone on the raw signed 16-bit joystick axis values
        if (i > 3277) or (i < -3277):
            if i > 3277:
                oldMax = 32767
            elif i < -3277:
                oldMax = 32768
            inputNormed = math.copysign(1.0, abs(i)) * rescale(i, 0, oldMax, 0, 1.0)
        else:
            inputNormed = 0
        return inputNormed

    def updateMotion(self, i, target, speed):
        c1 = 10000.0
        c2 = 10.0
        mu = 1.0
        m = 1.0
        u0 = speed
        F = c1*i - c2*u0  # Force minus linear drag
        a = F/m
        t = self.dt
        x0 = target
        # Equations of motion
        u = u0 + a*t
        x = x0 + u0*t + 0.5*a*pow(t, 2)
        # Update self
        target = x
        speed = u
        return target, speed

def rescale(old, oldMin, oldMax, newMin, newMax):
    oldRange = (oldMax - oldMin)
    newRange = (newMax - newMin)
    return (old - oldMin) * newRange / oldRange + newMin


The following Python code is the leg’s Forward Kinematics (click to expand):


def runFK(angles):
    global a
    global footOffset
    global T_1_in_W
    global T_2_in_W
    global T_3_in_W
    global T_4_in_W
    global T_5_in_W
    global T_F_in_W

    a = [0, 0, 29.05, 76.919, 72.96, 45.032]  # Link lengths "a-1"

    footOffset = 33.596

    s = [0, 0, 0, 0, 0, 0]
    c = [0, 0, 0, 0, 0, 0]
    for i in range(1,6):
        s[i] = math.sin( math.radians(angles[i-1]) )
        c[i] = math.cos( math.radians(angles[i-1]) )

    # +90 around Y
    T_0_in_W = np.matrix( [ [  0,  0,  1,  0],
                            [  0,  1,  0,  0],
                            [ -1,  0,  0,  0],
                            [  0,  0,  0,  1] ] )

    T_1_in_0 = np.matrix( [ [ c[1], -s[1],  0, a[1]],
                            [ s[1],  c[1],  0,    0],
                            [    0,     0,  1,    0],
                            [    0,     0,  0,    1] ] )

    T_2_in_1 = np.matrix( [ [ c[2], -s[2],  0, a[2]],
                            [    0,     0, -1,    0],
                            [ s[2],  c[2],  0,    0],
                            [    0,     0,  0,    1] ] )

    T_3_in_2 = np.matrix( [ [ c[3], -s[3],  0, a[3]],
                            [ s[3],  c[3],  0,    0],
                            [    0,     0,  1,    0],
                            [    0,     0,  0,    1] ] )

    T_4_in_3 = np.matrix( [ [ c[4], -s[4],  0, a[4]],
                            [ s[4],  c[4],  0,    0],
                            [    0,     0,  1,    0],
                            [    0,     0,  0,    1] ] )

    T_5_in_4 = np.matrix( [ [ c[5], -s[5],  0, a[5]],
                            [    0,     0,  1,    0],
                            [-s[5], -c[5],  0,    0],
                            [    0,     0,  0,    1] ] )

    T_F_in_5 = np.matrix( [ [  1,  0,  0,  footOffset],
                            [  0,  1,  0,  0],
                            [  0,  0,  1,  0],
                            [  0,  0,  0,  1] ] )

    T_1_in_W = T_0_in_W * T_1_in_0
    T_2_in_W = T_1_in_W * T_2_in_1
    T_3_in_W = T_2_in_W * T_3_in_2
    T_4_in_W = T_3_in_W * T_4_in_3
    T_5_in_W = T_4_in_W * T_5_in_4
    T_F_in_W = T_5_in_W * T_F_in_5


The following Python code is the leg’s Inverse Kinematics (click to expand):


def runIK(target):
    # Solve Joint 1
    num = target[1]
    den = abs(target[2]) - footOffset
    a0Rads = math.atan2(num, den)
    angles[0] = math.degrees(a0Rads)

    # Lengths projected onto z-plane
    c0 = math.cos(a0Rads)
    a2p = a[2]*c0
    a3p = a[3]*c0
    a4p = a[4]*c0
    a5p = a[5]*c0

    j4Height = abs(target[2]) - a2p - a5p - footOffset

    j2j4DistSquared = math.pow(j4Height, 2) + math.pow(target[0], 2)
    j2j4Dist = math.sqrt(j2j4DistSquared)

    # Solve Joint 2
    num = target[0]
    den = j4Height
    psi = math.degrees( math.atan2(num, den) )
    num = pow(a3p, 2) + j2j4DistSquared - pow(a4p, 2)
    den = 2.0*a3p*j2j4Dist
    if abs(num) <= abs(den):
        phi = math.degrees( math.acos(num/den) )
        angles[1] = - (phi - psi)

    # Solve Joint 3
    num = pow(a3p, 2) + pow(a4p, 2) - j2j4DistSquared
    den = 2.0*a3p*a4p
    if abs(num) <= abs(den):
        angles[2] = 180.0 - math.degrees( math.acos(num/den) )

    # Solve Joint 4
    num = pow(a4p, 2) + j2j4DistSquared - pow(a3p, 2)
    den = 2.0*a4p*j2j4Dist
    if abs(num) <= abs(den):
        omega = math.degrees( math.acos(num/den) )
        angles[3] = - (psi + omega)

    # Solve Joint 5
    angles[4] = - angles[0]

    runFK(angles)


The following Arduino code is the serial read code for the Arbotix-M (click to expand):


#include <ax12.h>  // ArbotiX library for the AX-12 servos (provides SetPosition)

int numOfJoints = 5;

void setup()
{
  Serial.begin(38400);
}

void loop()
{
}

void serialEvent() {
  int id[numOfJoints];
  int pos[numOfJoints];
  while (Serial.available())
  {
    for (int i = 0; i < numOfJoints; ++i)
    {
      id[i] = Serial.parseInt();
      pos[i] = Serial.parseInt();
    }
    if (Serial.read() == '\n')
    {
      for (int i = 0; i < numOfJoints; ++i)
      {
        if ( (id[i] > 0) && (0 <= pos[i]) && (pos[i] <= 1023) )
          SetPosition(id[i], pos[i]);
      }
    }
  }
}

Quadbot Inverse Kinematics

Following from the previous post, the Inverse Kinematics (IK) have been calculated, just in time for the test rig hardware, which is arriving this week.

I found a geometrical solution to the IK by breaking the 3D problem down into two 2D problems:

Looking at the leg from the front/back, only joints 1 & 5 determine the offset along the world Y axis. The two joints mirror each other (joint 5 takes the negative of joint 1's angle), so the foot always stays facing perpendicular to the ground.

Looking at the leg from the side, the remaining joints 2, 3 & 4 are calculated, mostly using acos/atan2 functions. Note that the leg link lengths first need to be projected onto the z-plane, which is why the a2p, a3p, etc. notation is used.
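
For reference, the key relations that runIK() above computes for joints 1, 5 and 3 can be written out as follows (this is just a restatement of the code: x_t, y_t, z_t are the target foot position, d_f is the footOffset and r is the joint 2 to joint 4 distance):

\theta_1 = \operatorname{atan2}\big(y_t,\ |z_t| - d_f\big), \qquad \theta_5 = -\theta_1

a'_k = a_k \cos\theta_1, \qquad r^2 = \big(|z_t| - a'_2 - a'_5 - d_f\big)^2 + x_t^2

\theta_3 = 180^\circ - \arccos\!\left(\frac{a_3'^2 + a_4'^2 - r^2}{2\, a_3' a_4'}\right)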

DrawingForwards

Trigonometry based on front view.

DrawingSide

Trigonometry based on side view.

The test program now has controls for positioning a target for the foot to follow, based on the IK. This was very helpful for testing and debugging. A test function was also added which simply moves the target in an elliptical motion for the foot to follow. It is a very simple way of making the leg walk, but could be used as the foundation for a simple four-legged walking gait.
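
The test function itself is in the GitHub code, but as a rough illustration of the idea (the centre point, radii and period below are arbitrary placeholder values, not the ones actually used), generating such an elliptical target could look like this:

import math

def ellipticalTarget(t, centre=(0.0, 0.0, -180.0), rx=40.0, rz=20.0, period=2.0):
    # Move the foot target around an ellipse in the world X-Z plane,
    # centred on a nominal standing position of the foot.
    phase = 2.0 * math.pi * (t % period) / period
    x = centre[0] + rx * math.cos(phase)
    y = centre[1]
    z = centre[2] + rz * math.sin(phase)
    return [x, y, z]

# e.g. call runIK(ellipticalTarget(t)) at a fixed rate, with t increasing in small steps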

Quadbot 17 Kinematics_003

Kinematics test program updated with target visual and sliders.

testIKEllipse

Test of IK using elliptical motion target which the foot (F) follows.

The next tasks are to build the physical test rig, learn how to use the Arbotix controller and port the IK code to Arduino.


The kinematics code is written in Python/Tkinter and can be found on GitHub.

The geometrical drawings were made using the browser-based GeoGebra app.

Quadbot Forward Kinematics

The Forward Kinematics for the left leg of the Quadbot have been formalised, using modified Denavit-Hartenberg parameters and axes conventions.

I also made a simple Python applet to verify the maths and visualise the leg’s poses. I used Tkinter and three Canvas widgets to show orthogonal views.

The reason I am testing the maths in a quick Python program is that I want to be able to port them easily over to Arduino, as my latest aim is to drop the Raspberry Pi and A-Star 32U4 LV Pi expansion module (shown in some of the latest CAD models) in favour of trying out an ArbotiX controller. A benefit with the latter is that I wouldn’t need a Dynamixel-to-USB converter (e.g. USB2AX) or separate motor power supply.

Next up will be to work out the Inverse Kinematics.

j    Link Twist      Link Length    Link Offset    Joint Angle
     alpha_i-1       a_i-1          d_i            theta_i
1    0               0              0              th_1
2    pi/2            29.05          0              th_2 - 34
3    0               76.919         0              th_3 + 67.5
4    0               72.96          0              th_4
5    -pi/2           45.032         0              th_5

D-H Parameters
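
Each of the hard-coded matrices in the FK code is one instance of the general modified D-H link transform built from a row of this table. A small helper along these lines (not part of the original applet) shows the pattern:

import math
import numpy as np

def dhTransform(alpha, a, d, theta):
    # Modified Denavit-Hartenberg link transform T_i_in_(i-1):
    # alpha and theta in radians, a and d in mm.
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return np.matrix([[ct,      -st,       0,    a     ],
                      [st * ca,  ct * ca, -sa,  -sa * d],
                      [st * sa,  ct * sa,  ca,   ca * d],
                      [0,        0,        0,    1     ]])

# e.g. row 2 of the table: T_2_in_1 = dhTransform(math.pi/2, 29.05, 0, math.radians(th_2 - 34))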

Quadbot 17 Kinematics_001

Quadbot kinematics applet, zeroed position

Quadbot 17 Kinematics_002

Quadbot kinematics applet, test position using sliders

Quadbot updates and sensor options

Updates

A few bits and pieces have been added to the model, along with some updates: longer lower leg and cover, battery and battery compartment in the rear body, main electronics boards, foot base and plate, and two ideas for sensors.

The lower legs were extended as initially the “knee” and “ankle” joints were too close. I think the new arrangement gives the leg better overall proportions.

As the battery pack has significant size and weight, it's best to include it in the design as early as possible. I had originally neglected it, but have now added a Turnigy 2200 mAh battery and updated the rear bumper and bracket to accommodate it. Heat dissipation may be an issue, but I'll leave it like this for now.

I have also measured the placement of the actuators in order to start thinking about the maths for the kinematics.

Some images of current progress:


Sensors

I have tried two ideas for area scanners which could be the main “eyes” of the robot. One is the Kinect v2, and the other a Scanse Sweep.

The main advantages of the Sweep are that it is designed specifically for robotics, with a long range and operation at varying light levels. On its own it only scans in a plane by spinning 360°; however, it can be turned into a spherical scanner with little effort. Added bonuses are a spherical-scan mounting bracket designed specifically for Dynamixel servos, as well as ROS drivers! It is currently available only on pre-order from SparkFun.

The Kinect has good resolution and is focused on tracking human subjects, being able to track 6 complete skeletons and 25 joints per person. However, it only works optimally indoors and at short ranges directly in front of it. On the other hand, it is significantly cheaper than the Sweep.

Below is a table comparing the important specs:

                     XBOX Kinect v2                  Scanse Sweep
Technology           Time-of-Flight                  LiDAR
Dimensions (mm)      249 x 66 x 67                   65 x 65 x 52.8
Weight (kg)          1.4                             0.12
Minimum Range (m)    0.5                             ~0.1
Maximum Range (m)    4.5                             40
Sample rate (Hz)     30                              1000
Rotation rate (Hz)   N/A                             1-10
FOV (°)              70 x 60                         360 (planar)
Resolution           512 x 424, ~7 x 7 pixels per °  1 cm
Price (£)            280                             80 (32 for adaptor)



WordPress tip: One thing I really like about WordPress.com is that there are always ways around doing things that initially only seem possible with WordPress.org. Need to add a table into your post? Use Open Live Writer, make the table then copy-paste the table’s generated HTML source code!


Hardware costs

The current estimated hardware costs are quite high, at around £2100. However, about half of this budget (£955) is due to the fact that I calculated the costs of the custom 3D-printed parts by getting quotes from Shapeways. Getting them printed on a homemade 3D printer would reduce the cost significantly. The other large cost is naturally the 22 AX actuators, at £790.

For anyone interested, the preliminary BOM can be downloaded here:

Quadbot 17 BOM – 2017-02-26.xlsx


That’s it for now!

Quad-legged robot ideas

My humanoid robot updates have temporarily taken a backseat due to some other distractions, such as the FPV quadcopter, and now this! Updates on the Bioloid will still continue however, as I am far from done with it.

So as another side-project, I’ve been thinking about trying to build a custom robot from the ground-up, rather than basing it on an existing platform like the Bioloid.


CAD software

I have been using FreeCAD for the Bioloid project, which is a great free tool but somewhat hard to use and lacking many of the features found in modern CAD software.

As I need to design a large number of new parts, I’ve been looking around for a good CAD software with most or all of the following key features:

  • free, or at least licenced for hobbyist use
  • parametric modelling
  • export to STL
  • 3D printing support tools a plus, but not necessary
  • nice renderer?

Autodesk's line of tools seems to meet these requirements very well. Autodesk 123D seemed like an excellent choice, with more advanced features than its sibling Tinkercad, a CAD program which runs inside a web browser. However, Autodesk has recently been restructuring its suite of tools according to this announcement, so this led me to check out Fusion 360.

It has a very flexible licensing model, which means it can be used for free as a student, educator, start-up and, most importantly, as an enthusiast (non-profit). I have only been using it for a few days, but so far it seems impressive. It works very similarly to SolidWorks, and has a number of useful features such as direct integration with various 3D printing tools and services, and an easy-to-use renderer. One thing that may be seen as a downside is the fact that everything is stored on the cloud, but local backups are possible.

 


Components & 3D printing

The main design of the robot will be centred around the use of Dynamixel’s AX range of servos, as they are the most competitively priced motors I’ve found for the power and features they offer. Most other high-torque servos are prohibitively expensive, considering I will need about 16 and each costs over £100!

The exact model will probably be the AX-12A, which is an improved version of the AX-12+ used on the Bioloid. I might be able to stretch to the faster, more expensive AX-18A, however as their external design is identical, any frames used will be compatible with both.

 

For the basic servo joints I will be using a combination of the plastic frames in the Robotis range, as well as possibly some metal frames by Trossen Robotics. The rest of the robot will be designed with 3D printed parts in mind. Whether I go for an expensive online 3D printing service or try and revive my 3D printer remains to be seen.


Inspiration:

These are a few images as well as designs by other hobbyists which I am using as inspiration:

Onwards and upwards

It’s been a while since the last update on the Bioloid robot, and that has mainly been due to a recent distraction in the form of a racing quadcopter!

This is my first build of a drone/quad, and I wanted to create it from parts rather than from a kit in order to learn more about the current technologies and options available.


The quad

I have chosen a 210 mm X-frame quad, which in hindsight is maybe slightly too small for a beginner. Perhaps I should have gone for the larger 250 mm, but the 210 hasn't turned out too bad, other than the limited space for electronics and the steep learning curve in flying!

For the flight controller, I originally used an AfroFlight Naze32 Rev.6 10 DOF, but then switched to an SPRacingF3 mini, as I couldn’t run the XSR receiver in SBUS mode, use Smart Port telemetry (in SoftSerial mode) and control the LED strip all at the same time with the Naze32. The SPRacingF3 also has a much faster CPU and more expansion options for future add-ons, but it comes at a much higher price!

The receiver/transmitter are both from FrSky. The Taranis transmitter is a great piece of equipment and so far seems like a good investment in general. Perhaps I will write another post focusing more on this in the future.

Learning about the various comms protocols (PPM, SBUS, SmartPort etc.) and how to wire up the electronics was an interesting challenge, since I was using a mix of hardware and most of the information was scattered around the net. However, once the hardware was connected I was surprised at how easy it was to configure the software. Both the controllers I've used support Cleanflight, which is installed as a Chrome browser extension, and getting it to communicate with and control the FC was very straightforward.

As of now I have only flown line-of-sight a handful of times, and it is definitely a challenge, particularly as the sensitive throttle makes the altitude very tricky to control. Then again I have only been using the default PID controller settings, so tuning them may help.

A first-person view (FPV) camera and viewing system is definitely the top priority for future upgrades!


Pictures

 


Components

  • Chassis, main electronics & rotors
    • Frame: Fossils Stuff FSGX 210
    • Flight Controller (FC): Seriously Pro Racing F3 Mini
    • Motors: Cobra 2204-2300KV Brushless
    • Electronic Speed Controllers (ESCs): FVT Little Bee 20 Amp Pro OPTO
    • Propellers: Bullnose 5×4.5
  • Comms
    • Transmitter: FrSky Taranis X9D Plus
    • Receiver: FrSky XSR
  • Lighting & power
    • LED strip: Lumenier Digital RGB Arm LED Board
    • Power Distribution Board (PDB): CC3D
    • Battery: Drone Lab 1500 mAh 4S 50-100C 14.8 V LiPo

Tweaking PIDs

Following from my previous post on PID control, I have integrated my old test PID code into a more usable form in the GUI. I will quickly describe what the old test balancer did, and then the recent updates:

I started with the notes from this useful article as a baseline, but then built a very simple C++ class around it. I have also included some fairly standard modifications to the “vanilla” (ideal form) PID controller:

  • Calculating the derivative of the process variable, rather than of the error, to avoid large and sudden outputs in response to a large increase in error.
  • Prevention of integral windup by constraining the integral term and controller output between a minimum and maximum value. A rough sketch of both modifications is shown below.
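
The C++ class itself isn't reproduced here, but a minimal Python sketch of a PID update with those two modifications (gains and limits are placeholders) would look something like this:

class SimplePID:
    def __init__(self, kp, ki, kd, outMin, outMax):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.outMin, self.outMax = outMin, outMax
        self.integral = 0.0
        self.prevPV = None

    def update(self, setpoint, pv, dt):
        error = setpoint - pv
        # Integral term, constrained to prevent windup
        self.integral += self.ki * error * dt
        self.integral = max(self.outMin, min(self.outMax, self.integral))
        # Derivative of the process variable (not the error), to avoid
        # large output spikes when the setpoint suddenly changes
        dPV = 0.0 if self.prevPV is None else (pv - self.prevPV) / dt
        self.prevPV = pv
        output = self.kp * error + self.integral - self.kd * dPV
        # Constrain the controller output as well
        return max(self.outMin, min(self.outMax, output))

The same clamping limits are applied to both the integral term and the final output, which is what stops the integrator from winding up while the output is saturated.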

I then started adding additional features built around the controller, which are:

  • Derived class which works on the standard form of the PID controller
  • Qt “widget” class, which displays a number of spin boxes and sliders for controlling the three gains and the output limits of the PID (as well as the min/max limits of the user controls)
  • A graph to trace the set point (SP) and process variable (PV) against time
  • A procedure which starts/stops the PID loop for balancing, in a separate thread (more on this below)

The aim is to control the forward/backward balance of the robot on a slanting surface. The balancing works by adjusting the ankle motors to match the pitch of the IMU link (see older posts here and here). For example, if the robot is on a board which is tilted downwards by 5 degrees, the ankles will also point downwards by 5 degrees. Each motor's position is directly controlled by the IMU pitch, and the PID is actually controlling the moving speed with which the motors will reach that position. Unfortunately it's not possible to do torque or direct speed control with the AX-12 motors.
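
To make that split of responsibilities concrete, here is a rough sketch of the loop; readIMUPitch(), readAnklePosition(), setGoalPosition() and setMovingSpeed() are hypothetical stand-ins for the real ROS/Dynamixel calls, and ANKLE_IDS is a placeholder:

import time

ANKLE_IDS = [15, 16]                  # placeholder motor IDs

def readIMUPitch(): return 0.0        # stand-in: pitch of the IMU link (deg)
def readAnklePosition(): return 0.0   # stand-in: present ankle angle (deg)
def setGoalPosition(ids, deg): pass   # stand-in: Dynamixel goal position write
def setMovingSpeed(ids, speed): pass  # stand-in: Dynamixel moving speed write

def balanceLoop(pid, dt=0.02):
    while True:
        pitch = readIMUPitch()
        # The ankle goal position simply mirrors the IMU pitch...
        setGoalPosition(ANKLE_IDS, pitch)
        # ...while the PID output only sets how fast the ankles move there,
        # since the AX-12 offers no direct torque or velocity control.
        speed = pid.update(setpoint=pitch, pv=readAnklePosition(), dt=dt)
        setMovingSpeed(ANKLE_IDS, abs(speed))
        time.sleep(dt)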

With only some limited manual tuning, the initial results are fairly good for small and slow movements of a sloped surface, but it will need further tuning to stop the robot from oscillating at higher speeds and falling over. I will continue improving this method, possibly by using more PIDs to control the knee and hip motors at the same time. For now, here is an image of what it looks like running in the GUI:

Screenshot from 2016-07-20 23-37-37



Balancing ideas

In the past I had briefly tried a simple balancing function using a PID controller, which aims to balance the Bioloid using the ankle motors by trying to keep the IMU (accelerometer/gyro) vertical. The results were mixed, but it was only an early test. I am considering revisiting this balancing test, but this time using a number of PID controllers to control multiple groups of leg motors (e.g. hips, knees and ankles), while also using the GUI to make testing faster and easier.

 

On another note, I recently came across the Nengo Neural Simulator, which is a framework for building networks of leaky integrate-and-fire (LIF) neurons to create complex computational models. It has been used to create Spaun, the world's largest functional brain model, which is able to perform a number of functions such as vision, memory and counting, as well as drawing what it sees by controlling a simple arm.

What stands out is how easy it is to use the Nengo GUI to build neural networks. The interface runs in the browser, and visualisations of neuron spiking activity and other metrics are easily shown for each graphical object. There is also support for scripting in Python. Installing it and trying it out for yourself is pretty simple: just follow the Getting Started guide here.
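
As a taster of the Python scripting (a trivial standalone example I put together, nothing to do with Spaun), a couple of LIF ensembles can be created, connected and simulated in just a few lines:

import nengo

model = nengo.Network(label="Minimal example")
with model:
    stim = nengo.Node(0.5)                            # constant input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)   # 100 LIF neurons representing a scalar
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x**2)   # decode a nonlinear function of a
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)   # simulate for one second; sim.data[probe] holds the filtered output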

 

It would be really interesting to see if some form of PID controller could be built using Nengo, and then used to control the Bioloid’s balancing!

Qt Style Sheets and C++

This is a quick post to show some more updates to the styling of the GUI.

I have been experimenting with customising the look-and-feel of my GUI using Qt Style Sheets (QSS) which are closely related to HTML Cascading Style Sheets (CSS).

Customised widgets

I have so far customised most of the widgets which appear in the GUI, as shown in the following examples. I have chosen a blue/grey theme throughout, with some exceptions for specialised widgets and items.

Code

Initially the style sheets were embedded in QStrings inside the code, but they were dotted around various classes, which meant a lot of code was duplicated as I kept adding content. I then moved the style sheets (QSS) for each widget to a separate text file and set them via each class individually. Finally, I found a way of greatly simplifying this by putting all the styling into one single file, which is applied through the program's main window (QMainWindow). Exception widgets use a custom QSS Selector ID, which is set with setObjectName(). This is a nice way of applying specialised styles to specific widgets, such as a button that needs to stand out.

Here are two QSS examples of the standard and special button shown earlier:

QPushButton {
    background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1, stop: 0 lightsteelblue, stop: 1 steelblue);
    border-color: #8F8F91;
    border-style: outset;
    border-width: 4px;
    border-radius: 10px;
}
QPushButton:flat {
    border: none; /* no border for a flat push button */
}
QPushButton:default {
    border-color: royalblue; /* make the default button prominent */
}
QPushButton:pressed {
    background-color: qlineargradient(x1: 0, y1: 0, x2: 0, y2: 1, stop: 0 royalblue, stop: 1 dodgerblue);
    border-style: inset;
}
QPushButton#redPushButton {
    color: white;
    background-color: red;
    border: solid lightgrey;
    border-style: outset;
    border-width: 4px;
    border-radius: 4px;
}
QPushButton#redPushButton:flat {
    border: none;  /* no border for a flat push button */
}
QPushButton#redPushButton:default {
    border-color: grey;  /* make the default button prominent */
}
QPushButton#redPushButton:pressed {
    color: red;
    background-color: darkred;
    border: solid red;
    border-style: inset;
    border-width: 4px;
    border-radius: 4px;
}

As usual, the latest source code is available on GitHub if you want to have a look.

Motor dials updated

I have made some updates to the motor dials which control the motor positions. They can now change mode and control motor speed and load. Also, the GUI is regularly updated with some important feedback from each motor: motor voltage and temperature, LED and torque on/off state, and feedback on all the alarm states.

At this point I'm starting to think that an internal model of all the motor control table data would be useful! Rather than classes making direct requests to the motor controller to receive motor information, all the state data could be kept by the controller and updated regularly. Classes would then simply get the latest data from this model when needed. This is, however, partially the way the controller works already, as it has a model of the ROS-style joint states which holds present positions, present speeds and present torques (loads), as well as goal positions, moving speeds and torque limits. The joint states are published continuously by a ROS publisher.

Present and goal positions are the most important data, as the AX-12 by default only performs position control. Moving speed is simply the speed that the motor will use to move between positions, so it cannot be used for e.g. a velocity feedback loop. "Torque" is a bit of a misleading term here, as there is no torque sensing in the motors; torque sensors are only available in much more expensive motors than these. The load values reported by the AX-12 are related to the motor current, but cannot be read while the motor is actually moving. Two notable sources with more detailed information on this somewhat unclear measurement can be found here on the RoboSavvy forum and here.

 

I think I’m done with updating these graphical widgets for the time being, as it is detracting from the main goals of exploring MoveIt! and getting the robot walking.

Sensor grapher

A sensor plotting window has now been added to the GUI, which shows all data from the IMU and pressure sensors. The accelerometer, magnetometer, gyroscope, heading data and Force Sensing Resistors’ (FSRs) data are all published as ROS messages as shown in this post, so reading them in the Qt GUI is fairly straightforward, in a similar way to how the joint states are being read. The graphs are made using a third party library for Qt called QCustomPlot.

Each graph shows a scrolling 10-second window of buffered data, which can be paused/played. With QCustomPlot it's easy to enable user interactions with graphs (drag axis ranges with the mouse, zoom with the mouse wheel, etc.), so I enabled this option whenever the graph is in a paused state.

The y-axis units are currently showing raw data, which I will probably update to show standard values.

Screenshot from 2016-04-03 19-46-07.png

A useful thing I found in Qt with QDockWidget, which is used to create dockable/floating sub-windows, is that these widgets can also be tabbed to save space on the screen. How can this be done in code you may ask? The useful functions I found were: setDockNestingEnabled() (or setDockOptions()), tabifyDockWidget() for QMainWindow, and raise() to select the default tabbed dock widget you want displayed.

That’s pretty much all there is to the sensor grapher. I might add more features to it in the future, but for now it does the basics!

List of ideas

As I have many ideas floating about on what to work on next, I made a list to try and see which ones are worth prioritising:

  • GUI updates
    • Graph for visualising sensor data
    • Improved motion editing
  • Static balancing
  • Implement MoveIt! trajectory following via GUI
  • Walking routines
  • Advanced movement: trajectories generated from MoveIt!, combined with active balancing as the robot moves

This is just an initial list, which I hope to expand on in the future.

My immediate plans right now are to continue tidying up some GUI graphical details, implement a simple sensor data plotter (I know rqt_plot does a fairly good job of this already, but I’d like to have sensor data integrated in my GUI) and get back to trajectory generation tests using MoveIt!

More software updates

With the new PC’s development tools up and running, I’ve made a number of updates to the GUI side of things, as well as the background joint controller node. Here are some of the most important from this month.


Joint controller updates

An issue I was having with the joint controller node – a main function of which is to act as a ROS wrapper class around the Dynamixel API – was that I seemed to be getting erroneous values when executing a simple one-off read from the motors. I suspected this was related to the fact that the node is also constantly reading the motors’ feedback values as fast as possible in its main loop. Even though the Dynamixel library seems to have checks for a busy comms bus, there was likely some issue with multiple threads trying to access the bus. This was easily fixed by using ros::spinOnce() in the main loop instead of using an AsyncSpinner which starts up a separate thread.

There is still some problem where position values for some of the motors seem to be getting constantly corrupted, but only when all the torques are enabled and the motors are struggling to achieve all their goal positions (making the typical whining noise that servos have). I've yet to narrow down whether this is a software issue or not.

Update to read/write services

The ROS read/write services have been simplified now, and two of the most useful commands are the ones that perform a sync read and write across a number of motors. As these are ROS services they can also be called from a terminal command line. For example, to receive the current position, current speed and current torque of motors 1, 3 and 5 all at once, you can run:

rosservice call /ReceiveSyncFromAX '[1, 3, 5]' 36 3

36 is the start address for the low byte of the present position, and 3 indicates the number of control table values, including the start address, to read for each motor. This is calling the sync_read function that the USB2AX offers, via a ROS service.

In a similar way you can write goal position (100), goal speed (300) and max torque (512) to motors 1, 3 and 5 all at once by running:

rosservice call /SendSyncToAX '[1, 3, 5]' 30 '[100, 300, 512, 100, 300, 512, 100, 300, 512]'

One restriction is that the sync read and write functions can only read/write consecutive control table addresses, as explained in the USB2AX link above, as well as here in the Robotis documentation.


New motor control table value editor

On the GUI side of things, I’ve made a new “motor address editor” as an improvement on the previous “motor value editor”. The new editor allows reading/writing of all control table addresses of the AX-12. You don’t have to worry about writing low/high bytes to the two-byte values, as that is handled by the editor; simply write any valid value to the parameter of interest.
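
To illustrate the low/high byte handling that the editor takes care of (this isn't the GUI's actual code), a two-byte value such as a goal position just gets split like this before being written to consecutive addresses:

def toLowHighBytes(value):
    # Split a 16-bit control table value (e.g. goal position 0-1023)
    # into the low and high bytes expected at consecutive addresses.
    return value & 0xFF, (value >> 8) & 0xFF

low, high = toLowHighBytes(512)   # -> (0, 2): address 30 gets 0, address 31 gets 2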

I have kept the old editor, as it has two useful features: it lets me easily write the same value to all motors, and it can send position/speed/torque in "standard" units (rad, rad/s and torque %) instead of raw values.


Recording and executing poses

I added a new test function which moves the robot through a sequence of queued poses that are saved in the GUI. The robot pauses between each pose in the sequence, based on the pose's specified delay time.

Because the function's execution has to pause and wait for the specified dwell time, I added the code into a separate thread. Originally I did this by subclassing QThread, but I then updated it and created a "worker object" which is moved to a thread using moveToThread(), as this seems to be the preferred way in Qt.

Although this is very simplistic motion editor functionality which has been done many times before by other tools, I thought it would be useful to add to my GUI, as it could prove useful in the future as a complement or even a replacement of the MoveIt! ideas if they don’t work out. This GUI is now essentially a one-stop shop for easily testing various ROS functionality and other ideas as I keep on developing along the way!

New development PC

Progress has been slow as my laptop is having overheating problems, so running ROS for any extended periods was getting difficult. For this reason I have almost finished setting up a Linux development environment on my main PC which is more than capable of running ROS and all the graphical tools. I will keep using the laptop for less intensive tasks and for development away from home.

I am using Linux Mint 17.3 Rosa 64-bit on the PC vs Linux Mint 17 Qiana 32-bit on the laptop (laptop had Wi-Fi driver issues with 64-bit version), but still sticking with ROS Indigo instead of upgrading to the more recent Jade, to avoid any surprises.

Installation of ROS was straightforward as with the laptop, and follows the Ubuntu installation instructions, as Mint is built on Ubuntu. The only edit I had to make was to change the repository name to use the Ubuntu Trusty codename instead of the Mint codename, so the repository name looks like this:

deb http://packages.ros.org/ros/ubuntu trusty main

I use Dropbox to keep everything in sync and share the same catkin workspace between the two machines. The only minor annoyance is having to delete the build and devel folders and rebuilding my packages when switching between machines, because of the 32-bit/64-bit difference.

Fitting the foot force sensors

I finally got round to installing the force sensing resistors to the underside of the Bioloid’s feet. I am using the FSR 400 Short model, with male end connectors, which are small and flat and didn’t fit in the square female header pins which often come at the end of breadboard wires. I didn’t want to solder directly as I’ve damaged these sensors in the past, so I followed a suggestion on the Adafruit website and ended up using these PCB terminal blocks.

The force sensing element of the FSR fits nicely in the pre-existing cut-out on the underside of the Bioloid foot plate, and the pins are fed through a hole in the plate so they can be connected on the topside. However, in the standard Bioloid humanoid configuration, the foot plates are attached at an offset from the ankle angle brackets, which means there is not enough space between the footplate and the bracket to connect the pins. To fix this, I moved the footplates so that the ankle brackets sit along their centres. There is still some free space between the footplates when the Bioloid sits in its resting position, so hopefully the feet will not collide with each other after this change.

For completeness, I also made changes to the CAD model and also added FSR CAD models to the feet, although that is just a cosmetic addition to the model (on a related side-note, I also found a CAD model of the USB2AX which adds to the realism)! The robot’s URDF files and CAD models have been updated on GitHub.

Here are some pictures of the results: