
Body/spine and improved leg kinematics


The Python test program has been updated to include the additional spine joints, transformation between the robot and world coordinates, and leg targets which take orientation into account.

This test script is written in anticipation of controlling the actual robot’s servos.



Spine joints

The “spine” consists of two joints which will allow the front of the robot to pitch and yaw independently of the rear. This will give it more flexibility when turning and handling uneven terrain, as well as other tasks such as aiming its sensors at the world.

Since the spine joints are quite simple, I don’t think there is any need to create IK for this section.
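As a rough sketch of how the two joints can be handled directly in the forward kinematics (the joint order and axis conventions here are my own assumptions, not taken from the test script):

import math
import numpy as np

def spineTransform(yawDeg, pitchDeg):
    # Hypothetical sketch: transform of the rear body section w.r.t. the
    # front section, modelled as a yaw rotation followed by a pitch rotation.
    cy, sy = math.cos(math.radians(yawDeg)), math.sin(math.radians(yawDeg))
    cp, sp = math.cos(math.radians(pitchDeg)), math.sin(math.radians(pitchDeg))
    Rz = np.matrix([[cy, -sy, 0, 0],
                    [sy,  cy, 0, 0],
                    [ 0,   0, 1, 0],
                    [ 0,   0, 0, 1]])
    Ry = np.matrix([[ cp, 0, sp, 0],
                    [  0, 1,  0, 0],
                    [-sp, 0, cp, 0],
                    [  0, 0,  0, 1]])
    return Rz * Ry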


Main Assembly v114 Spine

The “spine” joints separate the body of the quadruped into mostly similar front and rear sections.



Body and spine orientation

The robot body now takes into account that it has to be oriented w.r.t. the “world”. Physically, this will be achieved using the information acquired by an IMU sensor. If the robot is tilted forwards, the targets for the legs will have to be adjusted so that the robot maintains its balance.

I have defined the kinematics in a way that if the robot was to rotate w.r.t. the world, the whole body rotates (this can be achieved by moving the test Roll/Pitch/Yaw sliders). However, if the servo joints of the spine are moved (test joint 1 / joint 2 sliders), the rear section of the robot will move w.r.t. the world, and the rear legs will move with it, while the front section won’t change w.r.t. the world.

In order to achieve this, the leg IK had to be updated so that now the base frames of the front legs are linked to the front section of the robot, and the base frames of the rear legs are linked to the rear section.
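In code terms, the frame chain looks something like the sketch below (the variable names are illustrative, not from the actual script):

def legBaseFrames(T_Body_in_W, T_Spine_in_Body, T_FrontBase_in_Body, T_RearBase_in_Spine):
    # Hypothetical sketch: the front leg bases hang directly off the body's
    # world transform, while the rear leg bases also pick up the spine joint
    # transform, so moving the spine servos moves only the rear section
    # (and its legs) w.r.t. the world.
    T_FrontBase_in_W = T_Body_in_W * T_FrontBase_in_Body
    T_RearBase_in_W = T_Body_in_W * T_Spine_in_Body * T_RearBase_in_Spine
    return T_FrontBase_in_W, T_RearBase_in_W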

You might notice that while orientation will be defined by an IMU, pure translation (movement in XYZ) in the world is not considered for now, as it is meaningless without some sort of localisation capability in place. This could be achieved by a sensor (see below), but is an entirely separate challenge for a long way down the line (hint: SLAM).

Quadbot 17 Quad Kinematics_004

New leg targets: Foot roll/pitch can be attained (within limits). In addition, the robot base can be positioned with respect to an outside world frame.

Quadbot 17 Quad Kinematics_002

Original leg targets: The feet are always pointing vertical to the ground.



Target roll and yaw

Initially, the leg target was simply a 3D position for the foot link, with the foot always pointing perpendicular to the ground, which made the inverse kinematics fairly easy. In version 2, the target orientation is also taken into account. More precisely, pitch and roll can be targeted, but yaw cannot, simply because of the mechanics of the legs. Yawing, or turning, can be done by changing the walking gait pattern alone, but the idea is that the spine bend will also aid in steering the robot (how exactly, I don’t know yet, but that will come later!).

Getting the kinematics to work was a bit trickier than in the original version, mainly because the “pitching” orientation of the leg can only be achieved by the positioning of joint 4, whereas the “rolling” orientation can only be achieved by the positioning of joint 5. The available workspace of the foot is also somewhat limited, in part due to that missing yaw capability. In particular, at positions where the leg has to stretch sideways (laterally), certain roll/pitch combinations are impossible to reach. Nevertheless, this implementation gives the feet enough freedom to be placed on fairly uneven surfaces, rather than being constrained to the previous flat plane.
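A rough sketch of the idea (my own simplification, not the actual solver): since joints 2-4 all rotate about parallel pitch axes, joint 4 can absorb whatever pitch remains once joints 2 and 3 are placed, and joint 5 likewise absorbs the roll left over from joint 1:

def solveAnkle(targetPitchDeg, targetRollDeg, angles):
    # Hypothetical sketch, ignoring joint limits and reachability checks:
    # joint 4 takes up the remaining pitch, joint 5 the remaining roll.
    # With a zero roll target this reduces to the original angles[4] = -angles[0].
    angles[3] = targetPitchDeg - (angles[1] + angles[2])
    angles[4] = targetRollDeg - angles[0]
    return angles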

The next challenge that follows from this is: how can realistic target positions and orientations be generated (beyond predetermined fixed walking gaits), to match what the robot sees of the world?

To answer this, I first need to decide how the robot will see the world: primarily, it will be by means of some 3D scanner, such as the ones I’ve looked into in the past, or maybe the Intel RealSense ZR300 which has recently caught my attention. But this alone might not be sufficient, and some form of contact sensors on the feet may also be required.

The big question is: should I get a RealSense sensor for this robot? 🙂


Updated code can be found on GitHub (the single-file test script is starting to get long; it might be time to split it up into class files!).


Second leg assembly and painting

With enough motors and brackets to build a second leg, the hardware build continues! I have spray-painted all the metal brackets to go with an all-blue colour scheme. The Robotis plastic brackets were hard to source online, so I got them printed by Shapeways.

I re-purposed the test rig frame used for the single leg to make a platform for the two legs. It’s made out of MakerBeam XL aluminium profiles which are very easy to change around and customise to any shape. This base will work well until I get the rest of the plastic parts 3D printed and the metal parts cut.

I also had enough parts and motors to assemble the 2-axis “spine”, but the main frame is not yet built, so that part is set aside for now.

Here are a few photos of the build:

In the next post I will concentrate on software updates to the leg and spine kinematics.

Kinematics x4

I have updated the quadruped kinematics program to display all four legs and calculate their IK. I have also started working on how various walking gaits can be loaded and executed. I found an interesting creeping walk gait in this useful quadruped robot gait study article, and replicated it below for use with my robot:


Creep_Gait


As you can see, the patterns are replicated in quadrants, in order to complete a full gait where each leg is moved forward once. In my test program, I use the up/down and forward/back position of each leg to drive its foot target, as was done previously with the GUI sliders and gamepad.
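A sketch of that quadrant idea, assuming each leg replays the same sequence of foot values shifted by a quarter of the cycle (the data layout is illustrative, not the actual program’s):

def buildGait(legSequence):
    # Hypothetical sketch: one leg's list of (fwd/back, up/down) values is
    # phase-shifted by a quarter cycle per leg, so each leg steps forward
    # exactly once per full gait.
    n = len(legSequence)
    phases = [0, n // 4, n // 2, 3 * n // 4]
    return [[legSequence[(i + p) % n] for i in range(n)] for p in phases]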

The lateral swing of each leg (first joint) is not changed, but this can be looked at later. The “ankle” (last two joints) is controlled such that the leg plane is always parallel to the ground.

This is what the current state of the program looks like:


Quadbot 17 Quad Kinematics_002


The foot target values are loaded from a CSV into the Python program, and the IK is run for each leg, going through the whole gait sequence:


testQuadGait
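Schematically, the loading side might look like this sketch (the CSV layout is an assumption on my part):

import csv

def loadGait(filename):
    # Hypothetical sketch: each CSV row holds the eight foot target values
    # (up/down and fwd/back for each of the four legs), one row per pose.
    with open(filename) as f:
        return [[float(v) for v in row] for row in csv.reader(f)]

Each row is then mapped to the four foot targets, and the IK is run for every leg in turn.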


I have also added the option to interrupt and run another gait sequence at any point. The reason for this is to experiment with how best to switch or blend from one gait to another. For now, if a new gait is loaded, the program stops the current gait and compares the current pose to all poses in the new gait. The new gait then starts at the pose which most closely matches the current pose, using a simple distance metric.
If the 8 values of up/down/fwd/back for all legs are in an array LastPose for the last pose the robot was in, and CurrPose for the current pose of interest from the new gait, then the metric looks something like this:

def distanceMetric(CurrPose, LastPose, threshold, penalty):
    # Sum of per-direction absolute differences, with a penalty added to
    # any direction whose difference exceeds the threshold.
    metric = 0.0
    for i in range(8):
        d = abs(CurrPose[i] - LastPose[i])
        if d > threshold:
            d += penalty
        metric += d
    return metric

If the distance in any particular direction is larger than some threshold, then an arbitrary penalty value can be added. This will bias the calculation against outliers: a pose with evenly distributed distances will be preferred over a pose with an overall small distance but large distance in one direction. This may not actually be of much use in practice, but can be tweaked or removed later.
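Picking the starting pose of the new gait is then just a minimum search over the metric, along the lines of (variable names assumed):

# Sketch with assumed names: start the new gait at its best-matching pose.
startIndex = min(range(len(newGait)),
                 key=lambda j: distanceMetric(newGait[j], LastPose, threshold, penalty))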

The above pose-switching idea will be expanded on, so that the robot can seamlessly blend between predefined walking gaits, e.g. in order to turn left or right, or to speed up and slow down.

The next step is to start porting all this test code onto the ArbotiX, which poses a few minor challenges: ideally, the IK matrix operations need to be done efficiently, without the overhead of additional libraries. The gait values, which are currently loaded from a CSV file, will need to be hard-coded; however, this should be simple to do since, as shown above, a gait uses the same target values rearranged across quadrants.
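For example, the 4x4 transform multiplications can be written out with plain loops, which translate almost line-for-line to Arduino C (a sketch, not the ported code):

def matMul4(A, B):
    # Plain nested-loop 4x4 matrix multiply on lists of lists: no numpy,
    # so it ports directly to C arrays on the ArbotiX.
    C = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            for k in range(4):
                C[i][j] += A[i][k] * B[k][j]
    return C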


Until next time!

5DOF leg test rig build and gamepad control

The hardware for the test rig has arrived, and the first leg has been set up!

The metal brackets are from Trossen Robotics and largely match the dimensions of their equivalent plastic parts from Robotis. Some parts do not have metal counterparts, so I used spare plastic parts from my Bioloid kit. I also ordered an ArbotiX-M controller (Arduino-based), which communicates with the PC via a spare SparkFun FT232RL I had. The test rig frame is made out of MakerBeam XL aluminium profiles.
Unfortunately, I thought I had more 3-pin cables than I actually did, so I can’t control all the motors just yet. However, I’ve got an Xbox One controller controlling the IK’s foot target position and a simple serial program on the ArbotiX-M which receives the resulting position commands for each motor.


The code for both the Python applet and the Arduino can be found on GitHub here.


Some of the more useful snippets of code are shown below:

The following code handles gamepad inputs and converts them to a natural-feeling movement, based on equations of motion. The gamepad input gets converted into a virtual force, which is opposed by a drag proportional to the current velocity. The angles resulting from the IK are sent to the controller via serial port:


class GamepadHandler(threading.Thread):
    def __init__(self, master):
        self.master = master
        # Threading vars
        threading.Thread.__init__(self)
        self.daemon = True  # OK for main to exit even if instance is still running
        self.paused = False
        self.triggerPolling = True
        self.cond = threading.Condition()
        # Input vars
        devices = DeviceManager()
        self.target = targetHome[:]
        self.speed = [0, 0, 0]
        self.inputLJSX = 0
        self.inputLJSY = 0
        self.inputRJSX = 0
        self.inputRJSY = 0
        self.inputLJSXNormed = 0
        self.inputLJSYNormed = 0
        self.inputRJSXNormed = 0
        self.inputRJSYNormed = 0
        self.dt = 0.005

    def run(self):
        while 1:
            with self.cond:
                if self.paused:
                    self.cond.wait()  # Block until notified
                    self.triggerPolling = True
                else:
                    if self.triggerPolling:
                        self.pollInputs()
                        self.pollIK()
                        self.pollSerial()
                        self.triggerPolling = False
            # Get joystick input
            try:
                events = get_gamepad()
                for event in events:
                    self.processEvent(event)
            except:
                pass

    def pause(self):
        with self.cond:
            self.paused = True

    def resume(self):
        with self.cond:
            self.paused = False
            self.cond.notify()  # Unblock self if waiting

    def processEvent(self, event):
        #print(event.ev_type, event.code, event.state)
        if event.code == 'ABS_X':
            self.inputLJSX = event.state
        elif event.code == 'ABS_Y':
            self.inputLJSY = event.state
        elif event.code == 'ABS_RX':
            self.inputRJSX = event.state
        elif event.code == 'ABS_RY':
            self.inputRJSY = event.state

    def pollInputs(self):
        # World X
        self.inputLJSYNormed = self.filterInput(-self.inputLJSY)
        self.target[0], self.speed[0] = self.updateMotion(self.inputLJSYNormed, self.target[0], self.speed[0])
        # World Y
        self.inputLJSXNormed = self.filterInput(-self.inputLJSX)
        self.target[1], self.speed[1] = self.updateMotion(self.inputLJSXNormed, self.target[1], self.speed[1])
        # World Z
        self.inputRJSYNormed = self.filterInput(-self.inputRJSY)
        self.target[2], self.speed[2] = self.updateMotion(self.inputRJSYNormed, self.target[2], self.speed[2])
        with self.cond:
            if not self.paused:
                self.master.after(int(self.dt*1000), self.pollInputs)

    def pollIK(self):
        global target
        target = self.target[:]
        runIK(target)
        with self.cond:
            if not self.paused:
                self.master.after(int(self.dt*1000), self.pollIK)

    def pollSerial(self):
        if 'ser' in globals():
            global ser
            global angles
            writeStr = ""
            for i in range(len(angles)):
                x = int( rescale(angles[i], -180.0, 180.0, 0, 1023) )
                writeStr += str(i+1) + "," + str(x)
                if i < (len(angles) - 1):
                    writeStr += ","
                else:
                    writeStr += "\n"
            ser.write(writeStr)
        with self.cond:
            if not self.paused:
                self.master.after(int(5*self.dt*1000), self.pollSerial)  # 5x slower than pollIK

    def filterInput(self, i):
        if (i > 3277) or (i < -3277):  # ~10% deadzone
            if i > 3277:
                oldMax = 32767
            elif i < -3277:
                oldMax = 32768
            inputNormed = math.copysign(1.0, abs(i)) * rescale(i, 0, oldMax, 0, 1.0)
        else:
            inputNormed = 0
        return inputNormed

    def updateMotion(self, i, target, speed):
        c1 = 10000.0
        c2 = 10.0
        mu = 1.0
        m = 1.0
        u0 = speed
        F = c1*i - c2*u0  # Force minus linear drag
        a = F/m
        t = self.dt
        x0 = target
        # Equations of motion
        u = u0 + a*t
        x = x0 + u0*t + 0.5*a*pow(t, 2)
        # Update self
        target = x
        speed = u
        return target, speed

def rescale(old, oldMin, oldMax, newMin, newMax):
    oldRange = (oldMax - oldMin)
    newRange = (newMax - newMin)
    return (old - oldMin) * newRange / oldRange + newMin


The following Python code is the leg’s Forward Kinematics:


def runFK(angles):
    global a
    global footOffset
    global T_1_in_W
    global T_2_in_W
    global T_3_in_W
    global T_4_in_W
    global T_5_in_W
    global T_F_in_W

    a = [0, 0, 29.05, 76.919, 72.96, 45.032]  # Link lengths "a-1"

    footOffset = 33.596

    s = [0, 0, 0, 0, 0, 0]
    c = [0, 0, 0, 0, 0, 0]
    for i in range(1,6):
        s[i] = math.sin( math.radians(angles[i-1]) )
        c[i] = math.cos( math.radians(angles[i-1]) )

    # +90 around Y
    T_0_in_W = np.matrix( [ [  0,  0,  1,  0],
                            [  0,  1,  0,  0],
                            [ -1,  0,  0,  0],
                            [  0,  0,  0,  1] ] )

    T_1_in_0 = np.matrix( [ [ c[1], -s[1],  0, a[1]],
                            [ s[1],  c[1],  0,    0],
                            [    0,     0,  1,    0],
                            [    0,     0,  0,    1] ] )

    T_2_in_1 = np.matrix( [ [ c[2], -s[2],  0, a[2]],
                            [    0,     0, -1,    0],
                            [ s[2],  c[2],  0,    0],
                            [    0,     0,  0,    1] ] )

    T_3_in_2 = np.matrix( [ [ c[3], -s[3],  0, a[3]],
                            [ s[3],  c[3],  0,    0],
                            [    0,     0,  1,    0],
                            [    0,     0,  0,    1] ] )

    T_4_in_3 = np.matrix( [ [ c[4], -s[4],  0, a[4]],
                            [ s[4],  c[4],  0,    0],
                            [    0,     0,  1,    0],
                            [    0,     0,  0,    1] ] )

    # Link twist alpha_4 = -pi/2
    T_5_in_4 = np.matrix( [ [ c[5], -s[5],  0, a[5]],
                            [    0,     0,  1,    0],
                            [-s[5], -c[5],  0,    0],
                            [    0,     0,  0,    1] ] )

    T_F_in_5 = np.matrix( [ [  1,  0,  0,  footOffset],
                            [  0,  1,  0,  0],
                            [  0,  0,  1,  0],
                            [  0,  0,  0,  1] ] )

    T_1_in_W = T_0_in_W * T_1_in_0
    T_2_in_W = T_1_in_W * T_2_in_1
    T_3_in_W = T_2_in_W * T_3_in_2
    T_4_in_W = T_3_in_W * T_4_in_3
    T_5_in_W = T_4_in_W * T_5_in_4
    T_F_in_W = T_5_in_W * T_F_in_5


The following Python code is the leg’s Inverse Kinematics:


def runIK(target):
    # Solve Joint 1
    num = target[1]
    den = abs(target[2]) - footOffset
    a0Rads = math.atan2(num, den)
    angles[0] = math.degrees(a0Rads)

    # Lengths projected onto z-plane
    c0 = math.cos(a0Rads)
    a2p = a[2]*c0
    a3p = a[3]*c0
    a4p = a[4]*c0
    a5p = a[5]*c0

    j4Height = abs(target[2]) - a2p - a5p - footOffset

    j2j4DistSquared = math.pow(j4Height, 2) + math.pow(target[0], 2)
    j2j4Dist = math.sqrt(j2j4DistSquared)

    # # Solve Joint 2
    num = target[0]
    den = j4Height
    psi = math.degrees( math.atan2(num, den) )
    num = pow(a3p, 2) + j2j4DistSquared - pow(a4p, 2)
    den = 2.0*a3p*j2j4Dist
    if abs(num) <= abs(den):
        phi = math.degrees( math.acos(num/den) )
        angles[1] = - (phi - psi)

    # Solve Joint 3
    num = pow(a3p, 2) + pow(a4p, 2) - j2j4DistSquared
    den = 2.0*a3p*a4p
    if abs(num) <= abs(den):
        angles[2] = 180.0 - math.degrees( math.acos(num/den) )

    # # Solve Joint 4
    num = pow(a4p, 2) + j2j4DistSquared - pow(a3p, 2)
    den = 2.0*a4p*j2j4Dist
    if abs(num) <= abs(den):
        omega = math.degrees( math.acos(num/den) )
        angles[3] = - (psi + omega)

    # Solve Joint 5
    angles[4] = - angles[0]

    runFK(angles)


The following Arduino code is the serial read code for the ArbotiX-M:


#include <ax12.h>

const int numOfJoints = 5;

void setup()
{
  Serial.begin(38400);
}

void loop()
{
}

void serialEvent() {
  int id[numOfJoints];
  int pos[numOfJoints];
  while (Serial.available())
  {
    for (int i = 0; i < numOfJoints; ++i)
    {
      id[i] = Serial.parseInt();
      pos[i] = Serial.parseInt();
    }
    if (Serial.read() == '\n')
    {
      for (int i = 0; i < numOfJoints; ++i)
      {
        if ( (id[i] > 0) && (0 <= pos[i]) && (pos[i] <= 1023) )
          SetPosition(id[i], pos[i]);
      }
    }
  }
}

Quadbot Inverse Kinematics

Following from the previous post, the Inverse Kinematics (IK) have been calculated, just in time for the test rig hardware, which is arriving this week.

I found a geometrical solution to the IK by breaking the 3D problem down into two 2D problems:

Looking at the leg from the back/front, only joints 1 & 5 determine the offset along the world Y axis. The joints are equalised so that the foot is always facing perpendicular to the ground.

Looking at the leg from the side, the remaining joints 2, 3 & 4 are calculated using mostly acos/atan2 functions. Note that the leg link lengths need to be projected onto the z-plane, which is why the a2p, a3p etc. notation is used.

DrawingForwards

Trigonometry based on front view.

DrawingSide

Trigonometry based on side view.
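For example, joint 3 follows directly from the law of cosines on the side-view triangle formed by the projected links a3p and a4p and the joint-2-to-joint-4 distance:

angles[2] = 180 - acos( (a3p^2 + a4p^2 - j2j4Dist^2) / (2 * a3p * a4p) )

which is exactly the acos term used for joint 3 in the runIK code above.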

The test program now has controls for positioning a target for the foot to follow, based on the IK. This was very helpful for testing and debugging. A test function was also added which simply moves the target in an elliptical motion for the foot to follow. It is a very simple way of making the leg walk, but could be used as the foundation for a simple four-legged walking gait.

Quadbot 17 Kinematics_003

Kinematics test program updated with target visual and sliders.

testIKEllipse

Test of IK using elliptical motion target which the foot (F) follows.
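The elliptical test motion itself is just a parametric target for the foot; as a sketch (axis conventions assumed):

import math

def ellipseTarget(t, centre, xRadius, zRadius):
    # Hypothetical sketch: sweep the foot target around an ellipse in the
    # X-Z plane; t runs from 0 to 1 over one cycle.
    x = centre[0] + xRadius * math.cos(2 * math.pi * t)
    z = centre[2] + zRadius * math.sin(2 * math.pi * t)
    return [x, centre[1], z]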

The next tasks are to build the physical test rig, learn how to use the Arbotix controller and port the IK code to Arduino.


The kinematics are written in Python/Tkinter, and the code can be found on GitHub.

The geometrical drawings were made using the browser-based GeoGebra app.

Quadbot Forward Kinematics

The Forward Kinematics for the left leg of the Quadbot have been formalised, using modified Denavit-Hartenberg parameters and axes conventions.

I also made a simple Python applet to verify the maths and visualise the leg’s poses. I used Tkinter and three Canvas widgets to show orthogonal views.

The reason I am testing the maths in a quick Python program is that I want to be able to port them easily over to Arduino, as my latest aim is to drop the Raspberry Pi and A-Star 32U4 LV Pi expansion module (shown in some of the latest CAD models) in favour of trying out an ArbotiX controller. A benefit with the latter is that I wouldn’t need a Dynamixel-to-USB converter (e.g. USB2AX) or separate motor power supply.

Next up will be to work out the Inverse Kinematics.

j | Link Twist (alpha_i-1) | Link Length (a_i-1) | Link Offset (d_i) | Joint Angle (theta_i)
1 | 0                      | 0                   | 0                 | th_1
2 | pi/2                   | 29.05               | 0                 | th_2 - 34
3 | 0                      | 76.919              | 0                 | th_3 + 67.5
4 | 0                      | 72.96               | 0                 | th_4
5 | -pi/2                  | 45.032              | 0                 | th_5

D-H Parameters
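Each table row plugs into the standard modified D-H link transform; as a sketch, the generic matrix (consistent with the hand-written matrices in the FK test code) is:

import math
import numpy as np

def dhTransform(alphaPrev, aPrev, d, theta):
    # Modified D-H (Craig) link transform: Rx(alpha_i-1) * Tx(a_i-1) *
    # Rz(theta_i) * Tz(d_i), with angles in radians.
    ca, sa = math.cos(alphaPrev), math.sin(alphaPrev)
    ct, st = math.cos(theta), math.sin(theta)
    return np.matrix([[ct,    -st,    0,  aPrev],
                      [st*ca, ct*ca, -sa, -d*sa],
                      [st*sa, ct*sa,  ca,  d*ca],
                      [0,     0,      0,      1]])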

Quadbot 17 Kinematics_001

Quadbot kinematics applet, zeroed position

Quadbot 17 Kinematics_002

Quadbot kinematics applet, test position using sliders