Depth sensor

I have finally made a decision on the sensor and ordered the Intel RealSense D435 Depth Camera! I also found high-quality CAD models of the whole D400 series here: Production CAD Files for Intel RealSense D400 Series.

After importing into Fusion 360 and adding a touch of texturing, I added it to the robot model to show a sense of scale. It needs a 3D printed mount to be properly attached to the top of the front body; or better yet, a re-design of the front bumper to integrate the camera inside it, giving an unobstructed forward-facing view.

The camera will be controlled by a PC/laptop for now, but will later need to be handled by a beefier on-board controller.

Sensor mount

The mount will be made with some standard Bioloid parts and 2 additional AX-12 servo motors. Here is the current concept:

Forward planning

Path generation and following

The current walking gait system is mostly just a test of the kinematics and user inputs. To make the robot truly manoeuvrable, it needs sensors and better foot positioning techniques. The following diagrams are my initial attempt at deciding exactly how to achieve this. This is a very first look, and the actual implementation will probably vary significantly, but it provides a good starting point for exploring further ideas:

The problem is broken down into three levels:

  • Level 1: Navigation
    • Scanning of area with 3D sensor
    • Making decisions on how to approach a goal
  • Level 2: Stepping
    • Deciding on appropriate gait parameters
    • Generating the foot target positions within the environment
  • Level 3: Trajectory
    • Planning and executing the foot trajectories
    • Reacting to any obstacles encountered by the feet

Required hardware updates

These are the hardware updates that will be needed soon, in order to improve on and then progress beyond the basic walking gaits:

1) Torsion springs

The load on the critical joints will be reduced by adding torsion springs to the hip/knee motors.

2) Foot sensors

This is a must if the robot's feet are to account for rough or uneven terrain, as well as the load distribution of the robot. I’ve been looking at ways of adding foot sensors via the AX-12 daisy-chain rather than separately, to avoid adding multiple cable bundles to each foot. The only existing off-the-shelf option I have found is too expensive (ROBOTIS OP2-FSR Set). However, a perfect open-source solution looks like the Force foot design by Rhoban (thanks to B[] for the suggestion!).

3) 3D sensor

The ability to get useful information from the environment can be achieved by adding a 3D sensor head to the robot, such as an Intel RealSense depth camera. This however will have significant knock-on effects on the rest of the design, such as on power consumption, and the need for powerful on-board processing, which the Raspberry Pi won’t be able to handle.


More future ideas

Better control

Replacing the motors would only be an option in an entirely new hardware iteration of this robot, as the design is based around the AX-12s. However, I may explore upgrades to the motor control, beyond the AX-12s’ internal compliance-based controllers. Two potential options:

  • External PID loop wrapped around the existing internal controller
  • Firmware updates such as this one, in order to have direct PID control

Modelling

I would like to better understand the dynamics of legged locomotion, and developing a Simulink model would help. Something like this, but on four legs, would really be the ultimate goal!

Heads or Tails?

 

I have recently been prioritising update logs on the Hackaday page of this project, due to the previous deadline! For completeness, here are some more details about the process of building the add-on tail.

 


 

The CAD model went through a couple of iterations before I decided on the final form: from eight polygonal sections to six smoother, smaller sections.

When 3D printing, the base section went through a re-design, since the original idea of just gluing the small section to the base was clearly not going to provide enough stability. I also removed yet another link, to avoid the tail ending up too big and heavy.

The base section of the tail now screws onto two of the existing holes on the rear base, meaning there is no need to modify it, other than replacing two M3 bolts with longer ones.

The tail links are all held together with a 150 mm long, 5 mm diameter spring, scavenged from a flexible long reach pick-up tool.

 


To make the models easy to 3D print, I sliced them all down the centre, then glued the halves together; Loctite super glue works well with PLA (the gel type works best). There were a few failed prints in the process, which I put down to rushing, and to using PLA which had been out in the air gathering moisture for several months! Prints were made on a Flashforge Creator Pro, with 15% infill.

Here are some pictures of the printing and assembly progress, leading to the final result:

 

 


 


 

 

Hackaday win!

Some good news for the quadruped project, as I entered it into the Human Computer Interface Challenge, part of the larger 2018 Hackaday Prize contest, and it was one of the 20 winners!

Here are the links to the announcement and highlight articles:

 


 


 

Walking gait smoothing and tuning

I added some exponential smoothing to the original walking gaits, to smooth the edges of the trajectories and create a more natural movement.

I then added a ±30° pitching motion to what best approximates the ‘ankle’ (4th joint), to emulate the heel off and heel strike phases of the gait cycle.


The range of motion of the right ankle joint. Source: Clinical Gate.


I realised however that applying the pitch to the foot target is not exactly the same as applying the pitch to the ankle joint. This is because, in the robot’s case, the ‘foot target’ is located at the centre of the lowest part of the foot, which comes into contact with the ground, whereas the ankle is higher up, at a distance of two link lengths (in terms of the kinematics, that’s a4+a5). The walking gait thus does not end up producing the expected result (this is best explained by the animations at the end).

To account for this, I simply had to adjust the forward/backward and up/down position of the target, in response to the required pitch.

With some simple trigonometry, the fwd/back motion is adjusted by -A*sin(pitch), while the up/down motion is adjusted by +A*(1-cos(pitch)), where A is the distance a4+a5 noted previously.

Here are the details showing how the targets are adjusted for one particular phase in the creep walk gait (numbers involving translations are all normalised because of the way the gait code currently works):

Pitch Adjustments Graph

Creep walk gait, 25-step window where the foot target pitch oscillates between ±30°. Original target translations (fwd/back, up/down) are shown compared to the ones adjusted in response to the pitch.


The final results of the foot pitching, with and without smoothing, are shown below:


 

Steering input adjustments

Following on from the previous post on walking and steering, I realised that when moving the spine joints, the rear feet remain anchored to the ground, whereas it would be better if they rotated around the spine motors, giving a better turning circle for steering.

The feet remain fixed because their targets are defined in world coordinate space, so moving the spine won’t change the targets.

There are advantages to defining the targets in world space for future work, when the robot knows more about its environment. For example, the legs can be positioned in the world in order to navigate upcoming terrain or obstacles. But for now, it is often useful to work in coordinates local to the base (front base for front legs, rear base for rear legs), since this way you don’t have to worry about the relative positioning of the front base w.r.t. the rear.

I will eventually update the kinematics code so either world or local targets can be selected.

For now however, I have made an update to the code, so if the spine joint sliders, gaits or walking/steering inputs are used, the rear leg targets move with the spine. To explain this better visually:


Another minor adjustment you might notice was the widening of the stance, to provide a larger support polygon. The walking gaits still need fine-tuning, as walking on the actual robot is still unstable and slow.

 

Walking and steering inputs

Some slow progress lately, but progress nonetheless. First, I have updated the walking gaits with additional target values. Second, I have added a new input mode which allows the predefined walking gaits to be scrolled through via the keyboard or controller inputs. In effect, this means the controller can be used to “remote-control” the walking of the robot! The walking gaits still need a lot of tuning, but the basic function is now implemented.

I have updated the CSV spreadsheet for gait data, so that it now includes the 5 possible degrees-of-freedom of each foot (XYZ and Roll/Pitch), the 6 DoF of the base, and the 2 spine joints.

Gait_Values_Walk

The walking gait’s updated list of foot target values (first 50 out of 100).

Gait_Graph_Walk

The foot target values visualised (base and spine joints not shown).


In Python, all the CSV data is loaded into an array. One of the keyboard/controller inputs can now also be used to update an index that scrolls forwards/backwards through the array’s rows.

Next, to get the robot to turn, a second input controls a deflection value which adjusts one of the spine joints and the base orientation (as was mentioned in this past post). The deflection slowly decays back to 0 if the input is also 0.

By doing this, the walking gait can be controlled at will by the two inputs, and hopefully make the robot walk and turn. Next comes the fine-tuning and testing!

All the latest code can be found on the Quadbot17 GitHub project page as usual.

 

Body Moving

New video!

I have been testing the movement of the robot’s base in the world, while keeping the legs fixed to the ground, to check the robot’s stability and flexibility.

The robot base can now be controlled, either via the GUI, keyboard or gamepad, in the following ways:

  • Translation in XYZ
  • Roll/pitch/yaw
  • Movement of the two spine joints – Front of robot remains still, while rear adjusts
  • Movement of the two spine joints – Front of robot attempts to counteract the motion of the rear

You may notice that the real robot can’t move its upper leg all the way to horizontal, as the IK might suggest is possible; there is a small clash between the AX-12 and the metal bracket. This should be fixed by filing or bending the curved metal tabs:

Leg Clash Check - CAD-Actual.png


Software updates

I have recently written an OpenCM sketch to control the robot servos, in a way similar to how it was being done with the older Arbotix-M, but this time using the Robotis libraries for communicating with the motors.

I have also been making various updates to the Python test code, with a few of the main changes being:

  • Improved the code for positioning the base and base target in world
  • Updated base/spine transforms – Front legs now move with base, not first spine joint
  • Fixed the leg IK – Legs now remain in line with ground when the base moves
  • Added new keyboard/joystick input modes for controlling base position, base orientation, spine joints
  • Updated the serial string sending function and fixed some minor issues
  • Moved a load of script configuration variables to a separate Params module
  • Added a combo box to the GUI front-end for loading a selection of CSV files (as an update to the previous two fixed buttons)

All the latest code can be found on the Quadbot17 GitHub project page as usual.

Further software and hardware updates

Over the last few weeks I have been thinking about the overall controller architecture, 3D printing some test Raspberry Pi cases, and updating the Python test software.

Plotter updates

Timed operations

After refactoring the giant single-file script from the previous post, I noticed the script was actually running very inefficiently. This was partly down to the fact that the 2D canvas had to redraw the whole robot on every frame, but also because some of the threaded classes (e.g. input handler, serial handler) were inadvertently and constantly interrupting the main thread, as they relied on the root window’s after() method. The timing loops are cleaned up now and the code runs much more efficiently.

Here is an example I figured out of how to create a class that runs a function in a separate thread, with a defined time interval, and is pausable/stoppable. It subclasses the Thread class, and uses an Event object to handle the timing functionality. After creating an object of this class, you call its start() method, which invokes the run() method in a separate thread. The pause(), resume() and stop() methods perform the respective actions. The example is a stripped-down version of an input handler, which applies the equations of motion to user inputs, as shown in a past post.

import threading
from time import time
import math
class InputHandlerExample(threading.Thread):
    def __init__(self):
        # Threading/timing vars
        threading.Thread.__init__(self)
        self.event = threading.Event()
        self.dt = 0.05  # 50 ms
        self.paused = False
        self.prevT = time()
        self.currT = time()
        self.target = 0.0
        self.speed = 0.0

        # Input vars
        self.inputNormalised = 0.0

    def run(self):
        while not self.event.is_set():
            if not self.paused:
                self.pollInputs()
            self.event.wait(self.dt)

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

    def stop(self):
        self.event.set()

    def pollInputs(self):
        # Update current time
        self.currT = time()

        # Get user input somehow here, normalise value
        #self.inputNormalised = getInput()

        # Update target and speed
        self.target, self.speed = self.updateMotion(
            self.inputNormalised, self.target, self.speed)

        # Make use of the returned target and speed values
        #useResults(self.target, self.speed)

        # Update previous time
        self.prevT = self.currT

    def updateMotion(self, i, target, speed):
        m = 1.0
        u0 = speed

        # Force minus linear drag
        inputForceMax = 1000
        dragForceCoef = 5
        F = inputForceMax*i - dragForceCoef*u0
        a = F/m
        t = self.currT - self.prevT

        # Zero t if it's too large
        if t > 0.5:
            t = 0.0
        x0 = target

        # Equations of motion
        u = u0 + a*t
        x = x0 + u0*t + 0.5*a*math.pow(t, 2)

        return x, u

3D plot

So far the 2D representation of the robot has worked OK, but it has its limitations. So I finally decided to put some effort into updating the representation of the robot. I went with matplotlib and its 3D extensions, via the mplot3d toolkit.

The joints are represented using the scatter() function, while the lines are drawn with plot(). The text() function overlays the joint numbers over the joint positions. The leg target widgets use a combination of the above functions. The 3D plot is updated using the FuncAnimation class.

Since this is designed for MATLAB-style plotting rather than simulation, the performance is still somewhat lacking (not as fast as the 2D canvas), but I wanted to get something working fairly quickly and not waste too much time with something like OpenGL, since my aim is to have a test program, not to build a 3D simulator from scratch. There’s always ROS and Gazebo if I need to take this further!

As a side note, I have added the option in the main program to be able to select between 2D or 3D drawing mode, as well as no drawing, which should be useful for speeding up execution when running on the Raspberry Pi in headless mode (no screen).

Here is a screenshot and a video of the results:

Quadbot 17 3D Drawing 002


Canvas redrawing

Another efficiency issue was the fact that previously the canvas was cleared and then redrawn from scratch on every frame update. To improve efficiency, both for the 2D and the 3D cases, the drawing classes now keep track of all element handles, and the redraw functions move the elements to their updated positions. For the 2D case (TkInter canvas), this is done with the coords() method. For the 3D case (matplotlib) it was a bit trickier, but I managed it with a combination of methods (see the sketch after the list):

  • For lines (plot): set_data() and set_3d_properties()
  • For points (scatter): ._offsets3d = juggle_axes()
  • For text overlay (text): set_position() and set_3d_properties()

Updated class diagram

The interaction between the different elements is cleaned up and now looks like this:

QB17QuadKinematics Class Diagram 02


Hardware architecture

Now, to switch over and talk a bit more about the hardware, here is the current plan of how all the components will probably fit together:

Hardware Components Diagram 01

I have in fact got two Raspberry Pis now, a Pi 3 Model B as well as a Pi Zero W. Both now have built-in WiFi, but I am going with the larger Pi 3 as the current choice for main controller, favouring the quad-core CPU over the Zero’s single core.

I tried 3D printing some cases from Thingiverse for each Pi, and here are the results. A great plus point of the Pi 3 case is that it has mounting points which will be easy to locate on the main robot body.

You can find my builds and their sources on Thingiverse:

Software updates

Here is a quick update on current progress with the software:

  • The robot spine can now be fully translated and rotated w.r.t. the world frame. Although I mentioned in the past that this isn’t really needed until the issue of localisation is faced, it is actually useful as a testing method for the IK, as all the legs can be moved w.r.t. the base – imagine e.g. the robot moving from a crouching to a standing position.
  • The spine RPY orientation in space and its two joints are temporarily set up to influence each other, such that they emulate how the robot would behave if we wanted the spine/body orientation to change while keeping the feet flat – assuming the robot is standing on a perfectly flat surface. This should all make sense once I test the body orientation kinematics on the real robot soon!
  • As the controller I have been using does not work wirelessly, it’s not convenient to always have it plugged in, so I added the ability for the test program to read keyboard input as an alternative way of moving the foot target positions. The Python module used is pynput. On the issue of the controller, if anyone knows how to get an XBox One controller’s wireless adaptor to work in Linux and Python, please let me know!
  • The monolithic Python test script was getting a bit out of hand in terms of size and use of globals, so I’ve broken it up into a few separate files and classes with better encapsulation than originally. It’s not perfect or optimised, but it’s a bit more manageable now.

Quadbot 17 Spine XYZ and KB Input

Latest test program, with keyboard input and full spine control.

QB17QuadKinematics01

Simple class diagram of the Python program, after refactoring the original single-file script.


On the hardware side, I have some new controllers to explore various ideas, namely a Robotis OpenCM9.04 and a Raspberry Pi 3. The Raspberry could be mounted to the robot with a 3D-printed case like this.

Magnetic Encoders VS Optical Encoders

Enrique del Sol - Robotics, Mechatronics & Control Research

Encoders, whether rotary or linear, absolute or incremental, typically use one of two measuring principles—optical or magnetic. While optical encoders were, in the past, the primary choice for high resolution applications, improvements in magnetic encoder technology now allow them to achieve resolutions down to one micron, competing with optical technology in many applications. Magnetic technology is also, in many ways, more robust than optical technology, making magnetic encoders a popular choice in industrial environments.

| Parameter | Optical sensor characteristics | Magnetic Hall sensor characteristics |
| --- | --- | --- |
| Principle | coded disc/scale, through-beam arrangement | magnet/tape/polewheel opposed to sensor |
| Incremental accuracy of target | 100 nm – 1 μm (lithography process) | 5 μm – 30 μm (magnetisation process) |
| Energising | by external LED (20 mW) | by target (Br > 220 mT) |
| Signal frequency | > 1 MHz possible | < 50 kHz |
| Benefits | high code density, high code accuracy | robust |
| Disadvantages | sensitive to contamination, high alignment requirements | raw code density, medium code… |


Chassis assembled

After 3D printing a few more plastic parts and cutting all the aluminium plates, the custom chassis is finally complete! Below are some quick notes on the progress over the last weeks.

More 3D printed parts and painting

I printed off some of the remaining parts of the chassis. The battery compartment was best printed upright with minimal support structure needed. The rear bumper was trickier than the front bumper, because of the additional hole space for the battery, so I found it was best to print upright, with a raft and curved supports at the bottom.

Once all the parts were printed and given some more sanding, I spray-painted them all with plastic primer, then blue paint, and finally clear sealer.


Metal parts

I was initially thinking of finding an online service to cut out the aluminium chassis parts, but then decided it would be faster and cheaper to just get some 1.5 mm thick aluminium sheets from eBay and cut them on a jigsaw table.
I used Fusion 360’s drawing tool to export to PDF the parts I needed to cut out: four chassis plates and four foot plates. I then printed them at actual scale and glued them onto the aluminium to use as traces.


Assembly

I threaded the holes on all the 3D printed parts, which were either 3 mm wide where the aluminium plates attach, or 2 mm at the leg and spine bracket attachment points.
Using a tap for the 3 mm holes worked pretty well, but the 2 mm holes were more prone to being stripped or ending up too loose, so manually threading those holes with the bolts worked better. Another issue was the infill surrounding the internal thread cylinder sometimes being a bit too thin. In retrospect, I’d design the 3D parts to use heat-set or expandable inserts, especially for the smaller threads.

The servo brackets attaching to the chassis have a large number of holes (16 for each leg and for one of the spine brackets, and 12 for the other spine bracket), so the screws so far seem to secure the brackets well enough. The spine section is under a lot of stress from the weight of the whole chassis and legs, so it will not be able to withstand much twisting force, and the servos may not be strong enough in this area, but I will have to test this in practice with new walking gaits.


Conclusions

The custom chassis has finally made it from a 3D design to a reality, with relative success so far. Some of the threaded holes on the 3D printed parts are not as strong as I’d like, the AX-12 may be under-powered for the spine connection, and the brackets anchoring the spine may be the first to give way under twisting forces. Also, the chassis as a whole would benefit from a weight-saving exercise and perhaps being thinned down. But this has only been the first iteration of the main chassis, and the robot design has now become a reality and seems to stand up well. The next step will see how it performs some of the basic walking gaits!

 

From paper to plastic

… or more correctly, from CAD to reality, as it is time for 3D printing!

I’ve recently got a new 3D printer in the form of a FlashForge Creator Pro 2017, which means I can start printing some of the structural components for the quadruped now, leaving the decorative pieces for later. In fact, some of them have already been printed, as you can see in the images below.


Chassis parts

All the parts were recently updated slightly from their previous iteration, by adding fillets around the edges and decreasing nut hole diameters by 0.2 mm, in order to provide some material for self-tapping threads. On the other hand, I increased the size of some slots by the same amount, to allow some tolerance in their connection to the interlocking plastic tabs.

The rear section has also been modified: the underside aluminium base will have a tab at 90° that connects to the rear, to provide more rigidity to the central connection with the spine servo bracket.

Here are the CAD models of the chassis parts:

Foot base

Front body assembly

Rear body assembly

 


Printing

All parts were printed in PLA plastic.

The first part I started with was the foot base. I printed it with a 20% honeycomb infill. I didn’t add any intermediate solid layers, but might do so in other parts. I have so far printed two out of the four bases.

Each leg will connect to a leg base bracket, which is the same design for all legs. The part was printed “upside-down” because of the orientation of the interlocking tabs. This meant that some support structure was needed for the holes. For the first print attempt I also added supports around the overhang of the filleted edge, along with a brim, but for the subsequent prints I didn’t bother with these, as the fillet overhang held fine without supports, which saved on extra filing and sanding. These parts also used 20% infill.

For the front and rear “bumpers”, I reduced the infill to 10%.

For the larger part comprising the central section of the front, the spine front bracket, I also used an infill of 10%. Due to the more complicated design, which would have included many overhangs, I found it easier to cut the part lengthwise and print it as two separate pieces. These will be super-glued together after sanding.

Time-lapse GIFs and images of the printing process:

 


The parts so far

In terms of printing times, the foot bases and leg base brackets took about 3 hours each, the bumpers took around 4 hours each, and the two spine front bracket halves took about 7 hours combined, so the total printing time is going to be fairly long!

The 0.2 mm clearance seems to work fine for self-threading the plastic with M2 size metal nuts, but was too large for some of the plastic-to-plastic interlocking tabs, possibly since this tolerance is close to the resolution limits of the printer (theoretically a 0.4 mm nozzle and 0.18 mm layer height). However after some filing and sanding down, all the plastic parts fit together nicely.

The resulting 3D prints before and after sanding:


The assembly so far

Finally, here are some images of how the chassis assembly is shaping up, as well as the foot bases shown attached to the metal foot brackets. These fitted snugly without any sanding, and all the holes aligned perfectly with the metal brackets, which was reassuring!


The next step is to glue the front bracket halves together, and maybe spray paint all the parts, as they lose all their original shine and end up looking very scratched after sanding.

 

Four legs built – New videos and images

With the hardware for all four legs gathered, I have assembled the first standalone version of the quadruped. The MakerBeam XL aluminium profiles were adopted as before, to create a temporary chassis.

The fact that the robot can now stand on its four feet meant I could quickly give the walking gaits a test on the real hardware: The Python test software reads the up/down and forward/back position of each leg for a number of frames that make up a walking gait, the IK is solved, and the resulting joint values are streamed via serial over to the Arbotix-M, which simply updates the servo goal positions. No balancing or tweaking has been done whatsoever yet, which is why in the video I have to hold on to the back of the robot to prevent it from tipping backwards.

I took some time to make a video of the progress so far on this robot project:


A chance for some new photos:

 


Finally, here is an older video showing the Xbox One controller controlling the IK’s foot target position, and a simple serial program on the ArbotiX-M which receives the resulting position commands for each motor (try to overlook the bad video quality):


In the next stage I will start building the robot’s body, as per the CAD design, which is for the most part 3D printed parts and aluminium sheets, combined with the 2 DoF “spine” joints.

Body/spine and improved leg kinematics

The Python test program has been updated to include the additional spine joints, transformation between the robot and world coordinates, and leg targets which take orientation into account.

This test script is used in anticipation of controlling the actual robot’s servos.

 


Spine joints

The “spine” consists of two joints which will allow the front of the robot to pitch and yaw independently of the rear. This will give it more flexibility when turning and handling uneven terrain, as well as other tasks such as aiming its sensors at the world.

Since the spine joints are quite simple, I don’t think there is any need to create IK for this section.

 

Main Assembly v114 Spine

The “spine” joints separate the body of the quadruped between mostly similar front and rear sections.

 


Body and spine orientation

The robot body now takes into account that it has to be oriented w.r.t. the “world”. Physically, this will be achieved using information acquired by an IMU sensor. If the robot is tilted forwards, the targets for the legs will have to be adjusted so that the robot maintains its balance.

I have defined the kinematics in such a way that if the robot were to rotate w.r.t. the world, the whole body rotates (this can be tested by moving the Roll/Pitch/Yaw sliders). However, if the servo joints of the spine are moved (test joint 1 / joint 2 sliders), the rear section of the robot will move w.r.t. the world, and the rear legs will move with it, while the front section won’t change w.r.t. the world.

In order to achieve this, the leg IK had to be updated so that now the base frames of the front legs are linked to the front section of the robot, and the base frames of the rear legs are linked to the rear section.

You might notice that while orientation will be defined by an IMU, pure translation (movement in XYZ) in the world is not considered for now, as it is meaningless without some sort of localisation capability in place. This could be achieved with a sensor (see below), but that is an entirely separate challenge for a long way down the line (hint: SLAM).

Quadbot 17 Quad Kinematics_004

New leg targets: Foot roll/pitch can be attained (within limits). In addition, the robot base can be positioned with respect to an outside world frame.

Quadbot 17 Quad Kinematics_002

Original leg targets: The feet are always pointing vertical to the ground.

 


Target roll and yaw

Initially, the leg target was simply a position in 3D for the foot link, and the foot was always pointing perpendicular to the ground, which made the inverse kinematics fairly easy. In version 2, the target orientation is now also taken into account. More precisely, the pitch and roll can be targeted, but yaw cannot be, simply because of the mechanics of the legs. Yawing, or turning, can be done by changing the walking gait pattern alone, but the idea is that the spine bend will also aid in steering the robot (how exactly I don’t know yet, but that will come later!).

Getting the kinematics to work was a bit trickier than in the original version, mainly because the “pitching” orientation of the leg can only be achieved by the positioning of joint 4, whereas the “rolling” orientation can only be achieved by the positioning of joint 5. The available workspace of the foot is also somewhat limited, in part due to that missing yaw capability. Particularly at positions where the leg has to stretch sideways (laterally), certain roll/pitch combinations are impossible to reach. Nevertheless, this implementation gives the feet enough freedom to be placed on fairly uneven surfaces, rather than being constrained to the previously flat plane.

The next challenge that follows from this, is how can realistic target positions and orientations be generated (beyond predetermined fixed walking gaits), to match what the robot sees of the world?

To answer this, first I need to decide how the robot sees the world: Primarily it will be by means of some 3D scanner, such as the ones I’ve looked into in the past, or maybe the Intel RealSense ZR300 which has recently caught my attention. But this alone might not be sufficient, and some form of contact sensors on the feet may be required.

The big question is, should I get a RealSense sensor for this robot ??? 🙂

 


 

Updated code can be found on GitHub (the single-file test script is starting to get long, might be time to split it up into class files!).

 

 

Second leg assembly and painting

With enough motors and brackets to build a second leg, the hardware build continues! I have spray-painted all the metal brackets to go with an all-blue colour scheme. The Robotis plastic brackets were hard to source online, so I got them printed by Shapeways.

I re-purposed the test rig frame used for the single leg to make a platform for the two legs. It’s made out of MakerBeam XL aluminium profiles which are very easy to change around and customise to any shape. This base will work well until I get the rest of the plastic parts 3D printed and the metal parts cut.

I also had enough parts and motors to assemble the 2-axis “spine”, but the main frame is not yet built, so that part is on the side for now.

Here are a few photos of the build:

In the next post I will concentrate on software updates to the leg and spine kinematics.

Kinematics x4

I have updated the quadruped kinematics program to display all four legs and calculate their IK. I have also started working on the ways various walking gaits can be loaded and executed. I found an interesting creeping walk gait in this useful quadruped robot gait study article, and replicated it below for use with my robot:


Creep_Gait


As you can see, the patterns are replicated in quadrants, in order to complete a full gait where each leg is moved forward once. In my test program, I use the up/down and forward/back position of each leg, to drive the foot target for each leg, as was done previously with the GUI sliders and gamepad.

The lateral swing of each leg (first joint) is not changed, but this can be looked at later. The “ankle” (last two joints) is controlled such that the leg plane is always parallel to the ground.

This is what the current state of the program looks like:


Quadbot 17 Quad Kinematics_002


The foot target values are loaded from a CSV into the Python program, and the IK is run for each leg, going through the whole gait sequence:


testQuadGait


I have also added the option to interrupt and run another gait sequence at any point. The reason for this is to try and experiment how to best switch or blend from one gait to another. For now, if a new gait is loaded, the program will stop the current gait, and compare the current pose to all poses in the new gait. The new gait will then start at the pose which most closely matches the current pose, by using a simple distance metric.
If the 8 values of up/down/fwd/back for all legs are in an array LastPose for the last pose the robot was in, and CurrPose for the current pose of interest from the new gait, then the calculation looks something like this in Python:

DistanceMetric = 0
for i in range(8):
    d = abs(CurrPose[i] - LastPose[i])
    if d > threshold:
        d += penalty
    DistanceMetric += d

If the distance in any particular direction is larger than some threshold, then an arbitrary penalty value can be added. This will bias the calculation against outliers: a pose with evenly distributed distances will be preferred over a pose with an overall small distance but large distance in one direction. This may not actually be of much use in practice, but can be tweaked or removed later.

The above pose switching idea will be expanded on, so that the robot can seamlessly blend between predefined walking gaits, e.g. in order to turn left and right or speed up and down.

The next step is to start porting all this test code onto the ArbotiX, which has a few minor challenges: ideally, the IK matrix operations need to be done efficiently, without extra overhead from additional libraries. The gait values, which are currently loaded from a CSV file, will need to be hard-coded; however, this should be simple to do since, as shown above, a gait uses the same target values rearranged across quadrants.


Until next time!