Robotic Testbed Work
I help maintain the robots and sensing hardware in my lab, so that everyone can use them to demonstrate their research algorithms. I’ve been especially involved with the WAM (Whole Arm Manipulator) arms in our labs, as well as with our Vicon motion capture system. I’m also very familiar with our Summit-X mobile bases (which, together with the WAMs, form the X-WAM mobile robot) and with our Baxter robot.
Pick & Place Engine
I developed a robot-agnostic pick-and-place engine that builds on the excellent foundation provided by the ROS MoveIt! project. Grasps are specified through a simple training process: the user drags the robot to acceptable grasp positions above an object, and the engine records them. The robot can be taught acceptable stacking configurations in the same way.
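To give a feel for the training step, here is a minimal, hypothetical sketch of the idea: while the user drags the gravity-compensated arm above an object, we record the end-effector pose relative to the object’s frame. The frame names and file layout below are placeholders for illustration, not our actual setup.

```python
#!/usr/bin/env python
# Hypothetical sketch of grasp training: record the end-effector pose
# relative to the object's frame each time the user confirms a pose.
# 'object_frame' and 'wam/wrist_palm_link' are placeholder frame names.
import rospy
import tf
import yaml

try:
    read_line = raw_input  # Python 2 (older ROS distros)
except NameError:
    read_line = input      # Python 3

rospy.init_node('grasp_trainer')
listener = tf.TransformListener()

grasps = []
while not rospy.is_shutdown():
    read_line('Drag the arm to a grasp pose, then press Enter...')
    listener.waitForTransform('object_frame', 'wam/wrist_palm_link',
                              rospy.Time(0), rospy.Duration(4.0))
    trans, rot = listener.lookupTransform(
        'object_frame', 'wam/wrist_palm_link', rospy.Time(0))
    grasps.append({'position': list(trans), 'orientation': list(rot)})
    if read_line('Record another grasp? [y/N] ').strip().lower() != 'y':
        break

# Persist the taught grasps so the engine can replay them later.
with open('grasps.yaml', 'w') as f:
    yaml.dump(grasps, f)
```

Storing grasps relative to the object’s frame is what keeps this kind of engine robot-agnostic: the same taught grasp can be transformed into whatever planning frame the current robot uses.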
Here is a video of the full system running on our WAM arm, integrated with voice commands via Alexa:
WAM Joint Controller
I wrote a ROS MoveIt! integrated joint controller for our WAM arm. It accepts joint trajectories, publishes joint state messages, and provides other useful services, effectively integrating our WAM with the rest of the ROS MoveIt! ecosystem. The code is built on the libbarrett API.
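In ROS terms, the controller boils down to a trajectory-following action server plus a joint state publisher. Here is a minimal Python sketch of that ROS-facing interface; the real controller is C++ on top of libbarrett, and the topic, action, and joint names below are placeholders.

```python
#!/usr/bin/env python
# Minimal sketch of the controller's ROS-facing interface. The real
# controller is written in C++ against libbarrett; names are placeholders.
import rospy
import actionlib
from sensor_msgs.msg import JointState
from control_msgs.msg import (FollowJointTrajectoryAction,
                              FollowJointTrajectoryResult)

JOINT_NAMES = ['wam/j%d' % i for i in range(1, 8)]

def execute(goal):
    # Stream each trajectory point to the arm at its requested time.
    start = rospy.Time.now()
    for point in goal.trajectory.points:
        dt = (start + point.time_from_start - rospy.Time.now()).to_sec()
        if dt > 0:
            rospy.sleep(dt)
        send_to_arm(point.positions)  # placeholder for the libbarrett call
    server.set_succeeded(FollowJointTrajectoryResult())

def send_to_arm(positions):
    pass  # the hardware write (via libbarrett) would go here

rospy.init_node('wam_joint_controller')
server = actionlib.SimpleActionServer(
    'wam/follow_joint_trajectory', FollowJointTrajectoryAction,
    execute_cb=execute, auto_start=False)
server.start()

# Publish joint states so MoveIt! and tf know where the arm is.
state_pub = rospy.Publisher('joint_states', JointState, queue_size=1)
rate = rospy.Rate(100)
while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = JOINT_NAMES
    msg.position = [0.0] * 7  # placeholder: read actual positions from the arm
    state_pub.publish(msg)
    rate.sleep()
```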
Vicon Calibration Scripts
I developed a series of ROS-based helper tools for calibrating our lab’s Vicon system.
When placing reflective markers on our robots and objects, it can be challenging to find the precise offset between the arbitrary origin Vicon assigns to a tracked object and the physical object’s true origin. My tools and scripts help with exactly this.
The first method calibrates objects: by moving an object around and performing some rotations with it, we can calculate the object’s origin (see the sketch below).
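One way to set up this computation, assuming the object is pivoted about its physical origin (a minimal sketch of the idea; the exact implementation differs): the physical origin is the one point whose world position stays fixed across samples, which yields a linear least-squares problem.

```python
# Each Vicon sample gives the pose (R_i, t_i) of the arbitrary Vicon frame.
# If the object pivots about its physical origin, then the origin p (in the
# Vicon frame) maps to a fixed world point c in every sample:
#     R_i p + t_i = c  for all i
# Stacking these constraints gives a linear least-squares problem in (p, c).
import numpy as np

def find_pivot(rotations, translations):
    """rotations: list of 3x3 arrays; translations: list of 3-vectors."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3*i:3*i+3, 0:3] = R           # coefficient of p
        A[3*i:3*i+3, 3:6] = -np.eye(3)  # coefficient of c
        b[3*i:3*i+3] = -t
    x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # p: offset in the Vicon frame; c: world pivot point
```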
A second technique calibrates the base frame of a robot. By placing markers on, for example, a robot’s base and elbow, and moving the robot through various poses, we can use kinematics to frame a non-linear optimization problem which, when solved, aligns the Vicon object’s origin with the robot’s physical base frame (sketched below).
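Here is a hedged sketch of how such an optimization can be posed with SciPy; the sample format, the `fk` function, and the six-parameter transform are assumptions for illustration, not our exact formulation.

```python
# Sketch of the base-frame calibration as non-linear least squares.
# Assumptions: the Vicon base object is rigid with the robot's base; fk(q)
# returns the elbow-marker position in the robot's physical base frame; each
# sample holds the Vicon base pose (R_b, t_b), the measured elbow-marker
# world position m, and the joint angles q.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, samples, fk):
    # x = [rotation vector (3), translation (3)] of the unknown transform
    # from the Vicon base-object frame to the physical base frame.
    R_x = Rotation.from_rotvec(x[:3]).as_matrix()
    t_x = x[3:]
    res = []
    for R_b, t_b, m, q in samples:
        predicted = R_b.dot(R_x.dot(fk(q)) + t_x) + t_b
        res.extend(predicted - m)
    return np.array(res)

def calibrate_base(samples, fk):
    sol = least_squares(residuals, np.zeros(6), args=(samples, fk))
    return sol.x  # Vicon-object-to-base transform (rotvec + translation)
```

With enough distinct poses, the six transform parameters are well constrained, and the residuals give a useful sanity check on marker placement.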
A nice property of both techniques is that they are “origin agnostic”; i.e., if we re-calibrate our Vicon system and its origin changes, everything still works!
Here is a picture of our testbed, calibrated using these two techniques:
TPN Viewer
The TPN, or Temporal Planning Network, is a method for representing temporally flexible, contingent plans. Various people in the lab, including myself, use them.
We represent TPNs as .xml files, which can be hard to debug by hand, so I developed a web-based visualizer to help. It uses D3 and dagre-d3.js to allow interactive inspection of a TPN.
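The core of the viewer is just flattening the XML into node and edge lists that dagre-d3 can lay out. Here is an illustrative Python sketch of that conversion; our actual TPN schema is internal, so the element and attribute names below (`event`, `episode`, etc.) are placeholders.

```python
# Illustrative sketch: flatten a TPN .xml file into the node/edge lists a
# dagre-d3 front end can consume. Element and attribute names are guesses.
import json
import xml.etree.ElementTree as ET

def tpn_to_graph(path):
    root = ET.parse(path).getroot()
    nodes = [{'id': e.get('id'), 'label': e.get('name', e.get('id'))}
             for e in root.iter('event')]
    edges = [{'source': ep.get('from'), 'target': ep.get('to'),
              'label': '[%s, %s]' % (ep.get('lb', '0'), ep.get('ub', 'inf'))}
             for ep in root.iter('episode')]
    return {'nodes': nodes, 'edges': edges}

if __name__ == '__main__':
    import sys
    print(json.dumps(tpn_to_graph(sys.argv[1]), indent=2))
```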
PDDL State Viewer
Similar to the TPN viewer above, I also developed a PDDL state viewer, which is especially useful for debugging our integrated robotic demos. It visualizes the output of a state estimation system by showing the facts about the world that are measured (e.g., by cameras) to be true.
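To make the input concrete, here is a tiny sketch of parsing such grounded facts; the text format shown is illustrative, not our exact message format.

```python
# Illustrative sketch: parse a state estimator's output, given as grounded
# PDDL facts like "(on block-a block-b)", into (predicate, args) tuples.
import re

FACT_RE = re.compile(r'\(([^()]+)\)')

def parse_facts(text):
    facts = []
    for match in FACT_RE.finditer(text):
        tokens = match.group(1).split()
        facts.append((tokens[0], tokens[1:]))
    return facts

state = parse_facts('(on block-a block-b) (clear block-a) (on-table block-b)')
# -> [('on', ['block-a', 'block-b']), ('clear', ['block-a']),
#     ('on-table', ['block-b'])]
```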
In the demo below, the state viewer is running and visualizing a blocks-world-like domain.