Assistive Robotics Lab at UCF

Code
  • Head Tracking
    • Status: Complete

      People:
      Ryan Lovelett

      Description:
      The goal of this project was to develop a new input device for disabled users who have good head control but poor limb control (e.g., a C3 spinal cord injury). The head-tracking device uses infrared sensing for motion detection. Two modes can be used to position the mouse cursor on screen:

      - Point Mode, which requires only one reflector to be worn. Clicking is achieved through a dwell function (a minimal dwell-click sketch follows this list).
      - Vector Mode, which requires three reflectors. The user yaws and pitches their head to move the cursor laterally and vertically; clicking is achieved by rolling the head.

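      The sketch below illustrates the dwell-click idea from Point Mode in C++. It is illustrative only: the types, names, and thresholds (radius, dwell time) are assumptions, not the interface of the lab's head-tracking code.

      // Minimal dwell-click sketch (illustrative; DwellDetector and its
      // parameters are hypothetical). If the cursor stays within a small
      // radius for long enough, report a click.
      #include <chrono>
      #include <cmath>

      struct CursorSample {
          double x, y;                                // screen coordinates (pixels)
          std::chrono::steady_clock::time_point when; // time the sample was taken
      };

      class DwellDetector {
      public:
          DwellDetector(double radiusPx, std::chrono::milliseconds dwellTime)
              : radiusPx_(radiusPx), dwellTime_(dwellTime) {}

          // Feed each new cursor position; returns true when a dwell click fires.
          bool update(const CursorSample& s) {
              if (!haveAnchor_ || distance(s, anchor_) > radiusPx_) {
                  anchor_ = s;          // cursor moved: restart the dwell timer
                  haveAnchor_ = true;
                  return false;
              }
              if (s.when - anchor_.when >= dwellTime_) {
                  haveAnchor_ = false;  // require fresh movement before the next click
                  return true;          // cursor held still long enough: click
              }
              return false;
          }

      private:
          static double distance(const CursorSample& a, const CursorSample& b) {
              return std::hypot(a.x - b.x, a.y - b.y);
          }
          double radiusPx_;
          std::chrono::milliseconds dwellTime_;
          CursorSample anchor_{};
          bool haveAnchor_ = false;
      };

      In use, each new cursor position from the tracker would be passed to update(), and a click would be issued whenever it returns true.
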
      This code is written to interface with the NaturalPoint OptiTrack API; currently it supports only the TrackIR3 Pro series of cameras. It allows a user to control the mouse using only head movement.

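      As an aside, the sketch below shows one common way to move the cursor and issue a click on Windows (the platform the TrackIR cameras target) using the Win32 API. This is an assumption for illustration only; the project's own mouse-injection code is not described here.

      // Illustrative Win32 calls for cursor movement and clicking
      // (not necessarily how the project does it).
      #include <windows.h>

      // Move the cursor to a normalized position (nx, ny) in [0, 1] x [0, 1].
      void moveCursorNormalized(double nx, double ny) {
          int w = GetSystemMetrics(SM_CXSCREEN);
          int h = GetSystemMetrics(SM_CYSCREEN);
          SetCursorPos(static_cast<int>(nx * (w - 1)),
                       static_cast<int>(ny * (h - 1)));
      }

      // Synthesize a left-button press and release.
      void leftClick() {
          INPUT in[2] = {};
          in[0].type = INPUT_MOUSE;
          in[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
          in[1].type = INPUT_MOUSE;
          in[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
          SendInput(2, in, sizeof(INPUT));
      }
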
      Download:
      The source code and instructions can be downloaded by clicking here.

  • Optimized Ferns Based Tracking [1]
    • Status: In Progress

      People:
      Ryan Lovelett

      Description:
      The goal of this project was to achieve feature detection and tracking as close to real time as possible. We use it in our lab for object tracking and for positioning and orienting the Manus ARM. The modified source code provided here was originally written by the Computer Vision Lab, Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. The research paper that accompanies their source code is:

      [1] M. Özuysal, P. Fua, and V. Lepetit, "Fast Keypoint Recognition in Ten Lines of Code", IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, 2007.

      Significant changes have been made to their freely distributed demo code to adapt it for our purposes. Modifications include:

      - Use of OpenMP to parallelize training and other tasks, significantly reducing execution times (see the sketch after this list).
      - Speed improvements from optimizing, or removing, unneeded loops.
      - Removal of memory leaks.
      - Allowing the code to compile and run on Windows.
      - Resolution of all compiler warnings and messages.

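      As an illustration of the OpenMP change listed above, the sketch below parallelizes a ferns-style per-keypoint classification loop. This is not the EPFL or lab source; the data layout is a simplified assumption (posteriors[f] holds per-leaf, per-class scores for fern f, and fernLeaves[k][f] is the leaf that keypoint k fell into).

      // Illustrative OpenMP parallelization of independent keypoint evaluations.
      #include <omp.h>
      #include <cstddef>
      #include <vector>

      int classifyKeypoint(const std::vector<std::vector<float>>& posteriors,
                           const std::vector<int>& leaves, int numClasses) {
          std::vector<float> score(numClasses, 0.0f);
          // Accumulate per-class scores over all ferns for this keypoint.
          for (std::size_t f = 0; f < posteriors.size(); ++f)
              for (int c = 0; c < numClasses; ++c)
                  score[c] += posteriors[f][leaves[f] * numClasses + c];
          int best = 0;
          for (int c = 1; c < numClasses; ++c)
              if (score[c] > score[best]) best = c;
          return best;
      }

      void classifyAll(const std::vector<std::vector<float>>& posteriors,
                       const std::vector<std::vector<int>>& fernLeaves,
                       int numClasses, std::vector<int>& bestClass) {
          bestClass.resize(fernLeaves.size());
          // Keypoints are independent, so their evaluations can be spread
          // across all available cores.
          #pragma omp parallel for schedule(dynamic)
          for (int k = 0; k < static_cast<int>(fernLeaves.size()); ++k)
              bestClass[k] = classifyKeypoint(posteriors, fernLeaves[k], numClasses);
      }

      A dynamic schedule is used because per-keypoint work can vary, and the loop carries no dependencies between iterations.
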
      The demo has also been modified to display the model and tracking images in the same frame, along with the matching points between the model and the input image.

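      The sketch below shows the same idea using OpenCV's drawMatches, which composes two images in one canvas and draws a line per correspondence. The demo itself uses its own rendering path; this is only an analogous illustration.

      // Illustrative side-by-side display of model and input images with match lines.
      #include <opencv2/core.hpp>
      #include <opencv2/features2d.hpp>
      #include <opencv2/highgui.hpp>
      #include <vector>

      void showMatches(const cv::Mat& modelImage,
                       const std::vector<cv::KeyPoint>& modelKeypoints,
                       const cv::Mat& inputImage,
                       const std::vector<cv::KeyPoint>& inputKeypoints,
                       const std::vector<cv::DMatch>& matches) {
          cv::Mat canvas;
          // Places both images in one frame and draws one line per match.
          cv::drawMatches(modelImage, modelKeypoints,
                          inputImage, inputKeypoints,
                          matches, canvas);
          cv::imshow("model + input (matches)", canvas);
          cv::waitKey(1);
      }
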
      Download:
      The source code and instructions can be downloaded by clicking here.
  • C/C++ Implementation of SRVT
    • Status: Complete

      People:
      Marcos Hernandez
      Ryan Lovelett

      Description:
      The goal of this project was to allow us to search quickly through a large database of grasping template images, which are used for object recognition as well as pose detection. The demos provided in the downloadable software illustrate three basic functions of the classes: creating a vocabulary tree from keyfiles, adding objects to the database, and querying objects from the database. The code is based on a paper by David Nister and Henrik Stewenius of the Center for Visualization and Virtual Environments, Department of Computer Science, University of Kentucky. The paper can be found here.

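      As a rough illustration of the database operations described above (adding objects and querying), the sketch below implements a simplified TF-IDF scoring scheme over visual words with an inverted file, in the spirit of the vocabulary-tree paper. It is not the interface of the downloadable code, and it assumes descriptors have already been quantized by the tree into integer visual-word IDs.

      // Simplified word-database sketch: add objects, then score queries.
      #include <cmath>
      #include <map>
      #include <vector>

      class WordDatabase {
      public:
          // Add one object, represented by the visual-word IDs of its features;
          // returns the object's database id.
          int addObject(const std::vector<int>& words) {
              int id = static_cast<int>(objects_.size());
              std::map<int, float> histogram;
              for (int w : words) histogram[w] += 1.0f;
              objects_.push_back(histogram);
              for (const auto& entry : histogram)
                  inverted_[entry.first].push_back(id);
              return id;
          }

          // Score every stored object against a query (higher is better).
          std::vector<float> query(const std::vector<int>& words) const {
              std::map<int, float> queryHist;
              for (int w : words) queryHist[w] += 1.0f;
              std::vector<float> scores(objects_.size(), 0.0f);
              for (const auto& entry : queryHist) {
                  auto it = inverted_.find(entry.first);
                  if (it == inverted_.end()) continue;
                  // IDF weight: rare visual words are more discriminative.
                  float idf = std::log(static_cast<float>(objects_.size()) /
                                       static_cast<float>(it->second.size()));
                  for (int id : it->second)
                      scores[id] += entry.second * objects_[id].at(entry.first)
                                    * idf * idf;
              }
              return scores;
          }

      private:
          std::vector<std::map<int, float>> objects_;  // per-object word histograms
          std::map<int, std::vector<int>> inverted_;   // word id -> objects containing it
      };

      The paper additionally normalizes the word histograms before scoring; that refinement is omitted here for brevity.
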
      Download:
      The source code and instructions can be downloaded by clicking here.
Disclaimer: All code is provided for research use only. No guarantees are expressed or implied.