~ PROJECTS ~

Autonomous Coatings Removal System (ACRS)

Many ground and air vehicle systems require periodic coatings removal. Depending on the chemistry involved, this can be a time-consuming and even hazardous process for the people who perform it. I have worked to alleviate this burden by helping the National Robotics Engineering Center remove coatings from aircraft autonomously as part of the ACRS project. Because of the tight tolerances of the laser delivery system used during coatings removal, accurate placement of the end effector relative to the aircraft is critical, yet an accurate model of the aircraft is often not available. We developed a system that scans the aircraft and autonomously builds a model that can be used for localization and planning. Once the underlying model is constructed, we use it to localize the delivery system with high accuracy relative to the area of the aircraft about to be processed.
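
Once a model of the aircraft exists, localizing against it is at heart a rigid registration problem. As a minimal sketch (assuming point correspondences between the scan and the model are already known, which a real system would establish with something like ICP; all names and data here are illustrative), the closed-form SVD/Kabsch alignment step looks like this:

```python
import numpy as np

def rigid_align(source, target):
    """Find rotation R and translation t minimizing ||R @ p + t - q||
    over corresponding points p in source and q in target.

    source, target: (N, 3) arrays of corresponding points.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: recover a known motion.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
moved = pts @ R_true.T + t_true
R, t = rigid_align(pts, moved)
```

In an ICP-style loop, the correspondence search and this alignment step would simply alternate until convergence.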

Underwater Mapping and Localization

Many structures in the ocean are subject to repeated wave and current loading. Inspecting these structures for fractures is imperative, especially when they are oil rigs and platforms. Autonomous inspection of these rigs can reduce cost and danger to humans while ensuring continued safe operation. During part of my time at Lockheed Martin I helped design localization and mapping algorithms for oil rig inspection. Returns from a 2D sonar were processed to infer larger structures, localizing against the minimal prior models of the rigs that were available. This technique enabled us not only to localize, but also to inspect for small defects, as-built differences, and other changes caused by ocean wash and debris.
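
With the returns registered to a prior model, change detection can be framed as flagging observed points that lie far from the model. A toy 2D sketch with synthetic data (the function name, point sets, and threshold are illustrative, not from the actual system):

```python
import numpy as np

def flag_changes(model_pts, observed_pts, threshold):
    """Return a boolean mask of observed points farther than `threshold`
    from every model point (candidate defects / as-built differences)."""
    # Pairwise distances, shape (num_observed, num_model).
    d = np.linalg.norm(observed_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return d.min(axis=1) > threshold

# A straight structural member in the prior model...
model = np.stack([np.linspace(0.0, 10.0, 101), np.zeros(101)], axis=1)
# ...and localized sonar returns: mostly on-model, plus one displaced point.
obs = np.array([[2.0, 0.02], [5.0, -0.03], [7.5, 0.6]])
mask = flag_changes(model, obs, threshold=0.2)
```

A real system would use a spatial index rather than the brute-force distance matrix, but the decision rule is the same.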

Video Aided Navigation

Computer vision combines cameras and computing to build an understanding of the immediate surroundings. Recently I have had the opportunity to apply a visual odometry approach to both ground and air platforms carrying camera systems in order to provide this additional environmental understanding. We use a feature point tracker to track points in both color and IR imagery. From these point correspondences we automatically fit camera motion models that agree with the movement of the points in the image. From the fitted motion models we extract the pose (position and orientation) of the camera system relative to the surroundings and track that pose over time. This odometry is then fed into navigation software that performs various path planning tasks, giving the robotic vehicle an understanding of its surroundings and its position in the world.
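
As a concrete illustration of fitting a motion model to point correspondences, here is a minimal sketch of the classical linear 8-point estimate of the essential matrix on synthetic, calibrated correspondences (a textbook building block, not the project's actual pipeline; all data below is made up):

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Linear 8-point estimate of the essential matrix E with x2^T E x1 = 0.

    x1, x2: (N, 2) matched points in normalized (calibrated) image coords.
    """
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    ones = np.ones(len(x1))
    # Each row encodes one epipolar constraint x2^T E x1 = 0 (E row-major).
    A = np.stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, ones], axis=1)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic check: 3D points seen from two poses, X2 = R X + t.
rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))   # points in front of both cameras
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])   # small yaw
t = np.array([0.5, 0.1, 0.0])
X2 = X @ R.T + t
x1 = X[:, :2] / X[:, 2:3]
x2 = X2[:, :2] / X2[:, 2:3]
E = essential_from_correspondences(x1, x2)
# Residual epipolar error over all matches should be near zero.
h1 = np.hstack([x1, np.ones((20, 1))])
h2 = np.hstack([x2, np.ones((20, 1))])
err = np.abs(np.sum(h2 * (h1 @ E.T), axis=1)).max()
```

In practice this estimate would be wrapped in RANSAC to reject bad tracks, and the incremental pose (R, and t up to scale) recovered by decomposing E.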

3D from Video

One thing missing from the description above is any kind of obstacle identification or avoidance. Using poses found in the manner described above, 3D models of a scene can be generated from nothing but a video camera. The work here focused on indoor applications but is generally applicable to anything you can point a camera at. We take a video feed from a camera at arbitrary locations looking at a room, or at objects in that room, and generate a 3D model of the room and/or objects. The project is challenging for two reasons. First, many existing systems require the user to designate corresponding points between images by hand; we automate this with feature tracking methods and video similar to those described above. Second, systems that rely on a calibrated stereo rig with a fixed baseline cannot be used, since we have only a common digital video camera. Our approach is to track features while computing the relative motion of the camera across all the images. Once poses are known for a set of frames in the video, we use a variable temporal baseline to perform dense stereo reconstruction and recover a 3D model of the scene.
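
Given poses for any two frames in the temporal baseline, each tracked feature can be triangulated. A minimal linear (DLT) triangulation sketch with synthetic poses and a synthetic point (not the project's data):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (2,) image points.
    """
    # Each image observation contributes two linear constraints on X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Synthetic check: a point seen by two cameras with known poses.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated camera

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Repeating this over dense per-pixel matches, with frame pairs chosen so the effective baseline suits each region's depth, is the essence of the variable temporal baseline reconstruction.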

Augmented Reality

Augmented reality is the application of real-time computer graphics to present information to a user about an environment. For example, a user performing maintenance on a manufacturing machine may benefit from detailed schematics or procedural overlays that guide the maintenance operation. Another application is informative overlays for remote robotics operation.

I have done some work in augmented reality using the OpenCV toolkit from Intel along with OpenGL, the ARToolKit from the University of Washington, and ARTag from the NRC Institute for Information Technology to develop a head mounted display with which a user can perform simple maintenance tasks. I have also given a presentation on augmented reality; it consists mostly of work that is not mine, but may be of interest as a general overview of the field. Most of the information in the presentation came from, but is not limited to, the following list.
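
For planar overlay content, marker-based AR of the kind ARToolKit and ARTag support reduces to estimating the homography between the marker and its detected corners in the camera image. A minimal DLT sketch, assuming the four corners have already been detected (all coordinates here are synthetic):

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src.

    src, dst: (N >= 4, 2) corresponding points, e.g. marker corners in
    the overlay image and as detected in the camera frame.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u*x, u*y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    q = H @ np.append(p, 1.0)
    return q[:2] / q[2]

# Unit square mapped onto a skewed quadrilateral (a "detected marker").
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = np.array([[10.0, 12.0], [52.0, 15.0], [48.0, 60.0], [8.0, 55.0]])
H = homography_from_points(src, dst)
corner = apply_h(H, src[0])
```

Warping the overlay graphic through H pins it to the marker in each video frame; full 6-DOF pose for 3D overlays additionally requires the camera intrinsics.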


Computed Tomography from Video Fluoroscopy

My master's thesis work was in the field of computed tomography (CT), where we developed a method of performing a CT scan using data from a fluoroscopy machine that had neither controlled motion nor external sensors to determine its motion. This is particularly significant for the medical field, since CT requires knowledge of the acquisition path for accurate 3D volumetric reconstruction. Until this point, that information could only be obtained using external sensors and/or precisely controlled motion.

Our method combines a computer vision technique, which uses projective geometry to determine the pose of the camera, with the exact cone beam reconstruction method described by Grangeat, and produces satisfactory reconstruction results. We determine the pose for each frame in the video sequence on-line from the image data, and then perform the reconstruction from the resulting arbitrary path.
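
Grangeat's exact cone-beam method is well beyond a short example, but the core tomographic idea (smearing each measured projection back along its known acquisition geometry) can be sketched with a toy, unfiltered parallel-beam back-projection of a single point object. All parameters below are synthetic; the thesis work itself used cone-beam geometry:

```python
import numpy as np

# Toy parallel-beam setup: a single point object at (x0, y0) in [-1, 1]^2.
x0, y0 = 0.25, -0.25
n_angles, n_det = 180, 129
thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)

def det_bin(s):
    """Map detector coordinate(s) s in [-1, 1] to detector bin indices."""
    return np.clip(np.round((s + 1.0) / 2.0 * (n_det - 1)).astype(int), 0, n_det - 1)

# Forward model: at angle theta the point projects to s = x0*cos + y0*sin.
sino = np.zeros((n_angles, n_det))
for i, th in enumerate(thetas):
    sino[i, det_bin(x0 * np.cos(th) + y0 * np.sin(th))] = 1.0

# Unfiltered back-projection: smear each projection back across the grid.
grid = np.linspace(-1.0, 1.0, 65)
X, Y = np.meshgrid(grid, grid, indexing="xy")
recon = np.zeros_like(X)
for i, th in enumerate(thetas):
    recon += sino[i, det_bin(X * np.cos(th) + Y * np.sin(th))]

# The accumulated reconstruction peaks at the point's true location.
iy, ix = np.unravel_index(np.argmax(recon), recon.shape)
x_hat, y_hat = grid[ix], grid[iy]
```

A practical reconstruction also filters the projections before back-projecting (filtered back-projection) to remove the characteristic 1/r blur, and in our case the per-frame path came from the image-based pose estimates rather than a known gantry trajectory.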

If you would like a soft copy of my thesis, please send an email with a short note about your interest in the field and I will be happy to send you a copy.

GK12 Outreach - Adventure Engineering

As part of an outreach program at the Colorado School of Mines, I have worked with 6th, 7th, and 8th graders in the Denver Public School system to help them better understand what engineering is and how it applies to everyday life. One of our goals is to show students at a young age why it is important to do well in their math and science classes, and to help them realize that applying math and science (in other words, engineering) is a fun and exciting career they may wish to pursue.

As part of this project we develop one- to two-week units that we bring into classrooms and help the teachers implement with their students. These units are designed to excite students through adventure-based scenarios that they must solve using engineering, math, and science skills. The units are also aligned with the Colorado State Education Standards so they fit seamlessly into the teachers' regular curricula. We are also gathering data on the students' progress through our program so that we can be more effective in the future, and so that we can publish our results for the benefit of other, similar outreach programs.

Within the project we joined with Smiley Middle School in Denver County to enter one hydrogen fuel cell car and one solar car in 2004, and one hydrogen fuel cell car and two solar cars in 2005, in the Junior Solar Sprint put on by NREL.