cs4630-prj
Last updated at 1:08 pm UTC on 16 January 2006

CS4630 Cye Robot Final Project Full Writeup


Java Path and Behaviour Controller: pathfinder.tar.gz
Beware! My code has not been fully commented yet!!

Visual C++ Vision processor: videomunger.1.01.tar.gz (/squeak/uploads/videomunger.1.01.tar.gz, file missing)

An older project writeup is available at my partner Hasnain Mandviwala's page.

Dean Mao's older project writeup is right below the Full Writeup.

Last modified: Monday, May 08, 2000

Program functionality description:


The first two screenshots show the cye pathfinder controller in action. In this footage you can see a bird's-eye view of the floor taken from the camera. This picture is currently static, but there is already functionality to receive an RGB image from a socket.
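For illustration, here is roughly what reading such a frame might look like. The wire format here, width and height as ints followed by packed RGB bytes, is an assumption, not the project's actual protocol:

  import java.io.DataInputStream;
  import java.net.Socket;

  // Hypothetical reader for a raw RGB frame sent over a TCP socket.
  // Assumes the sender writes width and height as 4-byte ints, followed
  // by width*height*3 bytes of packed RGB data; the real wire format
  // used by the project may differ.
  public class FrameReader {
      public static int[][] readFrame(Socket socket) throws Exception {
          DataInputStream in = new DataInputStream(socket.getInputStream());
          int width  = in.readInt();
          int height = in.readInt();
          byte[] raw = new byte[width * height * 3];
          in.readFully(raw);                 // block until the whole frame arrives

          int[][] pixels = new int[height][width];
          for (int y = 0; y < height; y++) {
              for (int x = 0; x < width; x++) {
                  int i = (y * width + x) * 3;
                  int r = raw[i]     & 0xFF;
                  int g = raw[i + 1] & 0xFF;
                  int b = raw[i + 2] & 0xFF;
                  pixels[y][x] = (r << 16) | (g << 8) | b;   // pack as 0xRRGGBB
              }
          }
          return pixels;
      }
  }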

The picture below is the actual footage from the camera. Using the mouse, the user can block out areas where they expect obstacles; the darkly shaded areas on the image represent obstacles. Clicking the "Get Cye" button returns the current location of the cye to the program, shown with the green shading. Clicking the "Get Goal" button returns the cye's goal location, shown with the red shading.
Uploaded Image: pathfinder1.jpg

When the user clicks "Go", the program uses the classic A* heuristic search algorithm to calculate the shortest path from the starting position to the goal position. The path can be seen as the yellow shading in the picture. The coordinates are converted into cye coordinates and sent to the cye one by one until the cye reaches the goal position.
Uploaded Image: pathfinder2.jpg
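For the curious, here is a rough sketch of what that grid search looks like in Java. The class and variable names are illustrative, not taken from the actual pathfinder source; it assumes 4-connected cells and a Manhattan-distance heuristic:

  import java.util.*;

  // Illustrative A* grid search: 4-connected cells, Manhattan heuristic.
  public class AStar {
      static final int[][] DIRS = {{1,0},{-1,0},{0,1},{0,-1}};

      // Returns the cells on a shortest path from (sx,sy) to (gx,gy), or an
      // empty list if the goal is unreachable. free[y][x] is true for cells
      // the user has not marked as obstacles.
      public static List<int[]> findPath(boolean[][] free,
                                         int sx, int sy, int gx, int gy) {
          int h = free.length, w = free[0].length;
          int[][] g = new int[h][w];
          for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
          int[][][] parent = new int[h][w][];

          // open set ordered by f = g + Manhattan distance to the goal
          PriorityQueue<int[]> open =
              new PriorityQueue<>(Comparator.comparingInt((int[] n) -> n[2]));
          g[sy][sx] = 0;
          open.add(new int[]{sx, sy, Math.abs(gx - sx) + Math.abs(gy - sy)});

          while (!open.isEmpty()) {
              int[] cur = open.poll();
              int x = cur[0], y = cur[1];
              if (x == gx && y == gy) break;     // goal reached
              for (int[] d : DIRS) {
                  int nx = x + d[0], ny = y + d[1];
                  if (nx < 0 || ny < 0 || nx >= w || ny >= h || !free[ny][nx])
                      continue;
                  int ng = g[y][x] + 1;
                  if (ng < g[ny][nx]) {          // found a cheaper route
                      g[ny][nx] = ng;
                      parent[ny][nx] = new int[]{x, y};
                      open.add(new int[]{nx, ny,
                          ng + Math.abs(gx - nx) + Math.abs(gy - ny)});
                  }
              }
          }

          // walk parent links back from the goal to recover the path
          List<int[]> path = new ArrayList<>();
          if (g[gy][gx] == Integer.MAX_VALUE) return path;
          for (int[] p = new int[]{gx, gy}; p != null; p = parent[p[1]][p[0]])
              path.add(0, p);
          return path;
      }
  }

The yellow path in the screenshot corresponds to the list of cells this kind of search returns.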

The cye is controlled by a multithreaded server, so other users can connect and control the cye at the same time. The other users could potentially be other cameras in the Aware Home. When a camera detects the cye, it would connect to the Cye Pathfinder to determine path locations. Path start and goal locations can also be sent from another user entirely. Currently these operations are done manually, but they could be done automatically without any code changes.
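A skeleton of that one-thread-per-connection pattern might look like the following; the port number and command format are made up for illustration, not taken from the real Cye Pathfinder code:

  import java.io.*;
  import java.net.*;

  // Skeleton of a one-thread-per-connection control server. Each client
  // (another camera, or a user) gets its own thread, so commands from one
  // connection never block the others. Port and protocol are illustrative.
  public class CyeServer {
      public static void main(String[] args) throws IOException {
          try (ServerSocket server = new ServerSocket(4630)) {
              while (true) {
                  Socket client = server.accept();        // wait for a connection
                  new Thread(() -> handle(client)).start();
              }
          }
      }

      static void handle(Socket client) {
          try (BufferedReader in = new BufferedReader(
                   new InputStreamReader(client.getInputStream()))) {
              String line;
              while ((line = in.readLine()) != null) {
                  // e.g. "GOAL 120 85" or "START 10 40"; parsing is left out
                  System.out.println("command from " + client.getInetAddress()
                                     + ": " + line);
              }
          } catch (IOException e) {
              System.err.println("client dropped: " + e.getMessage());
          }
      }
  }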

In the footage below, we can see live footage of the cye, currently in its starting position. In the picture from the vision code, the red and green lines mark the cye's position, found using color blob detection. The cyan lines are boundaries for template matching. Sending the camera picture across the socket to the Cye Pathfinder is planned but has not been implemented yet. The white squares in the picture are treated as obstacles.
Uploaded Image: start.jpg
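The blob detection idea itself can be sketched quickly: average the coordinates of all pixels near a target color to get that marker's centroid. This is only an illustration, in Java to match the rest of the examples here, not the actual Visual C++ vision code; the threshold and 0xRRGGBB pixel layout are assumptions:

  // Simplified color-blob locator: returns the centroid of all pixels
  // within a rough distance of a target color, or null if none match.
  public class BlobFinder {
      public static double[] centroid(int[][] pixels, int targetRgb, int threshold) {
          long sumX = 0, sumY = 0, count = 0;
          int tr = (targetRgb >> 16) & 0xFF,
              tg = (targetRgb >> 8)  & 0xFF,
              tb =  targetRgb        & 0xFF;
          for (int y = 0; y < pixels.length; y++) {
              for (int x = 0; x < pixels[y].length; x++) {
                  int p = pixels[y][x];
                  int dr = ((p >> 16) & 0xFF) - tr;
                  int dg = ((p >> 8)  & 0xFF) - tg;
                  int db = ( p        & 0xFF) - tb;
                  // squared color distance keeps the test cheap (no sqrt)
                  if (dr*dr + dg*dg + db*db < threshold * threshold) {
                      sumX += x; sumY += y; count++;
                  }
              }
          }
          if (count == 0) return null;
          return new double[]{ (double) sumX / count, (double) sumY / count };
      }
  }

With centroids for both the red and green markers, the cye's heading falls out as the angle of the vector between them.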

From the picture below, we can see the cye moving towards the goal position.
Uploaded Image: middle.jpg

The final picture shows the cye reaching the final goal position. Calibration is still needed so that the cye will actually touch the final destination; currently we did some rough manual calibration between the cye pathfinder and the actual footage. If calibrated correctly, the cye should be able to stop on top of the goal position. Right now it only comes within a short distance of the goal.
Uploaded Image: final.jpg



CS4630 Cye Robot Final Project Brief Writeup (old)


Although the project is far from complete, I will write a summary of the project features to look forward to. The presentation given on Wednesday gave decent pictorial renderings of the template matching algorithms to be used in the project. Since I will have nothing to do after Wednesday of this week, I will devote the majority of my time to fully completing the project.

Objective: I wish to have a fully operational cye robot capable of taking commands from the user and capable of having at least a minimal behavior of its own. It should at least know its position and be able to go to or follow pre-defined known objects.

Research: The template matching algorithm is from a book by Rosenfeld and Kak that I found at the library. I found other research papers as well, but according to Special Digital Effects professor Essa, the best template matching algorithm is normalized cross-correlation. This algorithm is described in Rosenfeld & Kak's 1976 book "Digital Picture Processing". The unfisheye algorithm is a modified version of Nelson Max's single-pass version of the Catmull-Smith two-pass image resampling. A description can be found here: http://www.acm.org/jgt/papers/Max98/. I've also written a basic background subtraction that uses a threshold to determine whether a pixel is considered background or not. The background subtraction should help limit the computational intensity of the template matching algorithm.
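Normalized cross-correlation in its textbook form is easy to sketch: slide the template over the image and, at each position, score how well the mean-shifted patches line up (scores fall in [-1, 1]). The version below is illustrative Java on grayscale double arrays, not the actual vision code:

  // Sketch of normalized cross-correlation: slide the template over the
  // image and score each position; the best score marks the most likely
  // match location.
  public class TemplateMatch {
      public static int[] bestMatch(double[][] img, double[][] tpl) {
          int ih = img.length, iw = img[0].length;
          int th = tpl.length, tw = tpl[0].length;

          // precompute the template's mean once
          double tMean = 0;
          for (double[] row : tpl) for (double v : row) tMean += v;
          tMean /= (th * tw);

          double best = -2;                 // below any possible score
          int bestX = 0, bestY = 0;
          for (int y = 0; y + th <= ih; y++) {
              for (int x = 0; x + tw <= iw; x++) {
                  double iMean = 0;
                  for (int j = 0; j < th; j++)
                      for (int i = 0; i < tw; i++)
                          iMean += img[y + j][x + i];
                  iMean /= (th * tw);

                  double num = 0, dImg = 0, dTpl = 0;
                  for (int j = 0; j < th; j++) {
                      for (int i = 0; i < tw; i++) {
                          double a = img[y + j][x + i] - iMean;
                          double b = tpl[j][i] - tMean;
                          num += a * b; dImg += a * a; dTpl += b * b;
                      }
                  }
                  double score = num / (Math.sqrt(dImg * dTpl) + 1e-12);
                  if (score > best) { best = score; bestX = x; bestY = y; }
              }
          }
          return new int[]{bestX, bestY};
      }
  }

This brute-force scan is expensive, which is exactly why the background subtraction mentioned above is worth doing first: it shrinks the region the template has to be slid over.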

The process:
The robot's position and orientation will be determined using a combination of background subtraction and template matching. This program will be written in C and will run separately from the other modules.

Once the position and angle are known, we send the data through a TCP socket stream to a Java program that handles the logic and cye movement.
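The receiving end might look something like this; the one-"x y angle"-line-per-update text format and port number are assumptions, since the real format isn't pinned down in this writeup:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.Socket;

  // Hypothetical receiving end of the C-to-Java position stream. Assumes
  // the vision program writes one "x y angle" text line per update.
  public class PoseListener {
      public static void main(String[] args) throws Exception {
          try (Socket s = new Socket("localhost", 4631);
               BufferedReader in = new BufferedReader(
                   new InputStreamReader(s.getInputStream()))) {
              String line;
              while ((line = in.readLine()) != null) {
                  String[] f = line.trim().split("\\s+");
                  double x     = Double.parseDouble(f[0]);
                  double y     = Double.parseDouble(f[1]);
                  double theta = Double.parseDouble(f[2]);   // degrees, assumed
                  // hand the pose to the planner / cye movement logic here
                  System.out.printf("pose: (%.1f, %.1f) @ %.1f%n", x, y, theta);
              }
          }
      }
  }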

Other pre-defined known objects will be determined using a separate module that also communicates via a socket with the main Java cye-server.

Calibration of the camera will be done using a one-foot-by-one-foot red square lying on the floor. Once the camera has been calibrated, the cye should have no problem getting around with decent accuracy.
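The arithmetic behind that calibration is simple: the square's known one-foot width gives a pixels-per-foot scale. A sketch, with the step of actually finding the square's edges (which could reuse the blob detector above) left out:

  // Sketch of the calibration arithmetic: a 1 ft x 1 ft red square whose
  // edges have been located in the image gives a pixels-per-foot scale,
  // so pixel distances can be converted to floor distances.
  public class Calibration {
      final double pixelsPerFoot;

      // leftX/rightX: x pixel coordinates of the square's left and right edges
      Calibration(double leftX, double rightX) {
          pixelsPerFoot = Math.abs(rightX - leftX);   // square is 1 ft wide
      }

      double pixelsToFeet(double pixels) {
          return pixels / pixelsPerFoot;
      }
  }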

Future goals: Hopefully, the process will be completed sometime this weekend. The following week should be spent on modifying the cye for useful purposes in the Aware Home. We are not yet sure how to make use of multiple cameras contextually.

We hope to have some kind of path planning and obstacle detection, but we're not sure if we can reach that point. Since the TA working with us is a planning expert, he might have some insight on how to do this easily.

I would also like the cye to have at least some kind of minimal behavior. Behavior makes the robot interesting to watch. Since visitors to the Aware Home won't know the behavior model that we implement, it could look semi-sophisticated if we make it semi-responsive to user feedback. I was thinking of lightly tapping the cye when it is not moving as a way to deliver positive feedback.

Note to Chris Atkeson:
The project isn't near completion, so there isn't as much as I would like to show here. But since I'm a graduating senior, I'll have nothing to do after Wednesday, which means I can spend all of my time making Cye look good... I'm hoping the Cye will be at least one good lasting contribution I can make before I go into the "real world."

Also, since this is a Swiki, you can hit the edit button at the top of the page near the Squeak mouse and add some comments below if you want.