Data-Driven Visual Tracking for Retinal Microsurgery

Overview

Our method operates as follows. We first use a gradient-based tracker to provide an approximate estimate of the target's new location. We then exhaustively evaluate our real-time deformable detector to predict the presence of an instrument within a reduced region of the image, parameterized by the tracker's estimate from the previous step. Finally, we use spatial and score weighting of the detector responses to obtain an accurate instrument position and to update the tracker template.
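The sketch below illustrates one iteration of this loop in Python, assuming hypothetical `GradientTracker` and `DeformableDetector` components; the search-region size, weighting kernel, and scoring details are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

SEARCH_RADIUS = 40      # half-size of the reduced search region (pixels, assumed)
SPATIAL_SIGMA = 15.0    # bandwidth of the spatial weighting kernel (assumed)

def track_frame(frame, prev_position, tracker, detector):
    """One iteration of the tracking pipeline for a new video frame."""
    # 1. Gradient-based tracker gives a rough estimate of the new location.
    rough = tracker.estimate(frame, prev_position)            # (x, y)

    # 2. Exhaustively evaluate the deformable detector, but only inside a
    #    reduced region of the image centred on the tracker's estimate.
    x0 = max(int(rough[0]) - SEARCH_RADIUS, 0)
    y0 = max(int(rough[1]) - SEARCH_RADIUS, 0)
    region = frame[y0:y0 + 2 * SEARCH_RADIUS, x0:x0 + 2 * SEARCH_RADIUS]
    # Each detection: ((x, y) in full-image coordinates, detector score).
    detections = [((x + x0, y + y0), s) for (x, y), s in detector.detect(region)]

    if not detections:
        return rough  # fall back to the tracker's estimate

    # 3. Combine detections with spatial (distance-to-estimate) and score
    #    weighting to obtain the final instrument position.
    positions = np.array([p for p, _ in detections], dtype=float)
    scores = np.array([s for _, s in detections], dtype=float)
    dist2 = np.sum((positions - np.asarray(rough)) ** 2, axis=1)
    weights = scores * np.exp(-dist2 / (2.0 * SPATIAL_SIGMA ** 2))
    weights /= weights.sum()
    final = tuple(weights @ positions)

    # 4. Update the tracker template around the refined position.
    tracker.update_template(frame, final)
    return final
```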

[Figure: overview of the tracking framework]

Qualitative Results

Retinal Microsurgery
Typical results obtained with our framework on Retinal Microsurgery images. The white cross indicates the ground-truth location, while the red cross and associated bounding box indicate the location predicted (tracked) by our framework. Note how our tracker succeeds despite strong changes in the appearance of the tools (perspective, scale, deformation).


Laparoscopy
Typical results obtained with our framework on Laparoscopy images. The white cross indicates the ground-truth location, while the red cross and associated bounding box indicate the location predicted (tracked) by our framework.

COMING SOON!



Links

R. Sznitman, K. Ali, R. Richa, R. Taylor, G. Hager and P. Fua. Data-Driven Visual Tracking for Retinal Microsurgery.
In Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2012.