Multi-Sensor Surveillance

This work developed a novel surveillance system that incorporates multiple active-vision sensors controlled by a real-time dispatching algorithm. The proposed system targets visual servoing and similar applications that require accurate and reliable target surveillance, i.e., tracking and state estimation. The superiority of multiple active-vision surveillance systems over those with single or static sensors has been well established in the literature. Different approaches to the problem of sensor dispatching and control have also been proposed previously. However, these approaches have relied primarily on off-line planning, occasionally supplemented with on-line planning to compensate for unexpected variations in the target's trajectory.

The two-step method developed in this work, in contrast, uses a real-time dispatching algorithm, eliminating the need for any a priori knowledge of the target's trajectory. First, via a heuristic approach, each sensor's optimal viewing location is determined such that the uncertainty associated with the sensor's reading is minimized. Next, via a dispatching approach, the optimal subset of all "available" sensors is selected and assigned to the surveillance of the object at the desired demand instant. Data fusion then merges the information acquired from the multiple sensors.

The experimental surveillance system developed to implement this algorithm consists of a static overhead camera, which estimates the target's state via a Kalman filter, and four mobile cameras for the surveillance of the target moving on a plane. Target motion is produced by placing the object on an x-y table and pre-programming a path that is not known to the surveillance system. The object's predicted (future) pose is fed to the dispatching algorithm, which determines the optimal position and bearing of the four surveillance cameras. The selected cameras are moved into position and used to independently estimate the target's pose at the desired time instant. The target data obtained from the cameras, together with their own positions and bearings, are fed to a fusion algorithm, where the final assessment of the object's pose is obtained.
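The predict, dispatch, and fuse stages described above can be sketched in code. Note that the constant-velocity motion model, the distance-based measurement-variance model, and the inverse-variance fusion rule below are all illustrative assumptions for the sketch, not the specific formulations used in this work:

```python
import numpy as np

def kf_predict(x, P, dt, q=0.05):
    """Constant-velocity Kalman-filter prediction over state [px, py, vx, vy].

    The process-noise level q is an illustrative placeholder."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    return F @ x, F @ P @ F.T + q * np.eye(4)

def dispatch(pred_pos, cameras, k=2):
    """Select the k available cameras with the smallest expected
    measurement variance at the predicted target position.

    The variance model (growing with camera-to-target distance) is a
    toy stand-in for the heuristic viewing-location criterion."""
    scored = sorted(
        (0.01 * (1.0 + np.linalg.norm(pred_pos - pos)) ** 2, cam_id)
        for cam_id, pos in cameras.items()
    )
    return [cam_id for _, cam_id in scored[:k]]

def fuse(estimates):
    """Inverse-variance weighted fusion of independent pose estimates,
    each given as (pose_vector, scalar_variance)."""
    w = np.array([1.0 / var for _, var in estimates])
    poses = np.array([pose for pose, _ in estimates])
    return (w[:, None] * poses).sum(axis=0) / w.sum(), 1.0 / w.sum()

# Predict the target's pose half a second ahead ...
x_pred, P_pred = kf_predict(np.array([0.0, 0.0, 1.0, 0.0]), 0.1 * np.eye(4), dt=0.5)
# ... dispatch the two best-placed of three candidate cameras ...
cams = {"A": np.array([0.6, 0.0]), "B": np.array([5.0, 5.0]), "C": np.array([1.0, 0.0])}
chosen = dispatch(x_pred[:2], cams, k=2)
# ... and fuse their (simulated) independent pose measurements.
fused_pose, fused_var = fuse([(np.array([0.5, 0.1]), 0.01),
                              (np.array([0.52, 0.08]), 0.02)])
```

In this sketch the dispatcher favors cameras near the predicted target position; the actual system optimizes both position and bearing of each mobile camera.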

Experiments using the above-described system have shown that the use of dynamic sensors, along with a dispatching algorithm, can improve the performance of a surveillance system, primarily due to the following factors:

  1. a decrease in the uncertainty associated with the object’s estimated pose,
  2. an increase in the robustness of the system, due to its ability to cope with a wider range of a priori unknown object trajectories, and
  3. an increase in reliability through sensory fault tolerance.