The work titled "Reconstructing Spatiotemporal Trajectories from Sparse Data" by P. Partsinevelos, P. Agouris, and A. Stefanidis is licensed under a Creative Commons Attribution 4.0 International license.
Bibliographic Citation
P. Partsinevelos, P. Agouris, and A. Stefanidis, "Reconstructing Spatiotemporal Trajectories from Sparse Data," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 60, no. 1, pp. 3–16, Dec. 2005, doi: 10.1016/j.isprsjprs.2005.10.004.
Abstract
In motion imagery-based tracking applications, it is common to extract the locations of moving objects without knowing which object each location belongs to. Identifying individual spatiotemporal trajectories from such data sets is far from trivial when those trajectories intersect in space, time, or attributes. In this paper, we present a novel approach for reconstructing the entangled spatiotemporal trajectories of moving objects captured in motion imagery data sets. We have developed ACENT (Attribute-aided Classification of Entangled Trajectories), a framework that combines classification, clustering, and neural network processes to progressively reconstruct elongated trajectories, using as input the spatiotemporal coordinates of image patches and their corresponding attribute values. ACENT proceeds by first forming brief trajectory segments and then linking them and assigning additional points to them: an initial classification forms brief segments corresponding to distinct objects; these segments are then linked through clustering to form longer trajectories; and back-propagation neural network classification together with geometric/self-organizing map (SOM) analysis refines these trajectories by removing misclassified points and redistributing unassigned ones. ACENT thus integrates established classification and clustering tools into a novel approach that addresses the tracking challenges of busy environments. Furthermore, ACENT allows us to use spatiotemporal (ST) thresholds to cluster trajectories according to their spatial and temporal extent. We present our framework in detail, together with experimental results that support the application potential of our approach.
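The abstract describes the pipeline only at a high level. As a rough illustration of its first two stages (forming brief segments from per-frame detections, then linking segments into longer trajectories), the following Python sketch uses greedy nearest-neighbour matching under spatiotemporal and attribute thresholds. The function names, thresholds, and the greedy rule are assumptions made for illustration; they are not the paper's actual method, which relies on classification, back-propagation neural networks, and SOM analysis for these steps.

```python
"""Minimal sketch of an ACENT-style staged reconstruction (illustrative only)."""
import numpy as np

def form_segments(points, d_max=2.0, a_max=0.2):
    """Stage 1 (illustrative): link detections frame-to-frame into brief
    segments using spatial proximity and attribute similarity.
    `points` rows are (x, y, t, attr); t is an integer frame index."""
    frames = {}
    for p in points:
        frames.setdefault(int(p[2]), []).append(np.asarray(p, dtype=float))
    finished, open_segs = [], []
    for t in sorted(frames):
        cands = frames[t]
        still_open = []
        for seg in open_segs:
            last = seg[-1]
            best_i, best_d = -1, d_max  # nearest compatible detection so far
            for i, c in enumerate(cands):
                d = np.hypot(c[0] - last[0], c[1] - last[1])
                if d <= best_d and abs(c[3] - last[3]) <= a_max:
                    best_i, best_d = i, d
            if best_i >= 0:
                seg.append(cands.pop(best_i))
                still_open.append(seg)
            else:
                finished.append(seg)            # segment ends at a gap
        still_open += [[c] for c in cands]      # leftovers start new segments
        open_segs = still_open
    return finished + open_segs

def link_segments(segs, d_max=5.0, t_max=3, a_max=0.3):
    """Stage 2 (illustrative): greedily chain segments whose end/start
    points are close in space, time, and attribute value."""
    segs = sorted(segs, key=lambda s: s[0][2])  # order by start time
    used = [False] * len(segs)
    trajectories = []
    for i in range(len(segs)):
        if used[i]:
            continue
        traj, used[i] = list(segs[i]), True
        grown = True
        while grown:
            grown, tail = False, traj[-1]
            for j in range(len(segs)):
                if used[j]:
                    continue
                head = segs[j][0]
                dt = head[2] - tail[2]
                if (0 < dt <= t_max
                        and np.hypot(head[0] - tail[0], head[1] - tail[1]) <= d_max
                        and abs(head[3] - tail[3]) <= a_max):
                    traj += list(segs[j])
                    used[j], grown = True, True
                    break
        trajectories.append(traj)
    return trajectories

if __name__ == "__main__":
    # Two objects whose paths meet at t = 5; attribute values keep them apart.
    pts = [(t, 0.0, t, 0.1) for t in range(6)] + \
          [(t, 5.0 - t, t, 0.9) for t in range(6)]
    trajs = link_segments(form_segments(pts))
    print(len(trajs))  # expected: 2 reconstructed trajectories
```

Note how the attribute threshold is what disambiguates the two trajectories where they intersect in space and time; this mirrors, in a very simplified form, the role attributes play in ACENT's classification stages. The paper's third stage (neural network and SOM refinement of misclassified and unassigned points) is not sketched here.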