Institutional Repository
Technical University of Crete

Virtual video synthesis for personalized training

Markolefas Filippos, Moirogiorgou Konstantia, Giakos George C., Zervakis Michail

Simple Record


URI: http://purl.tuc.gr/dl/dias/3B1D8B2D-B61C-49B3-9523-97B9FC86CF05
Identifier: https://doi.org/10.1109/IST.2018.8577097
Identifier: https://ieeexplore.ieee.org/document/8577097
Language: en
Extent: 6 pages
Title: Virtual video synthesis for personalized training
Creator: Markolefas Filippos (Μαρκολεφας Φιλιππος)
Creator: Moirogiorgou Konstantia (Μοιρογιωργου Κωνσταντια)
Creator: Giakos George C.
Creator: Zervakis Michail (Ζερβακης Μιχαηλ)
Publisher: Institute of Electrical and Electronics Engineers
Abstract: Online personal training allows users to work out from the comfort of their own homes using workout videos designed by fitness instructors. Users of such applications can use the camera of their device (PC, laptop, smart TV, etc.) and work out with others in a group setting, enabling a plethora of intertwined benefits. To enhance training efficiency, it could be helpful to superimpose the trainee's silhouette on the instructor's video, making it easy to detect differences between the trainee's exercise and the trainer's movements. One way to proceed in this direction is to have a camera record the trainee during the exercise and present this video alongside the instructor's video on the device screen. In this work, we explore this direction and present traditional background estimation approaches combined with foreground extraction techniques, using videos recorded with static cameras. It is shown that none of the presented methods can efficiently handle all possible challenges, such as a slow-moving foreground object or the presence of the moving object during background initialization, problems that mainly appear in yoga exercises. As an alternative, we propose a series of techniques comprising an initial background reconstruction method followed by a selective updating scheme. In this way, the background image adaptively converges to the ground-truth data, enabled by merging information from detected moving regions (temporal processing) and color-based regions (spatial processing) of the video segment. Finally, we also apply the proposed method to space surveillance applications, using surveillance cameras, in order to evaluate the generality and efficiency of the proposed approach.
Type: Conference Full Paper
License: http://creativecommons.org/licenses/by/4.0/
Date: 2019-05-21
Date of Publication: 2018
Subject: Fusion of temporal and spatial information
Subject: Image background reconstruction
Subject: Motion tracking
Subject: Silhouette extraction
Subject: Video processing
Subject: Image segmentation
Subject: Object recognition
Bibliographic Citation: F. Markolefas, K. Moirogiorgou, G. Giakos and M. Zervakis, "Virtual video synthesis for personalized training," in IEEE International Conference on Imaging Systems and Techniques, 2018. doi: 10.1109/IST.2018.8577097
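The selective background-updating scheme outlined in the abstract can be illustrated with a minimal sketch: only pixels classified as background are blended into the model, so a slow-moving foreground (e.g. a yoga pose held over many frames) is not absorbed into the background. The function name, threshold, and learning rate below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def selective_update(bg, frame, thresh=25.0, alpha=0.05):
    """One step of background maintenance with selective updating.

    Pixels close to the current background model are treated as
    background and blended into it; pixels flagged as foreground
    leave the model untouched.
    """
    diff = np.abs(frame.astype(np.float64) - bg)
    fg_mask = diff > thresh                 # foreground (moving) pixels
    bg_new = bg.copy()
    # Blend only background pixels into the model.
    bg_new[~fg_mask] = (1 - alpha) * bg[~fg_mask] + alpha * frame[~fg_mask]
    return bg_new, fg_mask

# Tiny synthetic example: a static grey scene with a bright block (the "silhouette").
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:5, 2:5] = 200.0                     # hypothetical moving region
bg2, mask = selective_update(bg, frame)
print(mask.sum())                           # 9 foreground pixels detected
```

In the paper's full pipeline this per-pixel update would be combined with the temporal (motion) and spatial (color-region) cues described above; the sketch shows only the selective-update idea in isolation.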
