
Socialistic 3D tracking of humans from a mobile robot for a ‘human following robot’ behaviour

Published online by Cambridge University Press: 06 January 2023

Vaibhav Malviya*
Affiliation:
Department of Computer Science & Engineering, National Institute of Technology, Mizoram, India
Rahul Kala
Affiliation:
Centre of Intelligent Robotics, Department of Information Technology, Indian Institute of Information Technology, Allahabad, Prayagraj, India
*Corresponding author. E-mail: vaibsidea@gmail.com

Abstract

Robotic guides take visitors on a tour of a facility. Such robots must always know the position of the visitor for decision-making. Current tracking algorithms largely assume that the person will nearly always be visible. In the robotic guide application, the person may be out of view for prolonged periods, especially when the robot is circumventing a corner or making a sharp turn; in such cases, the person cannot quickly come back into the limited field of view of the rear camera. We propose a new algorithm that can track people for prolonged times under such conditions. The algorithm benefits from an application-level heuristic that the person will nearly always be following the robot, which can be used to predict the person's motion. The proposed work uses a Particle Filter with a 'follow-the-robot' motion model for tracking. The tracking is performed in 3D using a monocular camera. Unlike approaches in the literature, the proposed work observes from a moving base, which is especially challenging since a rotation of the robot causes a large, sudden change in the position of the human in the image plane that the approaches in the literature would filter out. Tracking in 3D resolves such errors. The proposed approach is tested in three different indoor scenarios. The results show that the approach is significantly better than the baselines, including tracking in the image and projecting into 3D, tracking using a randomized (non-social) motion model, tracking using a Kalman Filter, and using an LSTM for trajectory prediction.
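To make the idea concrete, the following is a minimal Python sketch of a particle filter with a 'follow-the-robot' motion model in the spirit of the abstract. It is not the paper's implementation: the state is simplified to a 2D ground-plane position, and all names and parameters (FOLLOW_DIST, MOTION_NOISE, OBS_NOISE, the assumed following speed) are illustrative assumptions. The key point is that when the observation is missing (the person is outside the camera's field of view), the social motion model alone propagates the estimate toward a point behind the robot.

# Minimal sketch (not the authors' implementation) of a particle filter
# with a "follow-the-robot" motion model: particles drift toward a point
# a fixed distance behind the robot. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 500              # number of particles
FOLLOW_DIST = 1.2    # assumed preferred following distance (m)
MOTION_NOISE = 0.05  # process noise standard deviation (m)
OBS_NOISE = 0.2      # observation noise standard deviation (m)

# Particles: hypothesised ground-plane positions (x, y) of the person.
particles = rng.normal(0.0, 0.5, size=(N, 2))
weights = np.ones(N) / N

def predict(particles, robot_pos, robot_heading, dt=0.1, speed=0.6):
    """Follow-the-robot motion model: each particle steps toward a goal
    FOLLOW_DIST metres behind the robot, plus Gaussian noise."""
    goal = robot_pos - FOLLOW_DIST * np.array(
        [np.cos(robot_heading), np.sin(robot_heading)])
    direction = goal - particles
    dist = np.linalg.norm(direction, axis=1, keepdims=True) + 1e-9
    step = np.minimum(speed * dt, dist) * direction / dist
    return particles + step + rng.normal(0.0, MOTION_NOISE, particles.shape)

def update(particles, weights, observation):
    """Weight particles by the likelihood of the observation. When the
    person is out of view (observation is None), the weights are left
    unchanged so the motion model alone carries the estimate."""
    if observation is None:
        return weights
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / OBS_NOISE ** 2)
    weights += 1e-300  # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Resample in proportion to weight to combat particle degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.ones(len(particles)) / len(particles)

# One filter step: predict with the social motion model, update if the
# person is visible, resample, and report the weighted mean estimate.
robot_pos, robot_heading = np.array([2.0, 0.0]), np.pi / 2
particles = predict(particles, robot_pos, robot_heading)
weights = update(particles, weights, observation=np.array([1.9, -1.0]))
particles, weights = resample(particles, weights)
estimate = np.average(particles, axis=0, weights=weights)

Because the update step leaves the weights untouched during occlusion, the estimate keeps moving with the robot while its uncertainty grows with the process noise, which matches the intent of tracking through prolonged invisibility.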

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Supplementary material

Malviya and Kala supplementary material (Video, 33.9 MB)