Surveillance Architecture for Human Activity Recognition using Unmanned Aerial Vehicle

Milena F. Pinto, Aurelio Gouvêa de Melo, Guilherme Marins, Iago Z. Biundini, André L. M. Marcato

Abstract


There is intensive growth in research regarding surveillance and threat detection. Surveillance tasks often involve several actors with multiple interactions, so modeling a complex activity becomes challenging. This work proposes an architecture comprising low, middle, and high levels. The low level recognizes characteristics, positions of objects, and times of occurrence using a camera and Unmanned Aerial Vehicle (UAV) sensors. The middle level is responsible for structuring the information from the low level using Deterministic Finite Automata (DFA). An expert system attached to the high-level module performs inference over the organized information, providing the system with simple reasoning capabilities that assist the operator's decisions. The architecture is embedded in a UAV to reduce the number of cameras required and to reach difficult areas. The experiments showed that the proposed system updated the grammatical structure effectively, given a sequence of information computed by the vision modules.
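The middle level described above can be pictured as a DFA that consumes the event stream produced by the vision modules and flags structured activities for the high-level expert system. The sketch below is a minimal illustration of that idea; the event alphabet, states, and transitions are hypothetical assumptions, not the paper's actual grammar.

```python
# Hypothetical surveillance DFA: states, events, and transitions are
# illustrative assumptions, not the grammar used in the paper.

# Transition table: (state, event) -> next state
TRANSITIONS = {
    ("idle", "person_enters"): "present",
    ("present", "person_near_asset"): "suspicious",
    ("present", "person_leaves"): "idle",
    ("suspicious", "person_leaves"): "idle",
}

# States reported to the high-level expert system for inference
ACCEPTING = {"suspicious"}


def run_dfa(events, start="idle"):
    """Feed a sequence of detected events through the DFA and report the
    final state plus whether an accepting (alert) state was reached."""
    state = start
    alert = False
    for event in events:
        # Undefined (state, event) pairs keep the current state.
        state = TRANSITIONS.get((state, event), state)
        alert = alert or state in ACCEPTING
    return state, alert
```

For example, the sequence `["person_enters", "person_near_asset"]` drives the automaton from `idle` to `suspicious`, which would be handed to the expert system, while `["person_enters", "person_leaves"]` returns to `idle` without raising an alert.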


Keywords


Intelligent Systems, UAV, Robotic Systems, Surveillance, Semi-Autonomous Mission.






DOI: https://doi.org/10.34115/basrv4n3-027
