Tracking and Measuring Explosion Points with High-Resolution Reconstruction under Binocular Occlusion
DOI: https://doi.org/10.62517/jbdc.202401102
Author(s)
Shipeng Cheng, Meili Zhou, Zongwen Bai*
Affiliation(s)
School of Physics and Electronic Information, Yan'an University, Yan'an, Shaanxi, China. *Corresponding Author.
Abstract
This paper leverages data and projects from Group A to advance the tracking and measurement of bomb impact points using binocular vision. Binocular drones were used to gather impact measurement data across mountain peaks of differing elevations. However, the collected video exhibited overlapping and occluded impact points. To address these equipment-related obstacles, including occlusion and camera overlap, remote-sensing image reconstruction networks were used to reconstruct impact images with partial overlap. The processed images were annotated with the LabelImg tool, combined with OpenCV-based preprocessing, for precise labeling of impact images. A multi-object tracking network was then developed and trained to track the bombs effectively. The central aim of this research is to regress the world coordinates of initial impact points using point localization algorithms and image regression networks dedicated to impact measurement. The paper also examines the inaccuracies found in target point measurements and performs an error analysis based on the available target information. To enhance the operational capability of the airborne observation platform, the electro-optical pod was relocated to a predetermined position and the gathered data were transmitted remotely to ground-based equipment. The ground equipment provides remote control, parameter setting, command and data reception, and intersection measurement calculations based on the image data. The electro-optical pod performs high-speed measurement of target impact positions in visible-light, infrared, and laser modes, offers local data storage, and self-stabilizes its attitude using gyroscopes.
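As an illustration of the intersection measurement step described above, the following is a minimal Python sketch of binocular triangulation with OpenCV. The projection matrices and matched pixel coordinates are hypothetical placeholders standing in for the calibrated pod cameras and the tracker's output; this is not the paper's implementation, only the standard linear triangulation such a calculation relies on.

import numpy as np
import cv2

# Hypothetical 3x4 projection matrices P = K[R|t] for the left and right
# cameras, as produced by a prior stereo calibration (placeholder values).
P_left = np.array([[1000.0,    0.0, 640.0,    0.0],
                   [   0.0, 1000.0, 360.0,    0.0],
                   [   0.0,    0.0,   1.0,    0.0]])
P_right = np.array([[1000.0,    0.0, 640.0, -500.0],  # baseline folded into K[R|t]
                    [   0.0, 1000.0, 360.0,    0.0],
                    [   0.0,    0.0,   1.0,    0.0]])

# Matched pixel coordinates of one impact point in both views, shape (2, N),
# e.g. centroids reported by the multi-object tracking network.
pt_left = np.array([[700.0], [400.0]])
pt_right = np.array([[652.0], [400.0]])

# Linear triangulation; returns homogeneous coordinates, shape (4, N).
X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
X = (X_h[:3] / X_h[3]).ravel()  # dehomogenize to (X, Y, Z)
print("Triangulated impact point:", X)

In practice, the regressed world coordinate would then be compared against surveyed target information, which is where the error analysis mentioned in the abstract enters.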
Keywords
Super-resolution Reconstruction; Explosion Point Measurement; Binocular Vision; Linear Regression; Error Analysis