Applying New Deep Learning Method to Multi-Spectral Image Fusion

  • A. Ashrith, K. Anusri, B. Reshwik, T. Aruna Sri

Abstract

Convolutional neural networks (CNNs) play an important role in extracting image features, and in this work a new deep-learning-based fusion method is applied to infrared (IR) and visible (VIS) images. In our system, a Siamese CNN automatically constructs a weight map that represents the salience of each pixel in a pair of source images. Because the CNN encodes the images directly into a classification feature domain, the two key issues of image fusion, activity-level measurement and fusion-rule design, are handled jointly in a single step by the proposed procedure. To keep the fused result perceptible to the human visual system, the images are fused through a wavelet-based multi-scale decomposition. In addition, the qualitative performance of the proposed fusion approach is assessed by comparing pedestrian-detection results obtained with the YOLOv3 object detector on a public benchmark dataset against other approaches. The experimental results indicate that applying the ReLU activation function to the fusion network yields better results than other activation functions, and the approach achieves competitive outcomes in both quantitative evaluation and visual quality.
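The core idea the abstract describes, a per-pixel weight map that blends two source images, can be sketched as a simple weighted average. The following is a minimal, illustrative example only: the weight map would come from the Siamese CNN in the actual method, but here the weights and image data are hard-coded toy values, and the multi-scale wavelet step is omitted.

```python
# Minimal sketch of pixel-wise weighted IR/VIS fusion.
# NOTE: in the paper, the weight map is produced by a Siamese CNN;
# here it is hand-written purely for illustration.

def relu(x):
    """ReLU activation, the function the paper reports works best."""
    return max(0.0, x)

def fuse(ir, vis, weights):
    """Fuse two images with a per-pixel weight map:
    fused[i][j] = w * ir[i][j] + (1 - w) * vis[i][j]."""
    return [
        [relu(w * a + (1.0 - w) * b)
         for a, b, w in zip(ir_row, vis_row, w_row)]
        for ir_row, vis_row, w_row in zip(ir, vis, weights)
    ]

# Toy 2x2 images with intensities in [0, 1] (hypothetical data):
ir = [[0.9, 0.1], [0.5, 0.3]]
vis = [[0.2, 0.8], [0.4, 0.6]]
w = [[1.0, 0.0], [0.5, 0.5]]  # 1.0 keeps the IR pixel, 0.0 keeps VIS
fused = fuse(ir, vis, w)
```

A weight of 1.0 passes the IR pixel through unchanged, 0.0 passes the VIS pixel, and intermediate weights blend the two, which is the role the learned weight map plays in the full method.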

Published
2020-05-28
How to Cite
A. Ashrith, K. Anusri, B. Reshwik, T. Aruna Sri. (2020). Applying New Deep Learning Method to Multi-Spectral Image Fusion. International Journal of Advanced Science and Technology, 29(05), 9441-9446. Retrieved from http://sersc.org/journals/index.php/IJAST/article/view/19040