Abstract:
To address the loss of detail and reduced contrast in foggy images caused by cross-layer feature mismatch between encoders and decoders in existing deep learning algorithms, an image defogging algorithm that integrates multi-scale residuals with attention mechanisms was proposed. Firstly, a multi-scale residual-aware downsampling module was designed at the encoder side, which enhanced detailed feature extraction through multi-branch parallel convolutions combined with residual connections and an efficient channel attention mechanism. Secondly, an adaptive fine-grained channel attention mechanism was introduced to dynamically adjust feature weights between global and local information, thereby suppressing contrast degradation. Experimental results on the RESIDE-6K, SOTS, and HSTS datasets demonstrate that the proposed algorithm outperforms six mainstream comparative algorithms in both subjective visual quality and objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), while exhibiting strong generalization capability. Specifically, PSNR and SSIM values of 28.51 dB and 0.9651 are achieved on the RESIDE-6K test set, 27.82 dB and 0.9655 on the SOTS test set, and 29.30 dB and 0.9601 on the HSTS test set. By effectively integrating multi-scale residuals with attention mechanisms, the algorithm jointly optimizes detail preservation and contrast recovery, providing an effective technical solution for improving the quality of hazy images.
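To make the two components described above concrete, the following is a minimal PyTorch-style sketch, assuming illustrative choices throughout: parallel 3x3/5x5/7x7 branches, an ECA-style 1-D channel convolution, and a learned scalar that mixes global and local channel descriptors. The class names (ECA, MultiScaleResidualDown, AdaptiveFineGrainedCA) and all layer sizes are hypothetical and are not the paper's exact configuration.

```python
# Sketch of (1) a multi-branch residual downsampling block with efficient channel
# attention and (2) a channel-attention gate that blends global and local statistics.
# All structural details are assumptions for illustration, not the published design.
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient channel attention: per-channel weights from a 1-D conv over pooled features."""
    def __init__(self, channels, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        y = self.pool(x)                                  # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))      # 1-D conv across the channel dimension
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y


class MultiScaleResidualDown(nn.Module):
    """Parallel multi-kernel branches, ECA reweighting, residual shortcut, then stride-2 downsampling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, in_ch, k, padding=k // 2) for k in (3, 5, 7)
        )
        self.fuse = nn.Conv2d(3 * in_ch, in_ch, 1)
        self.eca = ECA(in_ch)
        self.down = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        fused = self.eca(self.fuse(feats)) + x            # residual connection preserves detail
        return self.down(fused)


class AdaptiveFineGrainedCA(nn.Module):
    """Blends a global (pooled) and a local (pointwise) channel descriptor with a learned weight."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.global_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.local_fc = nn.Conv2d(channels, channels, 1)
        self.alpha = nn.Parameter(torch.tensor(0.5))      # learned global/local trade-off

    def forward(self, x):
        g = self.global_fc(x)                             # (B, C, 1, 1) global descriptor
        l = self.local_fc(x)                              # (B, C, H, W) local descriptor
        w = torch.sigmoid(self.alpha * g + (1 - self.alpha) * l)
        return x * w


if __name__ == "__main__":
    x = torch.randn(1, 32, 128, 128)
    y = MultiScaleResidualDown(32, 64)(x)
    print(y.shape)                      # torch.Size([1, 64, 64, 64])
    print(AdaptiveFineGrainedCA(64)(y).shape)  # torch.Size([1, 64, 64, 64])
```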