[Objective] With the rapid development of remote sensing technology, remote sensing images have become a crucial data source for surface observation, environmental monitoring, and natural disaster prediction. However, the acquisition of remote sensing images is often affected by atmospheric conditions, particularly haze and cloud cover, which degrade image quality and complicate subsequent analysis and processing. Haze significantly reduces the contrast, color fidelity, and clarity of remote sensing images, thereby impairing the extraction and identification of ground features. Consequently, effective haze removal has become a focal point for both academia and industry. It is especially critical in agriculture, environmental protection, and urban planning, where high-quality remote sensing data are essential for monitoring crop growth, assessing soil quality, and predicting natural disasters. In recent years, the rise of deep learning has opened new possibilities for dehazing remote sensing images; the introduction of attention mechanisms, in particular, allows models to better capture and exploit important image features, significantly improving dehazing performance. Despite these advances, traditional channel attention mechanisms typically rely on global average pooling to aggregate feature information. While this simplifies computation, it handles images with strong local variations poorly and is sensitive to outliers. Moreover, remote sensing images often cover vast areas with diverse terrain, complex landforms, and pronounced spectral variation, which makes haze patterns more complex and uneven. Developing more efficient, adaptive dehazing methods that account for both local and global features in remote sensing images is therefore a key direction for the field.

[Method] To address these issues, this paper proposes a Hybrid Attention-Based Generative Adversarial Network (HAB-GAN), which integrates an Efficient Channel Attention (ECA) module and a Spatial Attention Block (SAB). By fusing feature extraction across the channel and spatial dimensions, the model strengthens its ability to identify and recover hazy regions in remote sensing images. In HAB-GAN, the ECA module captures local cross-channel interactions, remedying the insensitivity of traditional global average pooling to local detail. The module applies global average pooling without dimensionality reduction and adapts to the characteristics of each channel without introducing extra parameters, thereby enhancing inter-channel dependencies. ECA then applies a one-dimensional convolution whose kernel size is determined adaptively, setting the range of cross-channel interaction; this design avoids the over-smoothing of global features common with traditional pooling layers, letting the model extract local details more precisely while keeping computational complexity low. A minimal sketch of this mechanism follows.
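The snippet below is an illustrative PyTorch sketch of the ECA idea described above, not the paper's exact implementation. The kernel-size rule (with hyperparameters `gamma` and `b`) follows the heuristic from the original ECA work and is an assumption here.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention (sketch): global average pooling without
    dimensionality reduction, then a 1-D convolution across channels."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count (assumed heuristic).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 == 1 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.avg_pool(x)                               # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))     # channels as a 1-D sequence
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y                                       # channel-wise reweighting
```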
The SAB module introduces a weighting mechanism along the spatial dimension: it constructs a spatial attention map that strengthens the model's ability to localize hazy regions. The module extracts feature maps through convolution operations and applies attention weighting in both the horizontal and vertical directions, highlighting regions with severe haze; this lets the model better capture spatial information in the image and further improves dehazing performance. The generator of HAB-GAN combines a residual network backbone with the hybrid attention modules. It first extracts initial features from the input image through convolutional layers and then passes them through several residual blocks. The residual blocks mitigate the vanishing-gradient problem in deep networks and, through skip connections that carry input features directly to deeper layers, preserve feature consistency and continuity. Each residual block incorporates the ECA and SAB modules, enabling precise feature learning through weighting in both the channel and spatial dimensions. After feature extraction, the generator reconstructs the dehazed image through further convolution operations. The discriminator adopts a standard convolutional neural network architecture that focuses on local detail in the generated images; it consists of multiple convolutional layers, batch normalization layers, and Leaky ReLU activations. By extracting local features layer by layer and downsampling, it progressively reduces the spatial resolution of the image, evaluating realism at both the global and local level. The generator and discriminator are jointly optimized through adversarial training: the generator aims to produce increasingly realistic dehazed images, while the discriminator continually improves its ability to distinguish real from generated images, thereby strengthening the generator's learning and the quality of its outputs. Sketches of the SAB module, the generator's hybrid residual block, the discriminator, and the training loop follow.
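The abstract describes SAB only at a high level, so the following PyTorch sketch is one plausible reading: directional pooling along the horizontal and vertical axes, in the spirit of coordinate attention. The class layout, the reduction ratio, and the exact pooling scheme are assumptions.

```python
import torch
import torch.nn as nn

class SAB(nn.Module):
    """Spatial Attention Block (sketch): per-row and per-column descriptors
    yield horizontal and vertical attention weights over the feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.reduce = nn.Sequential(nn.Conv2d(channels, mid, 1),
                                    nn.ReLU(inplace=True))
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.size()
        xh = self.pool_h(x)                              # vertical profile
        xw = self.pool_w(x).permute(0, 1, 3, 2)          # horizontal profile
        y = self.reduce(torch.cat([xh, xw], dim=2))      # shared transform
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                       # (B, C, H, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * ah * aw                               # spatial reweighting
```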
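Reusing the ECA and SAB sketches above, the generator's residual block and overall skeleton might look as follows. The number of blocks, the channel width, and the output activation are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn

class HybridResBlock(nn.Module):
    """Residual block with the hybrid attention pair; the identity skip
    connection carries input features directly to deeper layers."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.eca = ECA(channels)   # channel-wise weighting (sketch above)
        self.sab = SAB(channels)   # spatial weighting (sketch above)

    def forward(self, x):
        return x + self.sab(self.eca(self.body(x)))

class Generator(nn.Module):
    """Generator sketch: shallow feature extraction, stacked hybrid residual
    blocks, then convolutional reconstruction of the dehazed image."""
    def __init__(self, channels: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[HybridResBlock(channels)
                                      for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, hazy):
        return torch.sigmoid(self.tail(self.blocks(self.head(hazy))))
```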
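The discriminator description (stacked convolution, batch normalization, and Leaky ReLU layers with progressive downsampling) is compatible with a PatchGAN-style design; the depth and channel widths below are assumptions.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """Discriminator sketch: conv / batch-norm / LeakyReLU stacks that
    downsample the image and score realism on local patches."""
    def __init__(self, base: int = 64):
        super().__init__()
        layers, in_ch = [], 3
        for i, out_ch in enumerate([base, base * 2, base * 4, base * 8]):
            layers.append(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1))
            if i > 0:   # no batch norm on the first layer, a common convention
                layers.append(nn.BatchNorm2d(out_ch))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            in_ch = out_ch
        layers.append(nn.Conv2d(in_ch, 1, 4, padding=1))  # patch realism logits
        self.net = nn.Sequential(*layers)

    def forward(self, img):
        return self.net(img)
```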
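Joint optimization can then follow the usual adversarial recipe. The loss mix (a vanilla adversarial loss plus an L1 reconstruction term), its weighting, and the optimizer settings are common GAN defaults, not necessarily those of HAB-GAN; `loader` stands in for a dataloader of paired hazy / ground-truth images.

```python
import torch
import torch.nn as nn

G, D = Generator(), Discriminator()        # sketches defined above
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for hazy, clear in loader:                 # placeholder paired dataloader
    fake = G(hazy)

    # Discriminator step: push real images toward 1, generated toward 0.
    opt_d.zero_grad()
    d_real, d_fake = D(clear), D(fake.detach())
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator while staying close to the target.
    opt_g.zero_grad()
    d_fake = D(fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, clear)
    loss_g.backward()
    opt_g.step()
```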
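The experiments below report Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). For reference, both metrics can be computed with scikit-image; the helper below assumes float images scaled to [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(dehazed: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Both inputs: (H, W, 3) float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=1.0)
    ssim = structural_similarity(reference, dehazed,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim
```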
[Results and Discussions] To validate the effectiveness of HAB-GAN, extensive experiments were conducted on the RESISC45 dataset. The results demonstrate that HAB-GAN outperforms existing dehazing models on the key evaluation metrics PSNR and SSIM. Specifically, compared with SpA GAN, HAB-GAN improves PSNR by 2.64 dB and SSIM by 0.012 2; compared with HyA-GAN, it improves PSNR by 1.14 dB and SSIM by 0.001 9. To assess generalization, further experiments were conducted on the RICE2 dataset to verify performance on cloud removal. HAB-GAN also performs well on this task, with PSNR improving by 3.59 dB and SSIM by 0.040 2; compared with HyA-GAN, PSNR and SSIM increase by 1.85 dB and 0.012 4, respectively. To further explore the contribution of each module, ablation experiments were designed that progressively remove the ECA module, the SAB module, and the entire hybrid attention module. Removing the ECA module reduces PSNR by 2.64 dB and SSIM by 0.012 2; removing the SAB module reduces PSNR by 2.96 dB and SSIM by 0.008 7; removing the entire hybrid attention module reduces PSNR by 3.87 dB and SSIM by 0.033 4.

[Conclusions] These results demonstrate that the proposed HAB-GAN model not only performs excellently in both dehazing and cloud removal tasks but also, through the synergy of the Efficient Channel Attention (ECA) module and the Spatial Attention Block (SAB), significantly enhances the clarity and detail recovery of dehazed images. Its strong performance across different remote sensing datasets further validates its effectiveness and generalization ability, indicating broad application potential. Particularly in fields such as agriculture, environmental monitoring, and disaster prediction, where high-quality remote sensing data are crucial, HAB-GAN is poised to become a valuable tool for improving data reliability and supporting more accurate decision-making and analysis.