Research on Radar Echo Extrapolation Method Based on Sparse Attention
Abstract:
The deep learning-based radar echo extrapolation method is widely applied to the challenging task of short-term precipitation forecasting. However, existing methods still suffer from limited prediction accuracy, and training tends to be slow on high-resolution, long-sequence data. To address these problems, this paper proposes a deep learning model based on sparse fusion attention, PFA-TransUNet (ProbSparse Fusion Attention TransUNet). The model is an encoder-decoder architecture in which a multi-layer Transformer is introduced along the encoder path. It decomposes the traditional multi-head self-attention mechanism into separate computations along the spatial and temporal dimensions, allowing spatiotemporal information to be fully integrated. In addition, a sparse attention method is incorporated to reduce the computational complexity of self-attention, significantly shortening training time. Experimental results on the Hebei Province radar dataset show that PFA-TransUNet outperforms other advanced classical models on a range of evaluation metrics, including extrapolation accuracy, Mean Squared Error (MSE), Structural Similarity Index (SSIM), Critical Success Index (CSI) at the 20, 30, and 40 dBZ thresholds, and training speed, demonstrating excellent overall performance.

In recent years, radar echo extrapolation has become an increasingly important approach in precipitation forecasting, especially for nowcasting (short-term forecasting) tasks, where predicting precipitation with high accuracy and efficiency is critical. However, owing to the complex spatiotemporal nature of radar echoes, previous methods have struggled to capture both spatial and temporal dependencies efficiently, leading to suboptimal forecasts. Furthermore, the computational cost of high-resolution, long time-series data further hampers the efficiency of current deep learning models.
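The abstract does not give implementation details for the spatiotemporal decomposition. As an illustrative sketch only (shapes, function names, and the single-head scaled dot-product form are assumptions, not the paper's code), the idea of attending separately along the temporal axis and then the spatial axis, instead of jointly over all time-space positions, can be written as:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Single-head scaled dot-product attention over the last two axes.
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores) @ v

def factorized_st_attention(x):
    """x: (time, tokens, dim) sequence of radar-echo feature maps
    flattened to spatial tokens. Attention is factorized: each spatial
    token first attends across time steps, then each time step attends
    across spatial tokens. Joint attention would cost O((T*N)^2);
    factorization costs O(N*T^2 + T*N^2)."""
    t, n, d = x.shape
    # Temporal attention: one length-T sequence per spatial token.
    xt = np.swapaxes(x, 0, 1)          # (n, t, d)
    xt = attention(xt, xt, xt)
    x = np.swapaxes(xt, 0, 1)          # back to (t, n, d)
    # Spatial attention: one length-N sequence per time step.
    return attention(x, x, x)

# Toy input: 6 time steps, 16 spatial tokens, 8 channels.
x = np.random.default_rng(0).normal(size=(6, 16, 8))
y = factorized_st_attention(x)
print(y.shape)  # (6, 16, 8)
```

The output keeps the input's shape, so a block like this can be stacked inside an encoder path; the actual PFA-TransUNet additionally uses multiple heads and a sparse (ProbSparse) variant of the attention step.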
PFA-TransUNet addresses these limitations by incorporating a sparse attention mechanism, which reduces the computational load without sacrificing model performance. Traditional self-attention in Transformer models is computationally expensive because the attention calculation scales quadratically with sequence length, especially on large datasets. By leveraging sparse attention, the proposed model focuses on the most relevant parts of the input, improving computational efficiency and speeding up training.

Another key feature of PFA-TransUNet is its ability to model spatiotemporal dependencies effectively. By decomposing multi-head self-attention along the spatial and temporal dimensions, the model captures the intricate relationships between space and time, leading to more accurate extrapolation of radar echoes. This is crucial in precipitation forecasting, since both the spatial distribution and the temporal evolution of echoes strongly influence prediction accuracy.

The experimental results on the Hebei radar dataset indicate that PFA-TransUNet outperforms traditional models. It achieves substantially better forecast accuracy, with lower MSE values and higher SSIM scores, indicating better preservation of radar echo structure. The model also excels in CSI at the different dBZ thresholds, demonstrating robustness in detecting precipitation events under various conditions. Most importantly, its training speed is significantly improved by the sparse attention mechanism, making it suitable for real-time forecasting applications.

In conclusion, PFA-TransUNet offers a promising solution for radar echo extrapolation, especially in the context of short-term precipitation forecasting. Its combination of sparse fusion attention and spatiotemporal modelling makes it a powerful tool for improving the accuracy and efficiency of radar-based forecasting systems.
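The CSI scores cited throughout measure event detection at fixed reflectivity thresholds: hits / (hits + misses + false alarms), where a "hit" is a pixel forecast and observed above the threshold. A minimal sketch of the computation (the array names and toy values below are illustrative, not data from the paper):

```python
import numpy as np

def csi(pred, truth, thr):
    """Critical Success Index at a reflectivity threshold (dBZ):
    hits / (hits + misses + false alarms)."""
    p = pred >= thr          # predicted event mask
    t = truth >= thr         # observed event mask
    hits = np.logical_and(p, t).sum()
    misses = np.logical_and(~p, t).sum()
    false_alarms = np.logical_and(p, ~t).sum()
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Toy 2x2 reflectivity fields (dBZ).
pred = np.array([[25., 35.], [10., 45.]])
truth = np.array([[22., 18.], [12., 50.]])
# At 20 dBZ: 2 hits, 1 false alarm, 0 misses -> CSI = 2/3.
print(csi(pred, truth, 20.0))  # 0.6666666666666666
```

Evaluating at several thresholds (20, 30, 40 dBZ, as in the paper) probes performance on light through heavy precipitation, since higher reflectivity corresponds to more intense echoes.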