├── requirements_gpu.txt
├── data_mean.npy
├── data_std.npy
├── inference.py
└── README.md
/requirements_gpu.txt:
--------------------------------------------------------------------------------
1 | numpy
2 | onnx==1.12.0
3 | onnxruntime-gpu==1.14.0
--------------------------------------------------------------------------------
/data_mean.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OpenEarthLab/FengWu/HEAD/data_mean.npy
--------------------------------------------------------------------------------
/data_std.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OpenEarthLab/FengWu/HEAD/data_std.npy
--------------------------------------------------------------------------------
/inference.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 | import onnx
4 | import onnxruntime as ort
5 | 
6 | 
7 | # The directories for your input and output data
8 | input_data_dir = './input_data'
9 | output_data_dir = './output_data'
10 | model_6 = onnx.load('fengwu.onnx')  # point this at the downloaded model (fengwu_v1.onnx or fengwu_v2.onnx)
11 | 
12 | # Set the behavior of onnxruntime
13 | options = ort.SessionOptions()
14 | options.enable_cpu_mem_arena = False
15 | options.enable_mem_pattern = False
16 | options.enable_mem_reuse = False
17 | # Increase the number for faster inference at the cost of more memory
18 | options.intra_op_num_threads = 1
19 | 
20 | # Set the behavior of the CUDA provider
21 | cuda_provider_options = {'arena_extend_strategy': 'kSameAsRequested'}
22 | 
23 | # Initialize the onnxruntime session for the FengWu model
24 | ort_session_6 = ort.InferenceSession('fengwu.onnx', sess_options=options, providers=[('CUDAExecutionProvider', cuda_provider_options)])
25 | 
26 | 
27 | data_mean = np.load("data_mean.npy")[:, np.newaxis, np.newaxis]
28 | data_std = np.load("data_std.npy")[:, np.newaxis, np.newaxis]
29 | 
30 | input1 = 
np.load(os.path.join(input_data_dir, 'input1.npy')).astype(np.float32)
31 | input2 = np.load(os.path.join(input_data_dir, 'input2.npy')).astype(np.float32)
32 | # input1 = np.random.rand(69, 721, 1440)
33 | # input2 = np.random.rand(69, 721, 1440)
34 | 
35 | input1_after_norm = (input1 - data_mean) / data_std
36 | input2_after_norm = (input2 - data_mean) / data_std
37 | input = np.concatenate((input1_after_norm, input2_after_norm), axis=0)[np.newaxis, :, :, :]
38 | input = input.astype(np.float32)
39 | 
40 | for i in range(56):  # 56 six-hour steps = a 14-day forecast
41 |     output = ort_session_6.run(None, {'input': input})[0]
42 |     input = np.concatenate((input[:, 69:], output[:, :69]), axis=1)
43 |     output = (output[0, :69] * data_std) + data_mean
44 |     print(output.shape)
45 |     # np.save(os.path.join(output_data_dir, f"output_{i}.npy"), output)  # save the output
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # FengWu: Pushing Skillful Global Weather Forecasts beyond 10 Days Lead
2 | 
3 | This repository provides the inference code and pre-trained model of FengWu, a deep learning-based weather forecasting model that extends skillful global weather forecasts beyond a 10-day lead time. The original version of FengWu has 37 vertical levels. To make real-time evaluation with operational analysis data easier, the pre-trained model released here accepts 13 vertical levels.
4 | If you are interested in the technical details, please refer to the arXiv version: https://arxiv.org/abs/2304.02948.
5 | 
6 | If you have any questions, feel free to contact Dr. Lei Bai.
7 | 
8 | ## Requirements
9 | 
10 | ```
11 | pip install -r requirements_gpu.txt
12 | ```
13 | 
14 | ## Files
15 | 
16 | ```plain
17 | ├── root
18 | │   ├── input_data
19 | │   │   ├── input1.npy
20 | │   │   ├── input2.npy
21 | │   ├── output_data
22 | │   ├── fengwu_v1.onnx
23 | │   ├── fengwu_v2.onnx
24 | │   ├── inference.py
25 | │   ├── data_mean.npy
26 | │   ├── data_std.npy
27 | ```
28 | 
29 | ## Downloading trained models
30 | 
31 | FengWu without transfer learning (fengwu_v1.onnx): [OneDrive](https://pjlab-my.sharepoint.cn/:u:/g/personal/chenkang_pjlab_org_cn/EVA6V_Qkp6JHgXwAKxXIzDsBPIddo5RgDtGCBQ-sQbMmwg)
32 | 
33 | 
34 | FengWu with transfer learning (fengwu_v2.onnx, fine-tuned with analysis data up to 2021): [OneDrive](https://pjlab-my.sharepoint.cn/:u:/g/personal/chenkang_pjlab_org_cn/EZkFM7nQcEtBve6MsqlWaeIB_lmpa__hX0I8QYOPzf-X6A)
35 | 
36 | 
37 | ## Data Format
38 | 
39 | The model takes two frames of atmospheric data, six hours apart, as input. input1.npy holds the atmospheric state at the first time step, while input2.npy holds the atmospheric state six hours later. For example, if input1.npy represents the atmospheric state at 6:00 AM on January 1, 2018, then input2.npy represents the atmospheric state at 12:00 PM on the same day. The first predicted frame then corresponds to the atmospheric state at 6:00 PM on January 1, 2018, and the second to the atmospheric state at 12:00 AM on January 2, 2018.
40 | 
41 | The data is organized as follows: each input array has a shape of 69x721x1440, where 69 is the number of atmospheric features. The latitude range is [90N, 90S] and the longitude range is [0, 360]. The first four variables are surface variables in the order ['u10', 'v10', 't2m', 'msl'], followed by non-surface variables in the order ['z', 'q', 'u', 'v', 't']. Each non-surface variable has 13 pressure levels (hPa), ordered as [50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000].
Therefore, the order of the 69 variables is [u10, v10, t2m, msl, z50, z100, ..., z1000, q50, q100, ..., q1000, u50, u100, ..., u1000, v50, v100, ..., v1000, t50, t100, ..., t1000].
42 | 
43 | Sample input data can be downloaded here: https://drive.google.com/drive/folders/11i_l-mEQ7K5OcfbZd9jeBpfr_BGen9M0?usp=drive_link
44 | 
45 | 
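As a sanity check on this channel layout, the index of any variable can be computed from the ordering above. The helper below is an illustrative sketch, not part of the repository; the name `channel_index` is our own.

```python
# Illustrative sketch (not part of the repository): map a variable name and
# pressure level to its channel index in the 69-channel array described above.
SURFACE_VARS = ["u10", "v10", "t2m", "msl"]
PRESSURE_VARS = ["z", "q", "u", "v", "t"]
LEVELS = [50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000]

def channel_index(var, level=None):
    """Channel index of `var` (at `level` hPa for non-surface variables)."""
    if var in SURFACE_VARS:
        return SURFACE_VARS.index(var)
    return len(SURFACE_VARS) + PRESSURE_VARS.index(var) * len(LEVELS) + LEVELS.index(level)

print(channel_index("z", 50))    # 4  (first channel after the four surface variables)
print(channel_index("t", 1000))  # 68 (last of the 69 channels)
```

For example, `prediction[channel_index("t2m")]` would select the 2-metre temperature field from a denormalized 69x721x1440 output frame.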

46 | ⚠️ ATTENTION ⚠️

47 | Use fengwu_v1 for ERA5 evaluation.
48 | The fengwu_v1.onnx model is intended only for evaluation on the ERA5 dataset.
49 | Use fengwu_v2 for Operational Analysis data evaluation.
50 | The fengwu_v2.onnx model is intended only for real-time forecast evaluation on Operational Analysis data.
51 | 

52 | --------------------------------------------------------------------------------