Research on satellite-based surface solar irradiance forecasting
For this challenge, the submitted files can be heavy (more than 100 MB). Processing your submission might take a few minutes: be patient!
Started on Dec. 10, 2021
In the domains of solar energy and energy meteorology, there is a need for accurate intraday (hereinafter called short-term) solar forecasting. Indeed, short-term forecasts allow a better integration of photovoltaic (PV) systems by anticipating the variability of solar radiation in space and time. This is particularly important in electric systems with a high penetration of solar energy, where dispatching generation units to match electricity production and consumption at every instant is particularly challenging. This need holds not only for large-scale grid integration of PV but also in the special case of off-grid electricity supply systems.
Geostationary satellites, notably thanks to algorithms such as Heliosat, are a source of spatially and temporally resolved estimates of surface solar irradiance (SSI), the "fuel" of PV systems (unit: W/m²). In the framework of the Copernicus Atmosphere Monitoring Service (CAMS), the multispectral images acquired by Meteosat Second Generation (MSG) at longitude 0° are used to provide, on a near-real-time basis, every 15 min, images of SSI and of SSI under clear-sky conditions at 3 km resolution. These services, respectively CAMS Rad and CAMS McClear, are operated and maintained by Transvalor Innovation SoDA (www.soda-pro.com), in collaboration with DLR, the German Aerospace Center. This source of time series of SSI images is notably used to provide short-term (up to 2 hours ahead) solar forecasts. The state of the art of such satellite-based short-term solar forecasting relies on cloud motion vectors (CMV) estimated with optical flow or block-matching techniques.
The aim of this challenge is to propose machine learning and deep learning approaches applied to sequences of images in order to provide better short-term forecasts of future images of the SSI on a horizontal plane, noted GHI (Global Horizontal Irradiance), for time horizons ranging from 15 minutes to 1 hour, with a time resolution of 15 min and a spatial resolution of 3 km.
More precisely, we are interested in a square region of interest (RoI) of size 51 pixels x 51 pixels (approx. 150 km). Given an assumption on the maximum cloud speed, and considering solar forecasting up to 1 hour ahead, the observation region (OR) encompassing the RoI has a size of 81 pixels x 81 pixels (approx. 240 km).
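For reference, the pixel-to-distance arithmetic behind these sizes (at 3 km per pixel) is:

$51 \times 3\,\mathrm{km} = 153\,\mathrm{km} \approx 150\,\mathrm{km}$, $\qquad 81 \times 3\,\mathrm{km} = 243\,\mathrm{km} \approx 240\,\mathrm{km}$,

so the OR extends the RoI by $(81 - 51)/2 = 15$ pixels $= 45\,\mathrm{km}$ on each side, which is the buffer available for cloud advection over the 1-hour forecast horizon.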
At a given time $t$, between one hour after sunrise and one hour before sunset, considering the sequence of the 4 previous images on the OR every 15 min, the solar forecasting aims at predicting the $GHI$ images on the RoI for the next times ahead, ranging from the next 15 min up to the next hour with a time step of 15 min. This forecast of $GHI$ for the location $(x, y)$, made at the time $t$ for the future time $t + \Delta t$ (with $\Delta t \in \{15, 30, 45, 60\}$ min), is noted $\widehat{GHI}(x, y, t + \Delta t \mid t)$.
The learning phase is done on one year of data and the test phase is done on a separate year.
In this challenge, we will only consider the cloud effects on $GHI$, assuming that the concomitant and collocated $GHI$ under clear-sky conditions (i.e. with no clouds) is perfectly known and noted $GHI_{cls}$.
Contextual information of interest is the corresponding solar zenith angle (SZA) and solar azimuth angle (SAA).
Do not hesitate to refer to the full Copernicus documentation in the supplementary files, as several technical aspects of the challenge are further explained and detailed.
The training set contains 1845 samples, and the test set contains 1841 samples. Each sample represents a time $t$ at which we consider the 4 previous images and wish to predict the 4 next images.
Practically, the input is encoded in the numpy .npz format and consists of:
datetime: the time $t$ at which we consider the 4 previous images on the OR every 15 min. This vector of length $N$ is of datetime type (YYYYMMDDHHMM).
GHI: a matrix of size ($N$, 81, 81, 4) with the sequence of the 4 previous 15-min $GHI$ images (each of size (81, 81)) on the OR for the times $t-45$ min, $t-30$ min, $t-15$ min, and $t$.
CLS: a matrix of size ($N$, 81, 81, 8) with the sequence of the 4 previous and 4 next modelled 15-min clear-sky (i.e. with no clouds) images (each of size (81, 81)) on the OR, noted $GHI_{cls}$, for the times $t-45$ min, $t-30$ min, $t-15$ min, $t$, $t+15$ min, $t+30$ min, $t+45$ min, $t+60$ min.
SZA: a matrix of size ($N$, 81, 81, 8) with the sequence of the 4 previous and 4 next modelled 15-min SZA images (each of size (81, 81)) on the OR for the times $t-45$ min, $t-30$ min, $t-15$ min, $t$, $t+15$ min, $t+30$ min, $t+45$ min, $t+60$ min.
SAA: a matrix of size ($N$, 81, 81, 8) with the sequence of the 4 previous and 4 next modelled 15-min SAA images (each of size (81, 81)) on the OR for the times $t-45$ min, $t-30$ min, $t-15$ min, $t$, $t+15$ min, $t+30$ min, $t+45$ min, $t+60$ min.


Note that $N$ = 1845 for the training set and $N$ = 1841 for the testing set.


To load and read the contents of a .npz file, one can use the following:
import numpy as np

# Load the .npz file
X = np.load('filename.npz', allow_pickle=True)
# Display the list of arrays stored in the .npz file
X.files
# Access the contents of the .npz file
date = X['datetime']
GHI = X['GHI']
CLS = X['CLS']
SZA = X['SZA']
SAA = X['SAA']  # solar azimuth angle (this key may appear as 'SSA' in the provided files)
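With these arrays loaded, a common preprocessing step in satellite-based solar forecasting (not required by the challenge) is to normalise $GHI$ by the modelled clear-sky value to obtain a clear-sky index, which removes most of the deterministic diurnal cycle. A minimal sketch, assuming the GHI and CLS arrays from the snippet above and an arbitrary small epsilon to avoid division by zero:

import numpy as np

# Clear-sky index k = GHI / GHI_cls on the 4 past images of the OR.
# CLS[..., :4] are the clear-sky images for t-45, t-30, t-15 and t,
# aligned with the 4 GHI images; CLS[..., 4:] are the 4 future steps.
eps = 1e-6  # guards against near-zero clear-sky values around sunrise/sunset
k_past = GHI / (CLS[..., :4] + eps)

# A model trained on k_past can convert its prediction k_hat back to
# irradiance with the (perfectly known) future clear-sky images:
# GHI_hat = k_hat * CLS[:, 15:66, 15:66, 4:]  # restricted to the RoI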
The output vector $y$ represents the sequence of the 4 next 15-min $GHI$ images on the RoI, corresponding to a matrix of size ($N$, 51, 51, 4), for the 4 future times $t+15$ min, $t+30$ min, $t+45$ min and $t+60$ min.
In this challenge, we will be providing the raw 2D format of the output vector $y$, which is a dataframe of size ($N$, 4x51x51 + 1) = ($N$, 10405), where the first column of the dataframe is id_sequence (the ids of the considered time sequences).
In order to transform the raw 2D output into a 4D matrix format (which will be useful for displaying the various images of the output vector $y$), it is necessary to:
First, remove the id_sequence column.
Second, use the following transformation:
y_4D = np.transpose(np.reshape(np.array(y_raw),(-1,4,51,51)), (0, 1, 3, 2))
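For instance, assuming the raw output is distributed as a CSV file (the file name below is hypothetical), the full round trip from file to 4D array could look like this:

import numpy as np
import pandas as pd

# Hypothetical file name: use the output file actually provided with the challenge.
y_df = pd.read_csv('y_train.csv')

# First, remove the id_sequence column (kept aside for the submission later).
id_sequence = y_df['id_sequence']
y_raw = y_df.drop(columns=['id_sequence'])

# Second, reshape the (N, 10404) table and swap the last two axes as prescribed above.
y_4D = np.transpose(np.reshape(np.array(y_raw), (-1, 4, 51, 51)), (0, 1, 3, 2))
print(y_4D.shape)  # one 51x51 image per sample and per forecast horizon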
In order to transform the 4D output into the 2D raw format (which will be compulsory when submitting your model predictions), it is necessary to:
First, use the following transformation:
y_2D = np.transpose(y_4D, (0,1,3,2)).reshape(-1, 10404)
Second, transform the array to a dataframe.
Third, add an index column id_sequence.
These transformations are already implemented in the transform_output_format.py (cf. supplementary files).
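As an illustration only (the authoritative implementation is transform_output_format.py), here is a sketch of these three steps, reusing the y_4D array and id_sequence column from the previous snippet:

import numpy as np
import pandas as pd

# First: back to the flat (N, 10404) layout expected for submissions.
y_2D = np.transpose(y_4D, (0, 1, 3, 2)).reshape(-1, 10404)

# Second: wrap the array in a dataframe.
y_df = pd.DataFrame(y_2D)

# Third: add the id_sequence column in front (ids of the forecast sequences).
y_df.insert(0, 'id_sequence', np.asarray(id_sequence))

# y_df now has shape (N, 10405), matching the raw 2D output format described above.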
The OR and the RoI are concentric; with Python-like indexing:
RoI = OR[15:66,15:66]
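Applied to the input arrays, this slicing crops any OR field to the concentric RoI; a minimal sketch:

# Crop OR fields of shape (N, 81, 81, T) to the RoI, giving (N, 51, 51, T).
GHI_roi = GHI[:, 15:66, 15:66, :]
CLS_roi = CLS[:, 15:66, 15:66, :]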
Two simple forecast methods will be provided for the benchmark:
The persistence forecast: this method is used as a baseline to compute the skill score (SC); a minimal sketch of it is given after this list.
The CMV forecast, which is based on a state-of-the-art optical flow and CMV persistence.
We will be providing the persistence forecasting benchmark for the test set, while the CMV forecasting benchmark (for the test set as well) will be added as supplementary data.
The candidate is free to choose either of these forecasting methods to benchmark their model.
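For illustration, here is a minimal sketch of a clear-sky-scaled persistence baseline and of one common definition of the skill score relative to persistence; the official benchmark and metric may differ in their exact formulation:

import numpy as np

def persistence_forecast(GHI, CLS, eps=1e-6):
    # Clear-sky-scaled persistence: keep the clear-sky index observed at time t
    # constant over the 4 future time steps, then rescale by the known future
    # clear-sky images. GHI: (N, 81, 81, 4), CLS: (N, 81, 81, 8), both on the OR.
    k_t = GHI[..., -1:] / (CLS[..., 3:4] + eps)   # clear-sky index at time t
    ghi_hat_or = k_t * CLS[..., 4:]               # scaled future clear-sky images
    return ghi_hat_or[:, 15:66, 15:66, :]         # crop the OR to the RoI

def skill_score(y_true, y_pred, y_pers):
    # One common skill-score definition: 1 - RMSE(model) / RMSE(persistence).
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    return 1.0 - rmse(y_true, y_pred) / rmse(y_true, y_pers)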
Data files are accessible once you are logged in and registered for the challenge.