1.1. Related work
In recent years, several studies have compared the performance of
methods to forecast PV output using statistical and Artificial Neural
Network (ANN) predictors. Overall these studies show that ANN based
methods produce a better forecast than conventional mathematical or
regression predictors (Almonacid et al., 2014; Oudjana and Mahamed,
2013), largely due to their capability to deal with uncertain PV output
and solar irradiance by iterative learning processes and dynamic be-
havior. In a comprehensive study (Pedro and Coimbra, 2012), Pedro
and Coimbra used data from a 1 MW PV plant in California to compare
the PV output forecast performance of five predictors: (1) the k-Nearest-
Neighbor, (2) Persistent predictor, (3) the Autoregressive Integrated
Moving Average (ARIMA), (4) ANNs, and (5) a hybrid Genetic algorithm
based ANN predictor. They found that the ANN based predictors
produced the lowest forecast error, with an RMSE of 11.42 for the 1-h-ahead forecast.
Further research is needed, however, to determine if forecast accuracy
can be enhanced further by optimizing the ANN learning and going
beyond the use of single ANN networks.
One of the key advantages of ANN based predictors of PV output is
their ability to handle dynamic, nonlinear, and abrupt meteorological
variations. In Chen et al. (2011), the authors designed a radial basis
function network (RBFN) based predictor for 24-h-ahead PV output forecasts,
using self-organizing map techniques to classify input variables such as air
temperature, wind speed, humidity, and average solar irradiance.
In Tymvios et al. (2005), seven
ANN predictors were designed and trained with measured sunshine
duration, theoretical sunshine duration, month, and daily maximum
temperature at a site in Cyprus. These experiments demonstrated that
the best forecast results were achieved when all of the above-mentioned
variables were treated as inputs of the predictor. In another research
study, feedforward neural network (FNN) and generalized regression
neural network (GRNN) based predictors were used to forecast the PV
output. Minimum, maximum and mean temperature along with solar
irradiance were used as forecast predictor inputs. The PV output fore-
cast results demonstrated that the FNN predictor outperformed the
GRNN. In Izgi et al. (2012), the authors designed a methodology to enhance
the forecast accuracy of a 750 kW PV plant by improving the training
capability of the ANN. The forecast predictor inputs were global and diffuse
solar irradiance together with cell and ambient temperature.
Several artificial intelligence forecast predictors have
been designed to forecast not only PV output but also solar irradiance,
load demand, wind power and electricity price (Yang et al., 2014;
Liu et al., 2014; Chitsaz et al., 2015; Anbazhagan and Kumarappan,
2014). In addition to applications in conventional time series analysis,
NNs are of increasing research interest due to numerous advances in
predictor performance and suitability for a diverse range of predictions,
optimization, and classification problems.
The output performance of NNs is greatly affected by both the
learning phase of the network and the predictor inputs. The performance
of PV output power forecast predictors also varies with changes in the
evaluation metrics, as well as the location and quality of the data used
for training. In addition, a poor learning phase in the NN development
means that it is less able to generalize (Gao et al., 2005). In summary,
these findings support the use of an NN ensemble approach, since
the limitations of a standalone NN predictor mean it may not outperform
other forecast predictors across the full range of forecast applications
(Burnham and Anderson, 2002).
1.2. Neural network ensemble
An NN ensemble method uses the output of multiple NN predictors
to generate a more accurate output rather than relying on a single NN
predictor. Specifically, within the ensemble framework, multiple NNs
that represent different predictors are created and then trained in-
dividually, with their outputs combined to generate a final output for
the NN ensemble. A number of studies have applied NN ensemble
techniques successfully in various applications (Gao et al., 2005;
Burnham and Anderson, 2002; Li et al., 2011), including improved
wind speed forecasts relative to conventional methods
(Li et al., 2011). In our previous work (Raza et al., 2017), an ensemble
forecast framework was designed using backpropagation neural net-
work (BPNN), Elman neural network (EN), Autoregressive Integrated
Moving Average (ARIMA), feed forward neural network (FNN), radial
basis function (RBF) and their wavelet transform (WT) models. Using
the proposed load demand forecast framework, day-ahead case studies
across different seasons and a month-ahead case study were designed.
The results outperformed the benchmark models. Given this success,
the approach was extended to solar output power forecasting, and
different architectures and predictors were tested. After several
experiments and comprehensive analysis, a novel ensemble forecast is
proposed based on FNN, ELM, and NewCF networks. In addition, these
neural predictors are trained with three learning techniques, namely
the Levenberg–Marquardt (LM), Scaled conjugate gradient back-
propagation (SCG), and backpropagation (BP) algorithm to achieve
diverse and accurate output from each predictor in the ensemble
framework. However, it was observed during experimentation that the
ensemble forecast results differed between load demand and solar PV
output power forecasting. This suggests that the NN ensemble framework
may be a useful approach for PV forecasting.

Nomenclature

FNN          Feedforward Neural Network
ELM          Elman Backpropagation Network
NewCF        Cascade-Forward Backpropagation
NN           Neural Network
BMA          Bayesian Model Averaging
LM           Levenberg–Marquardt
SCG          Scaled Conjugate Gradient Backpropagation
BP           Backpropagation
x_max        maximum value
x_min        minimum value
x            transformed series
MAPE         mean absolute percentage error
ψ(t)         mother wavelet signal
ϕ(t)         scaling function
j            scaling integer variable
k            translating integer variable
s(t)         signal for wavelet transformation
φ_{j,k}      scaling function
ψ_{j,k}      wavelet function
d_j(k)       detail coefficient
C_{J0}       approximation coefficient
J_0          pre-scaling coefficient
A            approximation coefficient
D            detail coefficients
NN_Struc     neural network structure
n            structure of NN
NN_total     total number of NNs
C            constant to vary the number of neurons in the hidden layers
b            coefficients/weights of BMA model
y            PV output
P(M_j|D)     posterior distributions
P(y|M_j, D)  weighted posterior probabilities
D            training data
σ_j²         variance
Y_BMA        output of BMA
R²           coefficient of determination

M.Q. Raza et al., Solar Energy 166 (2018) 226–241
The contributions of the proposed novel ensemble approach are
highlighted as follows:
(1) Development and integration of the FNN, ELM, and NewCF predictors,
along with their wavelet transformed (WT) predictors.
(2) Each of the predictors was trained with different training algorithms,
namely the Levenberg–Marquardt (LM), Scaled conjugate gradient
backpropagation (SCG), and backpropagation (BP) algorithms, to
improve the predictors' performance.
(3) Each predictor's output is aggregated using Bayesian model averaging
for efficient selection of, and contribution from, each of them in the
final PV output forecast results.
(4) The WT historical solar output data, combined with meteorological
variables such as solar irradiance (SI), wind speed (Ws), tempera-
ture (T), and humidity (H), are used to train the multivariate neural
predictors.
(5) Improvement in the average forecast accuracy of the proposed ensemble
framework of up to 27% for seasonal one-day-ahead forecasts in
comparison with the benchmark predictor.
The rest of the paper is organised as follows: Section 2 describes data
pre-processing, normalization, neural predictors, characteristics of the PV
output power profile, and the wavelet transform. Section 3 outlines
the selection of inputs of the multivariate predictors, methodology of
proposed framework, Bayesian model averaging, and NN parameters.
Results and discussions are presented in Section 4. The major conclu-
sions and contributions of the paper are highlighted in Section 5.
2. Ensemble predictors and data preparation
This section describes the data pre-processing, normalization, and
architecture of the neural predictors, and the factors affecting PV output
power.
2.1. Data pre-processing and splitting
The prediction accuracy of the NN ensemble framework is depen-
dent on the training quality of each individual predictor. Highly cor-
related inputs in the form of smooth time series data and pre-processing
techniques that increase the quality of the data will improve NN
training and enhance the forecast accuracy. This should also result in
improved generalization and a higher convergence rate for the NNs.
Therefore, wavelet transform (WT) technique was applied on historical
input data. Post-processing also needs to be applied to generate the
output of the network.
2.2. Data normalization
A large variation in the historical PV output data is observed in
response to variations in the meteorological factors. This can adversely
impact NN training; the performance of the forecast framework can be
enhanced by applying data normalization as given in Eq. (1) (Jin et al.,
2014).
x = (x_i − x_min) / (x_max − x_min)    (1)

where x_i is the original PV output data at a specific interval, x_max and
x_min are the maximum and minimum values of the PV output data
respectively, and x is the transformed series of PV output data. The mean
absolute percentage error (MAPE) is used as the objective function in the
training process of the neural network. The objective of the optimization is
to minimize the error to within a certain threshold value in a fixed number
of iterations.
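As a concrete illustration, Eq. (1) and the MAPE objective can be sketched in a few lines (a minimal sketch; the function names are ours, not from the paper):

```python
import numpy as np

def normalize(x):
    """Min-max normalization per Eq. (1): maps PV output to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def mape(actual, forecast):
    """Mean absolute percentage error, the training objective."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```

Note that MAPE is undefined at intervals where the actual output is zero, which is one reason the night-time samples are removed from the data set.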
2.3. Architecture of neural network
A NN consists of different layers, namely an input, a hidden, and an output
layer, with each layer made up of one or more interconnected small
processing units called 'neurons'. This network of interconnected neurons
explores multiple competing hypotheses via massively parallel processing.
ANN based frameworks have been successfully implemented in
several fields of daily life, such as bio-medical applications, the aero-
space, automotive, and electronics industries, and in financial services
(Kalogirou, 2001; kannan et al., 1831; Alexander et al., 2007; Corazza
et al., 2014). Based on this, three popular NN architectures are included
in the ensemble framework for PV forecasting, specifically: FNN, ELM
and NewCF.
In an FNN, the input information of the network proceeds in the forward
direction only. The input data of the NN is applied to the input layer
and passed to the output layer through a hidden layer, as shown in
Fig. 1. There is no backward path for the information, as there is in the
ELM. In this study, the time series meteorological and PV output data
are used as inputs for the network at time t, while the predicted PV
output is received at the output layer.
The ELM is considered a type of recurrent neural network, where
connections between units form a directed cycle. In ELM networks, a
special copy of the hidden layer is connected through linking paths as
depicted in Fig. 2. Therefore, the training process depends on three
processes, namely previous state, current inputs, and network output.
The standard backpropagation algorithm can also be used for training
of the neural network as the special layer is treated as another set of
inputs (Khatib et al., 2012; Singh et al., 2015).
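The recurrent update described above can be sketched as a single step (an illustrative stand-in; the weight shapes and names are our assumptions, not the paper's implementation):

```python
import numpy as np

def elman_step(x, h_prev, Wx, Wh, b):
    """One Elman recurrent update: the context layer holds a copy of the
    previous hidden state h_prev, which feeds back alongside input x."""
    return np.tanh(Wx @ x + Wh @ h_prev + b)
```

Because h_prev is treated as just another set of inputs, standard backpropagation can be applied to Wx, Wh and b exactly as for a feedforward layer.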
The NewCF network is similar to the FNN, but connection weights
are included from each layer to every successive layer,
as illustrated in Fig. 3. A recent study has found that the NewCF
Fig. 1. Feed forward neural network (FNN) architecture.
Fig. 2. Elman neural network architecture (Zhou et al., 2013).
outperforms the FNN in forecast accuracy in some case studies (Filik and
Kurban, 2007). Specifications for the other learning parameters of the FNN,
ELM, and NewCF networks are given in Table 1.
In this study, the historical PV output and meteorological data are
applied as forecast framework inputs. The learning algorithm tries to
reduce the training error between network output and target values
through iteration.
Fig. 4 highlights the steps involved in the design of the PV output
forecast framework. Input selection is one of the most critical parts of the
design process, as the forecast accuracy varies with changes in the
framework inputs. This is due to the large dependence of PV output on
different meteorological variables. In this research, correlation analysis and
partial autocorrelation techniques are used to select the influential
variables and historical lag values. Secondly, a pre-processing technique
is applied to the historical PV output data, as they contain several
missing data points, sharp peaks, and variations.
In this study, the WT was applied to the historical PV output data for
data smoothing and to improve forecast accuracy. The WT-processed
historical data are divided into two groups, namely training and testing
data. The training data are used for learning of the network to predict
future values, while the testing data are used to analyze the performance
of the forecast framework. At the next stage, the neural network (NN)
ensemble architecture is initialized for the PV output forecast problem.
The networks are trained using the training and forecast framework input
data. Finally, the outputs of the best NNs are combined using an
aggregation technique. The forecast accuracy of the framework is then
compared with other implemented prediction frameworks.
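The data-splitting step can be sketched as a chronological split (the 80/20 fraction below is our assumption; the paper does not state the ratio here):

```python
import numpy as np

def split_data(series, train_frac=0.8):
    """Chronological split into training and testing sets; time order is
    preserved so the test period follows the training period."""
    n = int(len(series) * train_frac)
    return series[:n], series[n:]
```

A chronological split (rather than a random shuffle) matters for time series: shuffling would leak future samples into the training set and overstate forecast accuracy.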
2.4. Solar data and variables affecting PV output

The University of Queensland (UQ) has a total installed PV capacity of
5.919 MWp across various campus sites. This study uses PV data from two
fixed-tilt array sites (see technical specifications in Table 2): Site 1, the UQ
Centre rooftop array with 433 kW installed capacity, shown in Fig. 5,
and Site 2 at the UQ Gatton campus with a capacity of 2077.65 kW,
shown in Fig. 6. The UQ solar online data management system records the
real-time PV output power.
The nearby weather station data includes measurements of humidity,
air temperature, and wind speed and direction at 1-min intervals
(or 1440 data points per day). Since the solar data is only available
during the hours of sunlight from 5 AM to 7 PM (or 841 data points), the
Fig. 3. Cascade-forward back-propagation PV output forecast framework (Filik and Kurban, 2007).
Table 1
Neural network architecture parameters.

Parameter                              FNN               ELM               NewCF
Layers (input, hidden and output)      3                 3                 3
Epochs                                 1000              800               700
MSE target of NN                       1e−3              1e−6              1e−6
Activation function of hidden layer    Tangent-sigmoid   Tangent-sigmoid   Tangent-sigmoid
Activation function of output layer    Linear            Linear            Linear
Number of input neurons                8                 8                 8
Number of output neurons               1                 1                 1
Performance measure                    Forecast error    Forecast error    Forecast error
Fig. 4. Neural network based forecast framework steps: selection of forecast
model inputs; pre-processing of input data for forecast models; splitting data
into testing and training sets; initializing and training NN ensemble models;
integration of NN models; calculating and comparing forecast error of
different models.
values for both the meteorological variables and PV output were deleted
outside of these times. The PV output power profile is depicted in Fig. 7.
It follows a clear seasonal variation for the Southern Hemisphere, with a
brief central dip during the mid-year months and maximum power output
occurring in January and December (at either end of the graph). As
expected, the variation in PV output power largely reflects variations in
the solar radiation incident on the PV panels.
The total daily solar radiation reaching the area around Brisbane
varies from 22–27 MJ/m² in December to 4–9 MJ/m² in June
(Meteorology, 2009). The PV output power follows a similar pattern to
the solar radiation, as discussed later in this section. Therefore,
the PV output power is sensitive to seasonal variations. In Queensland,
the seasonal variations in solar radiation and electricity demand are also
correlated: solar power output generally increases during the summer
season along with electricity demand. It is observed that the PV output
power varies with changes in the meteorological variables. Therefore,
these meteorological variables are considered as inputs to the
multivariate ensemble forecast frameworks.
Other studies also indicate that temperature and wind speed have an
impact on PV output power (Oudjana and Mahamed, 2013). The
comparison of these two variables is shown in Fig. 8. The comparison
suggests that this positive correlation is only partially evident at certain
points in time and varies over time. Nevertheless, air temperature is
included as an input to the forecast framework for the prediction of PV
power output.
Wind speed is known to potentially impact the power output via
heat dissipation from PV cells, where high cell temperatures can
adversely impact conversion efficiency. Wind speed plays a role in this
heat dissipation: PV cell temperature is reduced, which improves the
conversion efficiency and hence the PV output power. However, as
depicted in Fig. 9, a relatively weak correlation is observed between wind
speed and PV output power at the UQ Centre, compared with that for air
temperature.
The correlations between PV output power and the different
meteorological variables are given in Table 3. Note that these correlation
values are calculated on the basis of the available data set and may vary
with changes in site, PV modules, and other factors.
2.5. Wavelet transform
Historical PV output data contains various oscillations, spikes, and
different types of non-stationary components due to sudden changes in
meteorological and exogenous variables. One method of smoothing the
historical PV output data is the WT, and this improves the training of
each of the NNs in the ensemble framework. The WT technique
decomposes the historical PV output data into a series of constitutive
components that demonstrate more stable behavior. The wavelet
transform is built on two basic functions, namely the mother wavelet
ψ(t) and the scaling function ϕ(t). The series of functions can be defined
as given in Eqs. (2) and (3) (Li et al., 2015):
φ_{j,k}(t) = 2^{j/2} φ(2^j t − k)    (2)

ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k)    (3)
where the scaling and translating integer variables are j and k,
respectively. The signal s(t) can be expressed using the scaling function
φ_{j,k} and the wavelet function ψ_{j,k} as given in Eq. (4):
S(t) = Σ_k C_{J0}(k) 2^{J0/2} φ(2^{J0} t − k) + Σ_k Σ_{j=J0}^{∞} d_j(k) 2^{j/2} ψ(2^j t − k)    (4)
where d_j(k) and C_{J0} represent the detail and approximation
coefficients, respectively. The pre-scaling coefficient is represented by J_0
in the above signal equation. The first term of the signal S(t) at the
predefined scale J_0 gives the low-resolution component, while the second
term elaborates the higher-resolution component.
Mallat's multiresolution analysis, or Mallat's algorithm, is a technique
used to implement the WT using decomposition and reconstruction
filters (Catalao et al., 2011). At the first stage, the original signal is
decomposed into different components using low- and high-pass filters,
as shown in Fig. 10. Findings from previous studies that have examined
the use of the WT (Li et al., 2015; Mandal et al., 2013) suggest that three
levels of WT decomposition give better forecast accuracy. Therefore,
three-level decomposition is applied to the historical PV output data in
this study, but not to the meteorological variables.
The detail or the high frequency coefficients D1, D2 and D3 are
Table 2
Technical specification of PV sites in the study.

Parameters              UQ Centre solar                     UQ Gatton campus
Capacity                433.44 kW                           2077.65 kW
Site longitude          153°00′54.8″E                       152°20′14.1″E
Site latitude           27°29′45″S                          27°33′41.5″S
Height above sea level  28 m                                88 m
Type of installation    Rooftop installation (elevated)     Ground mounted
Tracking system         No tracking system                  No tracking system
Orientation and tilt    3° and 110° (lower/south roof);     3° West of North
                        −3° and 20° (perimeter);
                        6° and 20° (core)
Module technology       Polycrystalline silicon             Cadmium telluride
Module size             1650 × 992 mm                       1200 × 600 mm
Number of modules       1806                                21,600 (3 × 7200)
Number of inverters     32 (31 × 12.5 and 1 × 5000)         3 × 720 kWp capped at 630 kWp output
Fig. 5. UQ centre PV array (Li and Shi, 2010).
Fig. 6. UQ Gatton campus fixed tilt array (Li and Shi, 2010).
obtained with high-pass (H.P.) filters. The approximate signal (low-
frequency component) A3 is obtained by down-sampling with low-pass
filters. The individual decomposed historical PV output data signals
(A3, D1, D2 and D3) are applied as forecast framework inputs along with
the meteorological data. The outputs of the individual forecast NN
ensembles are then recomposed to generate the final output: the
reconstruction process is applied to the approximation A3 and the detail
coefficients D1, D2 and D3 to generate the forecast results. In this
research, a Daubechies function of order four is used as the mother
wavelet, as described in Catalao et al. (2011).
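The multilevel filter-bank decomposition and reconstruction can be sketched as below. For brevity this sketch uses Haar filters rather than the Daubechies-4 wavelet the paper uses (in practice a library such as PyWavelets would be used, e.g. `pywt.wavedec(x, 'db4', level=3)`); the Mallat filter-bank structure is the same:

```python
import numpy as np

def haar_decompose(s, levels=3):
    """Mallat-style multilevel decomposition with Haar filters.
    Requires len(s) divisible by 2**levels.
    Returns (A_levels, [D1, D2, ..., D_levels])."""
    details = []
    a = np.asarray(s, dtype=float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_reconstruct(a, details):
    """Invert the decomposition by upsampling and filtering at each level."""
    for d in reversed(details):
        s = np.empty(2 * a.size)
        s[0::2] = (a + d) / np.sqrt(2)
        s[1::2] = (a - d) / np.sqrt(2)
        a = s
    return a
```

Because the transform is orthogonal, reconstructing from A3 and D1–D3 recovers the original series exactly; smoothing is obtained by training on the decomposed components rather than the raw signal.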
3. Methodology
This section describes the inputs to the forecast framework, the
proposed ensemble framework, and Bayesian model averaging.
3.1. Inputs of the forecast framework
Since the forecast accuracy of each ANN is affected by the selection
of both the number and type of input data used for training (Sfetsos and
Coonick, 2000), this research study uses a two-step selection process.
First the forecast framework is trained using all the potential input
Fig. 7. Solar output power profile of year 2014.
Fig. 8. The per unit curves of PV output power and air temperature.
M.Q. Raza et al. Solar Energy 166 (2018) 226–241
231
variables to achieve minimum error; second, the less influential input
variables were removed by replacing them with their mean values or zero.
Previous studies (Marquez and Coimbra, 2013) suggest that increasing
the number of meteorological variables (eight are used in Duan et al.
(2007)) used as data inputs for training ANNs can reduce the prediction
error. However, from a practical perspective, a large input set for an ANN
Fig. 9. The per unit curves of PV output power and wind speed.
Table 3
Correlation between PV output and meteorological variables.
Parameters Correlation with PV output (R value)
Solar irradiance 0.9558
PV module temperature 0.4945
Humidity 0.4235
Wind speed 0.3354
Fig. 10. Wavelet transformation process of signal.
Fig. 11. PV output forecast framework inputs.
M.Q. Raza et al. Solar Energy 166 (2018) 226–241
232
predictor also leads to increased computational complexity, larger
training data requirements, slow convergence, and increased
computational time. As a result, such forecast predictors are not suitable
for online applications where a forecast may be required within a short
timeframe. Hence the emphasis in this study is on selecting the most
influential input variables based on correlation analysis. The partial
autocorrelation function was also used to investigate lagged values as
part of the selection process. As a result, historical PV output, solar
irradiation, air temperature, wind speed, and humidity were selected as
the prediction framework inputs, as shown in Fig. 11.
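The correlation-based screening step can be sketched as follows (the 0.3 threshold is our illustrative choice, not a value from the paper):

```python
import numpy as np

def select_inputs(candidates, target, threshold=0.3):
    """Rank candidate meteorological series by |Pearson r| with the PV
    output and keep those above a threshold, most correlated first."""
    scores = {name: abs(np.corrcoef(series, target)[0, 1])
              for name, series in candidates.items()}
    selected = sorted((n for n, r in scores.items() if r >= threshold),
                      key=lambda n: -scores[n])
    return selected, scores
```

Against the correlations in Table 3, such a screen would retain solar irradiance, module temperature, humidity, and wind speed, with irradiance ranked first.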
3.2. Proposed neural network based ensemble framework
The structure of the NN ensemble, with the FNN, ELM, and NewCF
networks, used in this forecast framework is outlined in Fig. 12.
The different stages of the forecast framework are highlighted below.
1. The historical PV output data and weather variables are pre-pro-
cessed, then the neural network ensemble network is initialized.
2. The historical PV output data and meteorological variables are ap-
plied to train each of the NNs in the ensemble framework.
3. Since the output performance of the network may vary with the
combination of NN predictors and the number of NN structures
(Hassan et al., 2015), various numbers of NNs in each ensemble
were tested in order to analyze the performance of the forecast
framework. Seven structures of ANN predictors were initialized,
giving 105 neural predictors in the ensemble framework, with each
structure containing 15 NNs as given in Eq. (5).

NN_Struc = Σ_{n=1}^{7} {n_1, n_2, n_3, ..., n_15}    (5)
4. To achieve diverse forecast outputs, seven NN structures are created
with 9, 11, 15, 17 and 20 hidden layer neurons. The number of
neurons in each hidden layer is the same within each NN ensemble
but differs across NN ensembles. After that, each network M_{t,i}
(i = 1, 2, 3, ..., N_total) generates an output Z_{t,i} for a specific
forecast time horizon.
The neural network weight and bias matrices are initialized with
random values using the initialization function in Matlab (Simulation
Tool). The number of neurons in the input layer depends on the
network inputs; here the input layer contains five neurons, which pass
the input information to the hidden layer of the network.
5. An established feature of NNs is that the fitness of the network varies
with the number of hidden layer neurons, while the performance of
each NN predictor in an ensemble network varies due to the random
initialization of network parameters. This enables an ensemble
framework to explore the solution space for a diverse range of
solutions to the prediction problem. In this study, different numbers of
hidden neurons are selected for the different NN ensemble structures;
however, the number of hidden layer neurons is the same within each
NN structure, as described in Eq. (6).

A = [ (c_1 · n_1 · NN_total)/7, (c_2 · n_2 · NN_total)/7, ..., (c_7 · n_7 · NN_total)/7 ]    (6)

where n, NN_total and c represent the structure of the NN, the total
number of NNs, and a constant used to vary the number of neurons in
the hidden layers, respectively.
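Steps 3–5 can be sketched as an initialization routine (the hidden-layer sizes passed in below are illustrative placeholders; the paper names only five sizes for the seven structures, so the full list is not stated):

```python
import numpy as np

def init_ensemble(hidden_sizes, nets_per_structure=15,
                  n_inputs=5, n_outputs=1, seed=0):
    """Initialize the NN ensemble of Eq. (5): one structure per hidden-layer
    size, each holding nets_per_structure networks that share the layer
    size but receive independent random weights (the source of diversity)."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for h in hidden_sizes:
        structure = [{'W1': rng.normal(size=(h, n_inputs)), 'b1': np.zeros(h),
                      'W2': rng.normal(size=(n_outputs, h)),
                      'b2': np.zeros(n_outputs)}
                     for _ in range(nets_per_structure)]
        ensemble.append(structure)
    return ensemble
```

With seven structures of 15 networks each, this yields the 105 predictors of the ensemble framework.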
Fig. 12. Proposed neural network ensemble based PV output forecast framework.
6. A diverse output is achieved from each NN structure and combined
using an aggregation technique for better forecast results.
3.3. Bayesian model averaging
In this study, the output of each network ensemble is combined or
aggregated using a Bayesian model averaging (BMA) technique. This
statistical procedure, also known as the BMA combination algorithm, is
used to infer consensus in results of different NN predictors and com-
bine them. Findings from previous studies indicate that BMA is useful
for a range of applications and generates more adaptive and reliable
predictions than other aggregation techniques (Li and Shi, 2010;
Raftery et al., 2005; Duan et al., 2007; Wasserman, 2000). Recently,
BMA techniques have also been used successfully to aggregate the
output of neural network ensembles for a range of forecasting
applications (Magnus et al., 2010; Montgomery and Nyhan, 2010). The
BMA assigns a weight to each individual network based on its posterior
predictor probability, which reflects the network's performance: higher
weights are assigned to better performing forecast predictors (Li and Shi,
2010; Wasserman, 2000). Let b represent the coefficient/weight vector of
the BMA model, and let M_j (j = 1, 2, ..., J) denote the J models in the
model space M, each predicting the PV output y. F_j is the output of
forecast model j and D denotes the training data of each network
(Hoeting et al., 1999). The BMA predictive density is the average of the
densities p(y|M_j, D) of each model, weighted by their posterior model
probabilities p(M_j|D). The BMA probabilistic forecast is given in Eq. (7).
p(y|D) = Σ_{j=1}^{J} w_j · p(y|M_j, D)    (7)
The BMA forecast posterior mean and variance can be calculated
according to Eqs. (8) and (9).
E[y|D] = Σ_{j=1}^{J} p(M_j|D) · E[y|M_j, D] = Σ_{j=1}^{J} w_j f_j    (8)

Var[y|D] = Σ_{j=1}^{J} w_j (f_j − Σ_{i=1}^{J} w_i f_i)² + Σ_{j=1}^{J} w_j σ_j²    (9)
For training data D, the variance associated with model prediction f_j is
σ_j². The posterior probability of the jth model is w_j = p(f_j|D) for
steady observations. The BMA aggregation is formed from the weighted
average of each predictor's output with its corresponding posterior
predictor probability.
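Eqs. (8) and (9) can be sketched directly (a minimal illustration with scalar per-model forecasts; the function name is ours):

```python
import numpy as np

def bma_combine(forecasts, weights, variances):
    """Posterior mean and variance of the BMA combination, Eqs. (8)-(9):
    forecasts f_j, weights w_j (summing to 1), per-model variances sigma_j^2."""
    f = np.asarray(forecasts, dtype=float)
    w = np.asarray(weights, dtype=float)
    s2 = np.asarray(variances, dtype=float)
    mean = np.sum(w * f)                                 # Eq. (8)
    var = np.sum(w * (f - mean) ** 2) + np.sum(w * s2)   # Eq. (9)
    return mean, var
```

The two variance terms separate between-model spread (disagreement among the predictors) from within-model uncertainty, which is what makes BMA more adaptive than a simple average.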
The overall forecast output is a combination of the different predictors'
outputs, with the calculation of the posterior predictor probabilities being
a key part of the BMA combination process. The estimated weight
coefficients and errors are put into a matrix b, which is then multiplied
with the individual predictors' outputs. Y_output denotes the output of
each NN forecast model, and the BMA output is calculated as given in
Eq. (10):

Y_BMA = Y_output · b    (10)
Table 4
Forecast performance of FNN using MAPE for each learning method for Sites 1 and 2.
Forecast test day FNN_LM FNN_SCG FNN_BP WT_FNN_LM WT_FNN_SCG WT_FNN_BP Persistence
UQ St. Lucia Solar PV: Site 1
Day 1 4.87 8.46 13.88 4.17 7.44 12.07 12.95
Day 2 5.46 5.47 11.56 4.90 5.79 11.32 12.01
Day 3 5.98 7.75 12.88 5.40 6.97 12.41 11.54
Day 4 8.75 6.97 11.98 7.77 6.69 11.12 10.95
Day 5 7.07 7.85 12.45 6.31 7.41 11.86 11.13
Day 6 6.30 9.39 15.21 5.22 8.58 13.47 14.24
Day 7 6.83 6.51 12.69 6.01 6.74 12.05 12.76
Day 8 7.00 9.09 14.28 6.65 8.42 13.16 12.79
Day 9 9.84 8.23 13.30 8.48 7.95 12.38 11.80
Day 10 8.31 9.05 13.47 7.25 8.91 13.25 12.29
UQ Gatton Solar PV: Site 2
Day 1 8.90 9.87 15.87 7.63 9.42 15.07 13.85
Day 2 9.89 6.65 13.36 7.77 6.15 12.32 13.87
Day 3 5.46 8.86 12.46 5.40 8.17 11.41 11.26
Day 4 7.76 7.85 14.75 6.13 7.29 14.12 12.84
Day 5 5.15 8.87 16.25 5.31 8.22 15.86 13.24
Day 6 9.42 10.78 16.54 8.53 10.02 15.95 14.94
Day 7 10.67 7.47 14.28 8.81 6.88 13.30 14.46
Day 8 6.01 9.47 13.02 6.31 9.14 12.43 12.31
Day 9 8.75 8.78 15.68 6.99 8.23 14.83 13.76
Day 10 5.96 9.76 17.05 6.29 8.74 16.50 13.97
Table 5
MAPE comparison of ELM framework with different learning techniques for
Sites 1 and 2.

Forecast test day  ELM_LM  ELM_SCG  ELM_BP  WT_ELM_LM  WT_ELM_SCG  WT_ELM_BP  Persistence
UQ St. Lucia Solar PV: Site 1
Day 1 5.46 8.95 11.90 5.24 7.50 11.07 12.95
Day 2 5.65 5.98 11.85 5.36 5.95 11.59 12.01
Day 3 5.95 6.98 13.22 5.49 6.87 12.41 11.54
Day 4 8.48 7.65 12.25 8.42 6.82 11.93 10.95
Day 5 6.37 7.48 12.65 6.63 7.73 11.56 11.13
Day 6 4.71 7.80 11.30 3.34 6.24 9.29 12.71
Day 7 5.18 5.00 11.20 4.81 4.42 10.36 10.79
Day 8 5.94 6.58 12.25 4.57 6.54 11.83 10.99
Day 9 7.35 6.53 10.94 7.14 5.07 11.82 9.14
Day 10 5.36 5.79 12.45 4.94 6.58 10.43 10.37
UQ Gatton Solar PV: Site 2
Day 1 9.87 9.76 15.85 8.14 8.89 15.64 13.85
Day 2 7.01 8.70 13.75 6.23 8.11 13.09 13.87
Day 3 6.99 8.75 14.75 5.99 7.98 13.48 11.26
Day 4 8.65 7.68 16.70 7.13 6.78 15.98 12.84
Day 5 5.99 8.88 13.74 5.72 8.65 12.78 13.24
Day 6 8.95 9.46 14.94 7.57 7.94 14.35 13.72
Day 7 6.91 7.25 13.32 5.23 6.18 12.49 12.82
Day 8 6.03 8.24 12.94 4.39 6.40 12.17 10.17
Day 9 6.82 6.38 15.80 7.12 6.64 15.32 11.32
Day 10 4.65 8.01 12.12 4.41 7.20 11.39 11.68
M.Q. Raza et al. Solar Energy 166 (2018) 226–241
3.4. Neural network parameters
A large number of trials was needed to select optimal NN parameters
and achieve high forecast accuracy during the design and learning
process. In this research, the proposed NN ensemble based forecast
framework contains three layers: input, hidden, and output. Tangent
sigmoid and linear activation functions are used in the hidden and
output layers, respectively, to support NN learning of the nonlinear
relationship between PV output and weather variables. Other parameters,
such as the learning rate, momentum rate, and number of hidden layer
neurons, are set to default values at the time of NN initialization,
with MAPE used to evaluate performance during the NN learning process.
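The three-layer structure described above, with a tangent sigmoid hidden layer and a linear output layer, can be sketched as follows; the layer sizes and random initial weights are illustrative assumptions, not the values used in this research:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (assumptions, not the paper's values).
n_in, n_hidden, n_out = 5, 10, 1

W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_out, n_hidden)) * 0.1  # hidden -> output weights
b2 = np.zeros(n_out)

def forward(x):
    """Tangent sigmoid activation in the hidden layer, linear output layer."""
    h = np.tanh(W1 @ x + b1)   # nonlinear hidden layer
    return W2 @ h + b2         # linear output layer

y = forward(rng.standard_normal(n_in))  # one value per output neuron
```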
Each of the three types of neural network (FNN, ELM, and NewCF) is
trained with three learning techniques, namely the Levenberg–Marquardt
(LM), scaled conjugate gradient backpropagation (SCG), and
backpropagation (BP) algorithms. The objective of each learning
technique is to reduce the learning error of the network (the difference
between output and target values, evaluated by MAPE) down to a certain
threshold level. To this end, the learning technique adjusts the weight
and bias values of the NN to minimise the MAPE over the training data in
an iterative process.
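As a rough illustration of the objective evaluated during learning, a MAPE function of the kind minimised by the LM, SCG, and BP weight updates might look like this (the sample target and output values are hypothetical):

```python
import numpy as np

def mape(target, output):
    """Mean absolute percentage error (%) between target and NN output."""
    target = np.asarray(target, dtype=float)
    output = np.asarray(output, dtype=float)
    return 100.0 * np.mean(np.abs((target - output) / target))

# Training would iterate weight/bias updates (LM, SCG, or BP) until this
# error drops below a chosen threshold; values below are hypothetical.
error = mape([100.0, 200.0, 300.0], [90.0, 210.0, 300.0])
```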
To evaluate the prediction performance of each NN selected after
training, five days from different seasons are selected at each solar PV
site to forecast the 24 h ahead (one day) PV power output for Sites 1 and
Table 6
MAPE comparison of NewCF with different learning techniques for Sites 1 and 2.
Forecast test day NewCF_LM NewCF_SCG NewCF_BP WT_NewCF_LM WT_NewCF_SCG WT_NewCF_BP Persistence
UQ St. Lucia Solar PV: Site 1
Day 1 5.85 7.85 11.29 5.67 6.29 10.21 12.95
Day 2 3.02 5.45 11.87 2.95 4.12 11.35 12.01
Day 3 4.45 3.87 11.35 3.47 2.65 10.62 11.54
Day 4 4.75 3.57 10.14 3.07 3.73 9.75 10.95
Day 5 3.98 5.75 11.66 2.56 4.43 10.51 11.13
Day 6 5.24 7.33 10.49 4.96 5.48 9.57 12.31
Day 7 2.32 4.94 11.15 2.11 3.51 10.58 11.43
Day 8 3.61 3.24 10.72 2.74 1.85 9.80 10.87
Day 9 4.23 2.85 9.30 2.18 3.11 8.86 10.41
Day 10 3.42 5.13 10.92 1.94 3.58 9.93 10.50
UQ Gatton Solar PV: Site 2
Day 1 7.57 6.87 12.85 6.56 6.91 12.13 13.85
Day 2 4.37 5.14 10.37 3.65 4.82 9.55 13.87
Day 3 5.45 3.37 11.48 3.32 3.16 11.87 11.26
Day 4 5.37 4.46 10.99 4.65 3.91 10.35 12.84
Day 5 4.45 4.92 11.96 3.88 4.37 11.75 13.24
Day 6 6.75 6.03 12.13 5.87 6.29 11.34 13.11
Day 7 3.47 4.38 9.65 3.00 4.07 8.89 13.03
Day 8 4.79 2.61 10.76 2.58 2.47 11.26 10.40
Day 9 4.70 3.66 10.24 3.88 3.11 9.72 12.15
Day 10 3.71 4.23 11.07 3.26 3.48 10.94 12.63
Fig. 13. Forecast error comparison for clear day of Site 2.
2. These performance tests are benchmarked against a persistence model
that predicts the PV power output of the test day as simply a repeat of
the output values recorded on the previous day (Chu et al., 2015).
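A minimal sketch of this persistence benchmark, assuming a hypothetical sampled PV output profile for the previous day:

```python
import numpy as np

def persistence_forecast(previous_day):
    """Day-ahead persistence benchmark: the forecast simply repeats the
    PV output profile measured on the previous day."""
    return np.asarray(previous_day, dtype=float).copy()

# Hypothetical PV output samples (kW) recorded on the previous day.
yesterday = np.array([0.0, 1.2, 3.5, 5.1, 3.4, 0.8])
forecast = persistence_forecast(yesterday)
```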
4. Results and discussion
This section describes PV output power forecast case studies of Site
1 and Site 2.
4.1. PV output forecast case studies
In this study, data from 2014 and 2015 are used to train, validate, and
test the forecast model. The data are split into 70%, 20%, and 10%
subsets used for training, validation, and testing of each neural
ensemble forecast framework, respectively. Table 4 provides the
comparative 24 h ahead performance
Fig. 14. Forecast error comparison for partially cloudy day of Site 2.
Fig. 15. Forecast error comparison for cloudy day of Site 2.
results of the FNN based PV output forecasts for the two sites and ten
test days. It also evaluates the use of the LM, SCG, and BP learning
techniques and of WT (applied to the output data). Overall, for this FNN
ensemble, the combination of LM and WT has the best performance.
Specifically, on four out of five test days for Site 1, WT_FNN_LM
produced lower forecast error than FNN with SCG and WT (WT_FNN_SCG) and
with BP and WT (WT_FNN_BP), with the lowest MAPE of 4.17% achieved on
test day 1. On test day 4, however, WT_FNN_LM had a forecast error of
7.77%, compared with 6.69% for WT_FNN_SCG.
Regarding the use of WT, across all test days both WT_FNN_SCG and
Fig. 16. Regression plot of: (a) WT_FNN_LM for day 1 for Site 1, (b) WT_FNN_LM framework for day 3 for Site 1, (c) WT_ELM_LM framework for day 1 for Site 1, (d)
WT_ELM_LM framework for day 5 for Site 1, (e) WT_NewCF_LM framework for day 2 for Site 1, (f) WT_NewCF_SCG framework for day 3 for Site 1.
Fig. 17. Error comparison of FNN, ELM and NewCF framework with LM of Site 1.
WT_FNN_BP had lower MAPE than the same learning technique without WT.
For Site 2, WT_FNN_LM again performs better than the WT_FNN_SCG and
WT_FNN_BP frameworks.
A similar pattern of forecast performance is evident for ELM, with the
best performance for WT_ELM_LM in Table 5, and for NewCF with
WT_NewCF_LM, with the exception of test day 4 for both sites and day 5
for Site 2 in Table 6. In terms of the other learning methods, the
performance of SCG with WT applied (WT_FNN_SCG, WT_ELM_SCG, and
WT_NewCF_SCG) generally lies between that of the corresponding BP and LM
learning methods. The forecast errors of WT_FNN_SCG, WT_ELM_SCG, and
WT_NewCF_SCG vary between 5%–7.8%, 5.9%–7.8%, and 4%–6.3%, respectively.
It is also noticeable that for all learning methods the use of WT
applied to the output data resulted in better performance of the NN in
Fig. 18. Error comparison of FNN, ELM and NewCF framework with SCG of Site 1.
Fig. 19. Error comparison of FNN, ELM and NewCF framework with BP of Site 1.
each case, with a few exceptions. It can be observed from Tables 4–6
that WT_FNN_BP, WT_ELM_BP, and WT_NewCF_BP have higher forecast error
than the other comparative frameworks on all test days. This is due to
inadequate training of the NN and a recognised issue with the BP
algorithm: while searching the error space, the updates of the weight
and bias values can become caught in local minima rather than reaching
the global minimum (Wang et al., 2004; Raza et al., 2016). Time series
plots of clear, partially cloudy, and cloudy days are shown in
Figs. 13–15. Within the FNN framework, the BP learning technique
produces the highest forecast error in comparison with the SCG and LM
techniques.
Given the overall performance advantage evident for the LM
method with WT for all three NNs evaluated, Fig. 16(a)–(f) depicts
regression plots for predicted PV output (vertical axis) versus measured
PV output values (horizontal axis) of WT_FNN_LM, WT_ELM_LM and
WT_NewCF_LM for some selected performance scenarios. These days are
selected to demonstrate the output performance of the forecast frame-
works under wide variation of the input data. Fig. 16(a) and (b)
represent the regression plots of the WT_FNN_LM framework for the day 1
and day 3 forecasts, respectively, for Site 1, and similarly for
WT_ELM_LM in Fig. 16(c) and (d). The regression plots of the WT_NewCF_LM based
Fig. 20. Error comparison of FNN, ELM and NewCF framework with LM of Site 2.
Fig. 21. Error comparison of FNN, ELM and NewCF framework with SCG of Site 2.
framework for day 2 and day 3 are shown in Fig. 16(e) and (f). The
performance of each framework is quantified in terms of R2, which
measures the fit of the predicted and actual values to the fitted
regression line, as given in Eq. (11). A higher value of R2 indicates a
better capability of the forecast framework to predict the PV output. It
can be observed from Fig. 16 that the NewCF based PV output framework
produces a higher coefficient of determination than the other
frameworks, with R2 values of 0.92 and 0.93. Therefore, the NewCF based
forecast framework outperforms the FNN and ELM frameworks with WT in
forecasting accuracy.
R2 = 1 − Var(Z1)/Var(Z) (11)
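A short sketch of Eq. (11), under the assumption (taken from the surrounding text) that Z1 denotes the residual between actual and predicted PV output and Z the actual output:

```python
import numpy as np

def r_squared(actual, predicted):
    """Eq. (11): R2 = 1 - Var(Z1)/Var(Z), with Z1 the residual between
    actual and predicted PV output and Z the actual output (assumed)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = actual - predicted          # Z1
    return 1.0 - np.var(residual) / np.var(actual)
```

A perfect forecast gives R2 = 1, and larger residual variance drives R2 down.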
4.2. PV output forecast results for Site 1
The error box plots of the FNN, ELM, and NewCF based PV forecast
frameworks with different learning techniques and WT for Sites 1 and 2
are presented in this section. The forecast error comparisons of the
FNN, ELM, and NewCF based frameworks with the LM, SCG, and BP techniques
for Site 1 are presented in Figs. 17–19, respectively. The purpose of
these plots is to compare the performance of each NN framework under
similar learning techniques, input data, and network parameters,
alongside the persistence model benchmark.
The median error values of the WT_FNN_LM, WT_NewCF_LM, and WT_ELM_LM
frameworks are 5.4%, 3.10%, and 5.5%, respectively, for Site 1, as shown
in Fig. 17. The FNN_LM, NewCF_LM, and ELM_LM frameworks generated median
forecast errors of 6%, 5%, and 4.9%, respectively. This indicates that
the use of WT on the output data generally reduces forecast error. A
similar forecast error pattern is observed when the forecast frameworks
are compared with the SCG and BP learning techniques. Noticeably, the
quartile range (shown by the length of the box in the box plot in
Fig. 17) of WT_NewCF_LM is smaller than that of WT_FNN_LM and
WT_ELM_LM. This highlights that the WT_NewCF_LM based forecast framework
is more consistent in prediction accuracy than the other frameworks, and
its forecast accuracy is also considerably higher. A similar forecast
error pattern can also be observed with the SCG and BP learning
techniques. The quartile range of the NewCF framework with SCG and BP is
larger than with the LM learning technique, indicating that the
WT_NewCF_BP and WT_NewCF_SCG based PV output frameworks are less
consistent in forecast error than the WT_NewCF_LM framework.
Nevertheless, the WT_NewCF_LM framework achieves higher forecast
accuracy than all other frameworks for Site 1, as highlighted in
Figs. 17–19.
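The two box-plot quantities compared throughout this section, the median error and the quartile range (box length), can be computed as below; the daily MAPE values are hypothetical, not taken from the figures:

```python
import numpy as np

def boxplot_stats(errors):
    """Median and interquartile range (box length) of daily MAPE values."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return med, q3 - q1

# Hypothetical daily MAPE values (%) for one framework over ten test days.
errors = [5.4, 4.9, 5.8, 7.7, 6.3, 5.2, 6.0, 6.6, 8.4, 7.2]
median, iqr = boxplot_stats(errors)
```

A smaller interquartile range indicates a framework whose daily forecast error is more consistent across test days.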
4.3. PV output forecast results for Site 2
As above, Figs. 20–22 show the forecast error box plots of the FNN, ELM,
and NewCF based frameworks with the LM, SCG, and BP learning methods for
Site 2. The median errors of WT_FNN_LM, WT_NewCF_LM, and WT_ELM_LM are
6.2%, 4.8%, and 6.25%, respectively, while the median errors of FNN_LM,
NewCF_LM, and ELM_LM are 7.75%, 3.85%, and 7%, respectively. The PV
output forecast results for Site 2 indicate that the wavelet transformed
frameworks generally give higher forecast accuracy than the individual
predictors. It can be concluded that the NewCF based framework
outperforms the FNN and ELM based frameworks across the different
learning techniques, and that forecast accuracy is enhanced by the
wavelet transform. Moreover, the WT_NewCF_LM based framework produces
lower error than the other neural network frameworks and learning
techniques employed in this research.
Forecast accuracy also depends on the performance of the data
acquisition system for the solar PV parameters; a performance analysis
of such systems can be found in Rezk et al. (2017).
5. Conclusions
This paper develops a series of forecast frameworks based on NN
ensembles to predict PV power output one day ahead. The assessment
used three different types of NN frameworks (FNN, ELM, and NewCF
ensembles), each trained with three different NN learning techniques
Fig. 22. Error comparison of FNN, ELM and NewCF framework with BP of Site 2.
(LM, SCG, and BP algorithms), and with and without WT applied to the
output data. In total, six NN ensemble structures are created, with 15
NNs in each ensemble structure. The output predictions of each neural
network ensemble are aggregated with a Bayesian model averaging (BMA)
technique.
The performance of the FNN, ELM, and NewCF based ensemble frameworks was
analyzed for the forecast of PV power output one day ahead. Five days of
power output from two sites were used to test the performance of the
forecast frameworks. The findings indicate that forecast accuracy is
enhanced by up to 27% with WT in the different forecast case studies. In
addition, the LM trained neural network ensembles provide better
training performance than the SCG and BP learning techniques within the
aforementioned ensemble frameworks. Furthermore, the NewCF based
ensembles give lower forecast error than the FNN and ELM ensemble
networks. WT_NewCF_LM also provides a higher coefficient of
determination (R2 = 0.94) in comparison with the other frameworks and
learning techniques. It can also be concluded from the box plots that
WT_NewCF_LM produces the lowest median error (3.1% MAPE) and the
narrowest error distribution compared with the other ensemble frameworks
and the benchmark model. In future research, different aggregation
algorithms and NN ensemble network structures can be explored for higher
forecast accuracy. The proposed frameworks can also be utilized for
other forecast applications such as wind speed, electricity price, and
load demand forecasting.
References
I.E. Agency, 2007. International Organization for Standardization (IEA–ISO),
international standards to develop and promote energy efficiency and
renewable energy sources. World Energy Congress, pp. 5–10.
Alexander, S.A., Karthikeyan, J., Sivavasanth, A., 2007. Applications of artificial neural
networks in power electronics. In: Conference on Computational Intelligence and
Multimedia Applications, 2007. International Conference on, vol. 1, pp. 267–271.
Almonacid, F., Pérez-Higueras, P., Fernández, E.F., Hontoria, L., 2014. A methodology
based on dynamic artificial neural network for short-term forecasting of the power
output of a PV generator. Energy Convers. Manage. 85, 389–398.
Anbazhagan, S., Kumarappan, N., 2014. Day-ahead deregulated electricity market price
forecasting using neural network input featured by DCT. Energy Convers. Manage.
78, 711–719.
Bessa, R.J., Trindade, A., Silva, C.S.P., Miranda, V., 2015. Probabilistic solar power
forecasting in smart grids using distributed information. Int. J. Electr. Power Energy
Syst. 72, 16–23.
Burnham, K.P., Anderson, D.R., 2002. Model Selection and Multimodel Inference: A
Practical Information-theoretic Approach. Springer Science & Business Media.
Catalao, J., Pousinho, H., Mendes, V., 2011. Hybrid wavelet-PSO-ANFIS approach for
short-term electricity prices forecasting. IEEE Trans. Power Syst. 26 (1), 137–144.
Chen, C., Duan, S., Cai, T., Liu, B., 2011. Online 24-h solar power forecasting based on
weather type classification using artificial neural network. Sol. Energy 85 (11),
2856–2870.
Chitsaz, H., Amjady, N., Zareipour, H., 2015. Wind power forecast using wavelet neural
network trained by improved Clonal selection algorithm. Energy Convers. Manage.
89, 588–598.
Chu, Y., Urquhart, B., Gohari, S.M.I., Pedro, H.T.C., Kleissl, J., Coimbra, C.F.M., 2015.
Short-term reforecasting of power output from a 48 MWe solar PV plant. Sol. Energy
112, 68–77.
Corazza, M., Fasano, G., Mason, F., 2014. An artificial neural network-based technique for
on-line hotel booking. Procedia Econ. Finance 15, 45–55.
Duan, Q., Ajami, N.K., Gao, X., Sorooshian, S., 2007. Multi-model ensemble hydrologic
prediction using Bayesian model averaging. Adv. Water Resour. 30 (5), 1371–1386.
Filik, U.B., Kurban, M., 2007. A new approach for the short-term load forecasting with
autoregressive and artificial neural network models. Int. J. Comput. Intelligence Res.
3 (1), 66–71.
Gao, Z., Ming, F., Hongling, Z., 2005. Bagging neural networks for predicting water
consumption. J. Commun. Comput. 2 (3), 19–24.
Georgakakos, K.P., Seo, D.-J., Gupta, H., Schaake, J., Butts, M.B., 2004. Towards the
characterization of streamflow simulation uncertainty through multimodel en-
sembles. J. Hydrol. 298 (1), 222–241.
Hassan, S., Khosravi, A., Jaafar, J., 2015. Examining performance of aggregation algo-
rithms for neural network-based electricity demand forecasting. Int. J. Electr. Power
Energy Syst. 64, 1098–1105.
Hoeting, J.A., Madigan, D., Raftery, A.E., Volinsky, C.T., 1999. Bayesian model averaging:
a tutorial. Stat. Sci. 14 (4), 382–417.
Izgi, E., Öztopal, A., Yerli, B., Kaymak, M.K., Şahin, A.D., 2012. Short–mid-term solar
power prediction by using artificial neural networks. Sol. Energy 86 (2), 725–733.
Jin, J., Li, M., Jin, L., 2014. Data normalization to accelerate training for linear neural net
to predict tropical cyclone tracks. Math. Probl. Eng. 501, 931629.
Kalogirou, S.A., 2001. Artificial neural networks in renewable energy systems applica-
tions: a review. Renew. Sustain. Energy Rev. 5 (4), 373–401.
Kannan, C.K., Kamaraj, V., Paranjothi, S.R., 2012. Sensorless control of SR drive using
ANN and FPAA for automotive applications. Energy Procedia 14, 1831–1836.
Khatib, T., Mohamed, A., Sopian, K., Mahmoud, M., 2012. Assessment of artificial neural
networks for hourly solar radiation prediction. Int. J. Photoenergy 2012.
Krishnamurti, T., et al., 1999. Improved weather and seasonal climate forecasts from
multimodel superensemble. Science 285 (5433), 1548–1550.
Li, G., Shi, J., 2010. Application of Bayesian model averaging in modeling long-term wind
speed distributions. Renew. Energy 35 (6), 1192–1202.
Li, G., Shi, J., Zhou, J., 2011. Bayesian adaptive combination of short-term wind speed
forecasts from neural network models. Renew. Energy 36 (1), 352–359.
Li, S., Wang, P., Goel, L., 2015. Short-term load forecasting by wavelet transform and
evolutionary extreme learning machine. Electr. Power Syst. Res. 122, 96–103.
Liu, W., Song, H., Liang, J.J., Qu, B., Qin, A.K., 2014. Neural network based on self-
adaptive differential evolution for ultra-short-term power load forecasting. In:
Intelligent Computing in Bioinformatics: Springer, pp. 403–412.
Magnus, J.R., Powell, O., Prüfer, P., 2010. A comparison of two model averaging tech-
niques with an application to growth empirics. J. Econometrics 154 (2), 139–153.
Mandal, P., Haque, A.U., Julian, M., Srivastava, A.K., Martinez, R., 2013. A novel hybrid
approach using wavelet, firefly algorithm, and fuzzy ARTMAP for day-ahead elec-
tricity price forecasting. IEEE Trans. Power Syst. 28 (2), 1041–1051.
Marquez, R., Coimbra, C.F., 2013. Intra-hour DNI forecasting based on cloud tracking
image analysis. Sol. Energy 91, 327–336.
Australian Bureau of Meteorology, 2009. Average Daily Solar Exposure.
Montgomery, J.M., Nyhan, B., 2010. Bayesian model averaging: Theoretical develop-
ments and practical applications. Political Anal. 18 (2), 245–270.
Oudjana, A.H.S.H., Mahamed, I.H., 2013. Power forecasting of future generation. World
Acad. Sci., Eng. Technol. 78, 499–503.
Pedro, H.T.C., Coimbra, C.F.M., 2012. Assessment of forecasting techniques for solar
power production with no exogenous inputs. Sol. Energy 86 (7), 2017–2028.
Raftery, A.E., Gneiting, T., Balabdaoui, F., Polakowski, M., 2005. Using Bayesian model
averaging to calibrate forecast ensembles. Mon. Weather Rev. 133 (5), 1155–1174.
Raza, M.Q., Nadarajah, M., Ekanayake, C., 2016. On recent advances in PV output power
forecast. Sol. Energy 136, 125–144.
Raza, M.Q., Nadarajah, M., Ekanayake, C., 2017. Demand forecast of PV integrated
bioclimatic buildings using ensemble framework. Appl. Energy 208, 1626–1638.
Rezk, H., Tyukhov, I., Al-Dhaifallah, M., Tikhonov, A., 2017. Performance of data ac-
quisition system for monitoring PV system parameters. Measurement 104, 204–211.
Sfetsos, A., Coonick, A., 2000. Univariate and multivariate forecasting of hourly solar
radiation with artificial intelligence techniques. Sol. Energy 68 (2), 169–178.
Singh, N.K., Singh, A.K., Tripathy, M., 2015. Short-term load/price forecasting in de-
regulated electric environment using ELMAN neural network. In: Energy Economics
and Environment (ICEEE), 2015 International Conference on, IEEE, pp. 1–6.
Tymvios, F., Jacovides, C., Michaelides, S., Scouteli, C., 2005. Comparative study of
Ångström’s and artificial neural networks’ methodologies in estimating global solar
radiation. Sol. Energy 78 (6), 752–762.
Wang, X.G., Tang, Z., Tamura, H., Ishii, M., Sun, W.D., 2004. An improved back-
propagation algorithm to avoid the local minima problem. Neurocomputing 56,
455–460.
Wasserman, L., 2000. Bayesian model selection and model averaging. J. Math. Psychol.
44 (1), 92–107.
Yang, H., et al., 2014. Solar irradiance forecasting using a ground-based sky imager de-
veloped at UC San Diego. Sol. Energy 103, 502–524.
Zhou, C., Ding, L.Y., He, R., 2013. PSO-based Elman neural network model for predictive
control of air chamber pressure in slurry shield tunneling under Yangtze River.
Autom. Constr. 36, 208–217.