An experimental evaluation of choices of SSA forecasting parameters

Six time series related to atmospheric phenomena are used as inputs for experiments of forecasting with singular spectrum analysis (SSA). Existing methods for SSA parameter selection are compared through their forecasting accuracy relative to an optimal a posteriori selection and to naive forecasting methods. The comparison shows that the widespread practice of selecting longer windows often leads to poorer predictions. It also confirms that the choices of the window length and of the grouping are essential. With the mean error of rainfall forecasting below 1.5%, SSA appears as a viable alternative for horizons beyond two weeks.


I INTRODUCTION
Many naturally occurring dynamical systems are governed by a huge number of unknown parameters and laws. Most of the time the understanding of such systems seems beyond human reach. Yet, the ability to predict their future trajectory from past observations with an acceptable error is often of paramount importance. Such past observations form a time series and the main concern of this paper is the following time series forecasting problem.
Input: a real-valued time series x_1, ..., x_N and a forecast horizon h ∈ IN_+.
Output: estimated future values x_{N+1}, ..., x_{N+h}.
Note that N is not fixed here and is a part of the input. As an example, one may think of daily recordings of the number of births in a country, starting from, say, 1980 until today.
The forecasting consists in predicting the number of births that will take place tomorrow, or, more generally, on each of the next h days, if the forecast horizon expressed in the number of days ahead is h. In order to assess the quality of a forecasting method, one has to repeat the prediction every day over an extended period of time. One has then to compare formerly predicted values with actually recorded ones. Besides recurrent neural networks [25] and observable operator models [18], singular spectrum analysis (SSA) [23] provides one of the most promising forecasting frameworks. Indeed, the experiments of [20] show that SSA outperforms standard statistical methods in the field such as ARIMA (see e.g. [28]).
This paper reports an experimental investigation of univariate time series prediction using an SSA-related method, namely vector forecasting (see e.g. [23]). SSA involves several computational steps, among which vector forecasting may be seen as a final one. In its most basic version, SSA has to be provided with additional information that may require expert knowledge about the observed phenomenon. Is it possible to reduce such additional input so that SSA forecasting could be applied to phenomena of unknown dynamics by a user with no knowledge of the underlying field nor of statistics? In other words, is the SSA forecasting methodology ready to be implemented within decision support tools that would be adopted by policy makers, farmers or in business administration?
A partially positive answer to the latter question comes from the availability of several software packages for SSA, notably the SSA-MTM toolkit [10] and the R library Rssa [21, 26]. Although for an end user the SSA-MTM toolkit may be more suitable due to its graphical interface, the choice of Rssa for the experiments reported here has been motivated by the intrinsic flexibility of a library. Nevertheless, besides an input time series, both the SSA-MTM toolkit and Rssa must be provided with a window length and a list of components to be grouped together for the forecasting. However, finding the best such parameters is a challenge, even for SSA specialists or experts of the observed phenomenon. Moreover, with an inadequate choice, the forecasting accuracy can drop dramatically, as confirmed by this study. Fortunately, several authors discuss methods for inferring those parameters automatically from the input time series. The present paper reports an experimental study of the forecast quality obtained using such methods. Can those methods be included in decision-support tools which automatically select suitable parameters for SSA and compute the required forecast? Concerning the window length, the present paper brings a positive answer. The result of the authors' investigation is that, among the very few existing methods, the one of [15] gives the best forecasting accuracy, at least for the meteorological time series used in this paper. Unfortunately, concerning the grouping, the accuracy of the only known truly automated method (available in the Rssa package [21]) is not always satisfactory. Although a suggestion for improvement is given at the end of Sect. VIII, one of the conclusions of the paper is that truly automated grouping algorithms or viable heuristics have yet to be developed and implemented.
In order to recall the importance of the window length and of the choice of components to group, the next section reviews the essential SSA steps that lead to the vector forecasting further explained in Sect. III. Sect. IV and V review a few methods for selecting SSA parameters. The data sets used in the reported experiments are introduced in Sect. VI and the experiments are described in Sect. VII. Their outcome is discussed in Sect. VIII. Throughout this paper, [n] stands for {1, ..., n}.

II SSA STEPS TOWARDS THE VECTOR FORECASTING
The history of SSA has been sketched in many papers and several books (e.g. in [23]) and is omitted in the present article.
As its name suggests, SSA is a spectral method using the singular value decomposition [1] for the analysis of time series. SSA-based forecasting techniques are important extensions of SSA itself. The latter consists of the following steps:
1. the embedding of the input time series in a vector space,
2. the singular value decomposition (SVD) of the trajectory matrix,
3. obtaining elementary matrices and their Hankelizations,
4. the inverse embedding yielding the time series decomposition,
5. the grouping of the time series components.
These steps are now explained.

Embedding
The first step of SSA is an embedding of a real-valued input time series X = (x_1, ..., x_N), N > 2, into a vector space: given a window length L, one forms the K = N − L + 1 lagged vectors X_j ∶= (x_j, ..., x_{j+L−1})^⊤ and the L × K trajectory matrix X ∶= [X_1, ..., X_K], which is a Hankel matrix, viz., the elements of each anti-diagonal are equal. The k-th anti-diagonal consists of those elements of the matrix that are indexed by

A_k ∶= {(i, j) ∈ [L] × [K] ∶ i + j = k + 1},  k ∈ [L + K − 1].  (1)

Note that this embedding is a bijection between the set of sequences of length N and the set of L × K Hankel matrices. Consequently, by the inverse embedding, from any m × n Hankel matrix one gets the corresponding sequence of length m + n − 1.
The parameter L is essential for the whole SSA and is usually chosen so that L ≤ N/2. This is assumed throughout the paper so as to keep L < K. As the forecasting problem becomes trivial when the rank of X is strictly less than L, that special case is not considered here in order to simplify the mathematical treatment. Indeed, when the input time series comes from an intricate dynamical system, as those used in this paper, one cannot expect the rank of X to be strictly less than L.
The above embedding X ↦ X may be interpreted as a representation of X by a trajectory of a hypothetical dynamical system that generated X.
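The embedding step can be sketched in a few lines of numpy; the function name `embed` and the list-based construction are illustrative choices, not the Rssa implementation:

```python
import numpy as np

def embed(x, L):
    """Trajectory matrix of series x for window length L: an L x K Hankel
    matrix with K = N - L + 1, whose j-th column is the lagged vector
    (x_j, ..., x_{j+L-1})."""
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    return np.column_stack([x[j:j + L] for j in range(K)])
```

For X = (1, ..., 6) and L = 3 this yields a 3 × 4 matrix whose anti-diagonals are constant, as required of a Hankel matrix.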

Singular value decomposition
In the second step of SSA, the singular value decomposition (SVD) [1] of the trajectory matrix is computed:

X = U Σ V^⊤,

where U ∈ IR^{L×L} and V ∈ IR^{K×K} are unitary matrices and Σ ∈ IR^{L×K} is a rectangular diagonal matrix with diagonal entries σ_1 ≥ σ_2 ≥ ... ≥ σ_L ≥ 0, the singular values of X.

Obtaining elementary matrices
Expanding the SVD as

X = ∑_{k=1}^{L} σ_k U_k V_k^⊤,

where U_k (resp. V_k) is called the left (resp. right) singular vector for the singular value σ_k, yields an elementary matrix X_k ∶= σ_k U_k V_k^⊤ for each k ∈ [L]. Note that all non-null elementary matrices in this decomposition are pairwise orthogonal.
By averaging over anti-diagonals, from an elementary matrix X_k one gets its Hankelization (see also [27]) X̃_k ∈ IR^{L×K}. More precisely, the k-th anti-diagonal of an L × K matrix has its indexes in A_k of Eq. (1), and the Hankelization X̃_k of X_k is defined by

x̃_{k,i,j} ∶= (1 / |A_{i+j−1}|) ∑_{(p,q) ∈ A_{i+j−1}} x_{k,p,q},

where x̃_{k,i,j} (resp. x_{k,p,q}) stands for the element at row i (resp. p) and column j (resp. q) of X̃_k (resp. X_k).
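Anti-diagonal averaging can be sketched as follows (the helper name `hankelize` is an illustrative choice; Rssa performs this step internally):

```python
import numpy as np

def hankelize(M):
    """Nearest Hankel matrix to M (w.r.t. the Frobenius norm): each entry is
    replaced by the mean over its anti-diagonal A_{i+j-1}."""
    L, K = M.shape
    H = np.empty((L, K))
    for k in range(L + K - 1):                 # 0-based anti-diagonal index
        i = np.arange(max(0, k - K + 1), min(L, k + 1))
        H[i, k - i] = M[i, k - i].mean()
    return H
```

For a 2 × 2 matrix, only the middle anti-diagonal has more than one entry, so only those two entries get averaged.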

Inverse embedding
As mentioned in Subsection 2.1, the embedding is a bijection between the set of sequences of length N and the set of L × K Hankel matrices. Thus, to every Hankelized elementary matrix X̃_k one can apply the inverse embedding and obtain a time series written X_k and called an elementary component time series. The latter satisfy X = ∑_{k=1}^{L} X_k.

Grouping
This step consists in defining a partition J ⊆ 2^{[L]} of [L] so that each cluster I ∈ J determines a "group" X_I ∶= ∑_{k∈I} X_k of elementary component time series which is relevant for the analysis of the input time series. The relevance of such groups is largely subjective. One may wish, for instance, to obtain three groups reflecting respectively the trend, the seasonality and the noise.
For the forecasting method used in this study, the grouping aims to abstract from inessential details of the observed trajectory. Thus it consists in choosing a strict subset I of [L] to get the "signal" X_I ∶= ∑_{k∈I} X_k separated from the "noise" X_Ī ∶= ∑_{k∈[L]∖I} X_k. Similarly, one may write the trajectory matrix X as the sum of its relevant part X_I ∶= ∑_{k∈I} X_k and its noisy part X_Ī ∶= ∑_{k∈[L]∖I} X_k, or their Hankelizations X = X̃_I + X̃_Ī. The reader should note that I is another additional input for SSA and that the choice of I greatly impacts the quality of forecasting [20].
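The whole decomposition pipeline with a prefix grouping can be condensed into a short sketch (the name `ssa_split` and the brute-force anti-diagonal averaging are illustrative; this is not the Rssa code):

```python
import numpy as np

def ssa_split(x, L, M):
    """Decompose x with window length L and return (signal, noise) for the
    prefix grouping I = [M]: embed, SVD, sum the first M Hankelized
    elementary matrices, invert the embedding."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # embedding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)      # SVD
    signal = np.zeros(N)
    for k in range(M):                                    # prefix group I = [M]
        Xk = s[k] * np.outer(U[:, k], Vt[k])              # elementary matrix
        for t in range(N):
            # inverse embedding of the Hankelization: anti-diagonal averages
            i = np.arange(max(0, t - K + 1), min(L, t + 1))
            signal[t] += Xk[i, t - i].mean()
    return signal, x - signal
```

As a sanity check, a pure sinusoid has a rank-2 trajectory matrix, so the prefix grouping [2] recovers it exactly and leaves (numerically) zero noise.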

III VECTOR FORECASTING
Vector forecasting is not a part of SSA per se. Like the two other forecasting methods described in [23], vector forecasting uses SSA for extracting a low-rank approximation of a subspace of a hypothetical dynamical system that generated X. The common idea of the three forecasting methods presented in [23] is to find a homogeneous linear recurrent equation (LRE) that is satisfied by the "denoised" time series X_I = (x_{I,1}, ..., x_{I,N}). For X_I to satisfy an LRE means that it is generated by a linear dynamical system. This still leaves X_I a wide class of behaviours.
Although "linearity" may sound restrictive for a computer scientist, within the theory of dynamical systems it refers to the underlying evolution functions, not to the behaviour of the system itself. Indeed, it is well known that linear dynamical systems behave like a sum of products of polynomials, exponentials and sinusoids [22]. The LRE in question has the form

x_{I,n} = a_1 x_{I,n−1} + ... + a_{L−1} x_{I,n−L+1},  n = L, ..., N.  (2)

Finding the coefficients a ∶= (a_{L−1}, ..., a_1) of LRE (2) amounts to solving the following system of K equations with L − 1 unknowns:

a X̲_I = (x_{I,L}, ..., x_{I,N}),  (3)

where X̲_I denotes the matrix X_I without its last row, which in turn forms the right hand side of the system of equations. As L ≤ N/2 and hence L < K, this system is overdetermined and, in general, has no exact solution, except when X_I is actually governed by a linear dynamical system of dimension at most L. However, as the dynamical systems considered in this paper are not linear, and this is also the case for all systems of intricate dynamics, only approximate solutions of (3) can be obtained. This can be compared with a linear regression where one fits a line to a set of points in an optimal way. In LRE-based forecasting, such as the vector forecasting used in this paper, one fits a linear recurrent sequence to the set of observations. For that, the closest approximate solution of (3) with respect to the Euclidean norm is sought. It is well known that such an approximate solution is given by a ≈ (x_{I,L}, ..., x_{I,N}) X̲_I^†, where "†" stands for the Moore-Penrose pseudo-inverse of a rectangular matrix [2].
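The least-squares solution of system (3) via the pseudo-inverse can be sketched as follows (the function name `lre_coefficients` is illustrative, and for brevity the fit is done here on the raw series rather than on a denoised X_I):

```python
import numpy as np

def lre_coefficients(x, L):
    """Least-squares fit of the LRE coefficients: the row vector a of length
    L - 1 minimising ||a X_under - (x_L, ..., x_N)||, where X_under is the
    trajectory matrix without its last row."""
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # trajectory matrix
    return X[-1] @ np.linalg.pinv(X[:-1])                 # a = rhs X_under^dagger
```

Applied to the last L − 1 observed values, a yields a one-step forecast; a sampled sinusoid satisfies an exact LRE of order 2, so with L = 3 the fit recovers it and the forecast is exact.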
Let U_I ∶= [U_i ∶ i ∈ I] be the matrix formed with the columns of U with indexes in I and let span(U_I) ⊆ IR^L be the subspace spanned by its columns. Remember that U is the matrix of left singular vectors in the SVD of X. Let (u_{L,i} ∶ i ∈ I) be the last row of U_I and let U̲_I ∶= [U̲_i ∶ i ∈ I] stand for U_I with its last row removed; let span(U̲_I) ⊆ IR^{L−1} be the subspace spanned by the columns of U̲_I. Set υ² ∶= ∑_{i∈I} u²_{L,i}. Observe that υ = cos θ, where θ is the angle between span(U_I) and e_L = (0, ..., 0, 1)^⊤ ∈ IR^L. Indeed, the orthogonal projection of e_L onto span(U_I) is U_I (u_{L,i} ∶ i ∈ I)^⊤ and its norm is υ, because U is unitary. As e_L ∉ span(U_I), one has υ ≠ 1 and the following vector is well defined:

R ∶= (1 − υ²)^{−1} ∑_{i∈I} u_{L,i} U̲_i,

while Π ∶= U̲_I U̲_I^⊤ + (1 − υ²) R R^⊤ defines the orthogonal projection of IR^{L−1} onto span(U̲_I). Let ž be a vector z ∈ IR^L without its first coordinate. Using the linear operator F ∶ IR^L → span(U_I) which extends the orthogonal projection Πž of ž with the next term of the recurrent sequence inferred from (2), viz.

F(z) ∶= (Πž ; R^⊤ Πž),

one defines a sequence of vectors Y_{K+i} ∶= F^i(X_K), i = 1, ..., h + L − 1, where [X_{I,1}, ..., X_{I,K}] = X_I and h ∈ IN is a forecast horizon. This leads to the matrix Y ∶= [X_{I,1}, ..., X_{I,K}, Y_{K+1}, ..., Y_{N+h}] obtained by extending X_I on the right with the vectors Y_{K+1}, ..., Y_{N+h} resulting from iterating F on X_K. In this context, the operator F can be understood as a recurrence over vectors of span(U_I) obtained by an appropriate lifting of LRE (2). By Hankelization of Y and its subsequent inverse embedding, one gets a time series Y = (y_1, ..., y_{N+h+L−1}) where the portion (y_{N+1}, ..., y_{N+h}) is the forecast up to horizon h obtained by the vector forecasting method.

IV CHOICE OF THE WINDOW LENGTH
From the presentation of the SSA method including vector forecasting, it follows that the window length L is a crucial parameter. Indeed, L should be understood as the chosen dimension of the model, built from SSA, of the observed dynamical system. It determines the order of LRE (2), which is precisely L − 1. Foundational texts (e.g. [4, 6]) and books [7, 8, 23] give no general estimation methods for this parameter. The prevailing opinion is that choosing L only slightly less than N/2 allows capturing all significant frequencies of periodic components of the underlying dynamical system. Choosing L equal to the longest oscillation period, or to a multiple of that period not exceeding N/2, is also often suggested. Unfortunately, if the data comes from a poorly understood dynamical system, such a period is unknown. Moreover, the outcome of the experiments reported in Sect. VIII suggests that such choices can result in accuracies not much better than those of random forecasting. Therefore, providing methods and, more importantly, efficient algorithms for estimating L from the input time series is essential.
An appealing formal approach for estimating an adequate window length is developed in [13] through an adaptation to SSA of the minimum description length principle (see e.g. [11, 24]), which is closely related to Kolmogorov complexity. The method consists in a cross-optimisation of two functions, say f(L, M) and g(L, M), wrt. L and M. This yields an estimation of L and also of the number M of the most significant components of X to be considered as signal. Unfortunately, for each evaluation step of f(L, M) or g(L, M), the singular values of X have to be computed, because X depends on L. As a practical rule, the authors of [13] suggest to take (log N)^c with c ∈ (1.5, 2.5) as an upper bound for L. Although (log N)^c ∈ o(N) and therefore (log N)^c ≪ N/2 for N sufficiently large, with c = 2.5 the method is still computationally demanding when X is a time series with daily samples over, say, 50 years, because the maximum value of L then equals 301. Indeed, in case of an exhaustive search over L ∈ {2, ..., 301}, one has to perform 300 singular value decompositions (SVD). Beyond formal demonstrations, in [17] and [16] the authors of [13] provide an experimental evaluation of their method on real world data sets which confirms that choosing L much smaller than N/2 significantly improves the quality of forecasting.

Several authors use the empirical autocorrelation function

R(τ) ∶= (1 / ((N − τ) s²)) ∑_{t=1}^{N−τ} (x_t − x̄)(x_{t+τ} − x̄),  (4)

where x̄ (resp. s²) is the empirical mean (resp. empirical variance) of X. In [12], the smallest value of τ where R crosses the confidence interval corresponding to (95% of) white Gaussian noise (with parameters x̄ and s) is used as an estimate of L. In [15] and [19], the smallest value of τ such that R(τ) R(τ + 1) < 0 is used as an estimate of L.
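The rule of [15] and [19] can be sketched as follows; the function name is illustrative and the normalisation of R follows Eq. (4):

```python
import numpy as np

def window_length_acf(x, tau_max=None):
    """Smallest tau with R(tau) * R(tau + 1) < 0, where R is the empirical
    autocorrelation of Eq. (4); returns tau_max if no sign change occurs."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if tau_max is None:
        tau_max = N // 2
    xc = x - x.mean()
    s2 = np.mean(xc ** 2)
    def R(tau):
        return np.sum(xc[:N - tau] * xc[tau:]) / ((N - tau) * s2)
    for tau in range(1, tau_max):
        if R(tau) * R(tau + 1) < 0:
            return tau
    return tau_max
```

For a sinusoid of period 10, R(τ) ≈ cos(2πτ/10), so the first sign change occurs between τ = 2 and τ = 3 and the estimate is 2.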
In the sequel, L^[15] stands for the window length chosen with the latter method, whereas L_lo and L_hi denote the two extreme values of the maximum window length in {(log N)^c ∶ c ∈ (1.5, 2.5)} discussed above.

V CHOICE OF THE GROUPING
The choice of the index set I of components that are used as signal in forecasting is as essential as the choice of the window length. Indeed, as mentioned earlier, SSA should be understood merely as a method for separating the true signal from the noise within the raw signal obtained from observations. Besides theoretical results, the experiments carried out in [16] show that both the grouping and the window length selection have a tremendous impact on forecast accuracy. This is not a surprise, as both affect LRE (2) directly.
Contrary to the window length, where the search space is linear in N (yet brute force methods are limited by the computationally costly SVD step), the search space for the grouping is in O(2^L). Several authors only consider groupings such that I = [M] with M ∈ [L − 1], which reduces the search space to O(L). In other words, the signal is selected as the first M elementary components of X, viz., I ∶= [M] and X_I ∶= ∑_{k=1}^{M} X_k. This shall be called a prefix grouping in the sequel.
A common practice for the grouping (see e.g. [23]) is to rely on visual examination of scatter plots and recurrence plots, which involves subjective assessment of parameters. Although pattern recognition techniques can be used within this approach, those also require some parameters.
The R package Rssa implements two methods for the grouping. The first method uses frequency analysis via the discrete Fourier transform. Again, as it requires additional parameters, it cannot be qualified as automated grouping. The second method runs a clustering algorithm using a similarity matrix between the elementary components of the time series. That similarity matrix, (s_{i,j}) ∈ IR^{L×L}, which is more precisely a w-correlation matrix, is defined upon the following weighted inner product

⟨Y, Z⟩_w ∶= ∑_{i=1}^{N} w_i y_i z_i,  with w_i ∶= |A_i|,

where Y and Z are time series of length N, and the corresponding weighted norm ‖Y‖_w ∶= ⟨Y, Y⟩_w^{1/2}. Remember that A_i, defined in Sect. II Eq. (1), is the set of indexes of the i-th anti-diagonal of an L × K matrix. The w-correlation matrix is defined as follows:

s_{i,j} ∶= ⟨X_i, X_j⟩_w / (‖X_i‖_w ‖X_j‖_w).

There is no a priori restriction on the form of the index set I of the "signal" resulting from clustering by grouping.auto.wcor. On the contrary, the method of [13] based on minimum description length yields M ∈ [L − 1] to be used as the prefix grouping [M]. Unfortunately, the method is difficult to implement. No algorithm is clearly stated. Implementations, if any, do not seem available in the public domain.
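The w-correlation matrix can be computed directly from the elementary component series; the following sketch (illustrative function `wcorr`, components given as rows) uses the weights w_i = |A_i|:

```python
import numpy as np

def wcorr(components, L):
    """w-correlation matrix of component series (one per row), for a series of
    length N analysed with window length L.  The weight of position i is
    |A_i| = min(i, L, K, N - i + 1) for the 1-based anti-diagonal index i,
    with K = N - L + 1."""
    C = np.asarray(components, dtype=float)
    d, N = C.shape
    K = N - L + 1
    i = np.arange(1, N + 1)
    w = np.minimum.reduce([i, np.full(N, L), np.full(N, K), N - i + 1])
    G = C @ (w[:, None] * C.T)              # weighted Gram matrix <X_i, X_j>_w
    norms = np.sqrt(np.diag(G))
    return G / np.outer(norms, norms)
```

By the Cauchy-Schwarz inequality all entries lie in [−1, 1], with 1 on the diagonal and for identical components.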

VI DATA SETS
Several methods discussed above have been evaluated as a part of the present work using real world data summarised in Table 1. The data sets used here have been downloaded from the ERA-Interim archive of the European Centre for Medium-range Weather Forecasts (ECMWF). The ERA-Interim archives historical forecasts for horizons from 0 to 240 hours. These forecasts consist of reanalysis data. The meaning of "reanalysis" for horizon 0 is that the data either come from observations or, if an observation is unavailable at a given location, the corresponding value is interpolated using a meteorological model. Whether a data set comes entirely from observations or has some interpolated parts (due e.g. to a temporary outage of a recording station) does not matter for this study, in view of the high accuracy of interpolation of those specialised models. On the contrary, the location of each data set matters, as explained in the sequel. It should be noted that all data sets used here are "forecasts" for horizon 0, which means that these are not predicted but rather measured values or, exceptionally, interpolated ones.
It is worth noting that some informal recommendations about the choices of the window length and of the grouping have been evaluated in several papers, mainly on synthetic data. When carried out on real world data, the evaluations invalidated the relevance of those recommendations. The authors' preference for meteorological data is guided not only by availability but also by the importance of atmospheric and oceanic phenomena in many areas of life. The data sets chosen for this paper concern such phenomena at the northern extremity of the Mozambique Channel ("Grande Comore") and at several locations in Madagascar. This region of the Western Indian Ocean has a tropical climate experiencing different micro-climates from part to part. In addition to the concern to compare different climatic parameters, each location also has particularities. Maevatanana (resp. Ambatolampy, Ambovombe, Antananarivo) is the place reputed to be the hottest (resp. the coldest, the driest, the most polluted) on the island. Marovoay is an area with high agricultural potential. The study of rainfall there is therefore as interesting as it is essential. The study of the atmospheric pressure in Grande Comore is mainly motivated by forecasting the trajectories of cyclones.
All recordings are daily and start from 1979-01-01. Both water vapor and ozone are expressed in kg/m², representing their total amount in a column extending from the surface of the Earth to the top of the atmosphere.

VII EXPERIMENTS
The aim of the numerical experiments reported in this paper was to assess the quality of SSA forecasting from the user's point of view for a short duration with horizon h ∈ [30].
Here "short duration" is relative to the length of the time series. In fact, meteorologists speak of medium-range when h ∈ [10] and of long-range when the horizon exceeds 7 days, although these limits are not strict. Specialised meteorological models have excellent forecast accuracy up to 5 days. The choice of h ∈ [30] is motivated by a potential future comparative study of the forecasting accuracies of specialised meteorological models vs. general-purpose time series forecasting methods such as those from SSA. For each data set, the forecast has been computed on every day of the last year of data, except on December 31. For a given horizon h, this resulted in 365 − h forecasting days, except 366 − h forecasting days for leap year 2016 (Grande Comore time series only). Let D_h (resp. F_h) denote the set of forecasting (resp. forecasted) days for horizon h. For every forecasting day j ∈ D_h, computing a forecast consisted in taking X_{≤j} ∶= (x_1, ..., x_j) as the input time series for
1. estimating the window length (see Sect. IV),
2. embedding and decomposition (see Sect. II),
3. grouping (see Sect. V),
4. vector forecasting (see Sect. III),
where the two latter steps were repeated using various grouping choices in order to collect the corresponding forecasts. Thus, for a fixed method of window length estimation, the most computationally expensive part, namely the SVD, was computed only once for each forecasting day. Assuming the methods for the window length and the grouping are fixed, by repeating steps 1-4 for all forecasting days one gets vectors (L_j ∶ j ∈ D_1) and (I_j ∶ j ∈ D_1) of window lengths and groupings, and, for every horizon h, a vector of forecasted values y_h ∶= (y_{h,j} ∶ j ∈ D_h), where y_{h,j} is the value for day j + h forecasted from X_{≤j} (as if it were done on day j). The forecasting error vector for h is therefore ξ_h ∶= y_h − x_h, where x_h = (x_j ∶ j ∈ F_h) is the vector of the corresponding actual values, viz., the corresponding terminal portion of X. The mean (resp. maximum) error for h is mean(|ξ_h|) (resp. max(|ξ_h|)), where "mean" stands for the arithmetic mean. These absolute errors have their relative variants, each one defined as the ratio of the corresponding absolute error to the span of the data, namely

mean(|ξ_h|) / (max(X) − min(X))  and  max(|ξ_h|) / (max(X) − min(X)).
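The relative error measures used throughout the experiments are straightforward to compute (the helper name is illustrative):

```python
import numpy as np

def relative_errors(forecast, actual, span):
    """Mean and maximum absolute forecast error, relative to the span
    max(X) - min(X) of the data set."""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    xi = np.abs(forecast - actual)          # error vector |xi_h|
    return xi.mean() / span, xi.max() / span
```

For example, errors (0, 1, 2) over a data span of 10 give a relative mean error of 10% and a relative maximum error of 20%.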
Although, when it comes to forecasting, the mean squared error is the most commonly used, the authors believe that the arithmetic mean has a clear intuitive meaning for an average user. Moreover, the maximum error can also be important in many forecasting tasks. It can for instance bring some insight about the ability to forecast extreme events. This is particularly important for the phenomena expressed by the data sets used in this study.
For comparison with SSA vector forecasting, the following "naive" forecasting methods have been used as benchmarks at every forecasting day j ∈ D_h:
• random forecast - the forecasted value is sampled from the distribution inferred from X_{≤j},
• constant forecast - the forecasted value equals x_j,
• regression based forecast - uses polynomial regression (with polynomials of degree 4) on X_{≤j} to extrapolate the value used as the forecast.
Only a rough evaluation of forecast quality with varying window lengths has been conducted, because of the important computational cost of the decomposition step. On an ad hoc basis, a part of this evaluation was done for window length L_{big,j}, taken as the largest multiple of the mean year duration of 365.25 days smaller than j/2, together with the prefix grouping I_{big,j} = [M_{big,j}] for M_{big,j} = L_{big,j} − 1. Another part was done for L_{hi,j} together with the prefix grouping I_{hi,j} = [M_{hi,j}] for M_{hi,j} = L_{hi,j} − 1. Remember that L_{lo,j} = ⌈(log j)^{1.5}⌉ and L_{hi,j} = ⌊(log j)^{2.5}⌋ are the extreme integer values of possible window sizes suggested in [13].
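The three baselines can be sketched as follows (illustrative function; the paper does not specify how the empirical distribution is sampled, so plain resampling of past values is assumed here):

```python
import numpy as np

def naive_forecasts(x, h, rng=None):
    """Three baseline forecasts for horizon h from series x:
    random    - values resampled from the observed series,
    constant  - the last observed value repeated,
    regression - extrapolation of a degree-4 polynomial fitted to x."""
    x = np.asarray(x, dtype=float)
    rng = rng or np.random.default_rng(0)
    t = np.arange(len(x))
    poly = np.polynomial.Polynomial.fit(t, x, deg=4)
    future = np.arange(len(x), len(x) + h)
    return {
        "random": rng.choice(x, size=h),
        "constant": np.full(h, x[-1]),
        "regression": poly(future),
    }
```

On an exactly linear series the regression baseline extrapolates (numerically) exactly, since the higher-degree coefficients of the least-squares fit vanish.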
A deeper evaluation relied essentially on the method of [15], the window length estimated from X_{≤j} being written L^[15]_j. In a more systematic way, for each data set and every forecasting day, L^[15]_j, L_{lo,j} and L_{hi,j} have been computed, but only L^[15]_j and L_{lo,j} have been used for forecasting with various groupings. More precisely, prefix groupings have been performed for all M_j ∈ [L^[15]_j] (resp. M_j ∈ [L_{lo,j}]). Fig. 1 displays a result of such an exhaustive evaluation. By averaging over the last year of the time series, the best a posteriori prefix grouping I_mean = [M_mean] (resp. I_max = [M_max]) with respect to the mean (resp. maximum) forecast error has been selected for comparison with the automated groupings I_{auto,j} computed by grouping.auto.wcor. Also the closest neighbourhood V_mean (resp. V_max) of I_mean (resp. I_max) has been examined. This neighbourhood consists of all index sets that differ from I_mean (resp. I_max) by one element only:

V_mean ∶= {J ⊆ [L] ∶ |J △ I_mean| = 1},  (5)

and similarly for V_max, where △ denotes the symmetric difference. All numerical evaluations were programmed in Python and R. The programs are available upon request from the corresponding author.
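The closest neighbourhood of Eq. (5) is easy to enumerate (illustrative sketch, with 0-based indices for convenience):

```python
def neighbourhood(I, L):
    """All nonempty subsets of range(L) whose symmetric difference with I has
    exactly one element, i.e. I with one index added or one index removed."""
    I = set(I)
    out = []
    for k in range(L):
        J = I ^ {k}                # toggle membership of index k
        if J:                      # discard the empty grouping
            out.append(frozenset(J))
    return out
```

A grouping I ⊆ [L] thus has at most L neighbours, which makes the local search around an optimal prefix grouping inexpensive compared to the O(2^L) full search space.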

VIII RESULTS AND DISCUSSION
As a preamble to this section, it is worth mentioning that SSA and the corresponding forecasting methods are not considered as learning algorithms. Nevertheless, some analogies with machine learning, or their lack, are worth highlighting.
1. Undertraining happens exactly as in machine learning when the input time series is too short to capture all the essential behaviour of the observed dynamical system. In other words, there are not enough observations.
2. Overfitting results from the choice of an index set I including too many inessential components (obtained using the SVD). This choice, called "grouping", is discussed in Sect. V. Since those inessential components are considered as noise, a model (an LRE in the case of SSA forecasting) capturing such noise is overfitted.
3. Underfitting is, as usual, the opposite of overfitting. It occurs when the index set I does not include enough essential components. The reader should keep in mind that I ⊆ [L]. Thus, when L is too small, the underfitting cannot be compensated by a good choice of I.
4. The choice of window length L does not seem to have a straightforward machine learning counterpart. It can be understood as the choice of the dimension of the model. This is for instance similar to the choice of the number of states of a hidden Markov model (see e.g. [5]) required by spectral learning algorithms [14] or by the classical Baum-Welch algorithm [3, 9]. The choice of L can also be compared to the choice of the degree of the polynomial in polynomial regression. Choosing this degree too big typically leads to overfitting. The potential overfitting when L is too big can, however, be compensated to some extent by the choice of I.
While varying day j over D_1, the computed window lengths L_{lo,j}, L_{hi,j} and L^[15]_j remained stable (see Table 2). Interestingly, for every considered data set and each j ∈ D_1, one has

L_{lo,j} < L^[15]_j < L_{hi,j}.

This is surprising, as L_{lo,j} and L_{hi,j} are the extreme integer values within the real interval [(log j)^{1.5}, (log j)^{2.5}], which depends only on the length j of X_{≤j}. On the other hand, L^[15]_j depends not only on j but also on the autocorrelation function (see Eq. (4)). Consequently, contrary to L_{lo,j} and L_{hi,j}, L^[15]_j depends on the values of X_{≤j}. The above inequality is not a general rule and could be attributed to the specific nature of the considered data sets, all resulting from atmospheric and oceanic phenomena.
A rough evaluation of forecasting with parameters (L_{big,j}, I_{big,j}) and (L_{hi,j}, I_{hi,j}) confirms the findings of [13, 16, 17] and [20] that using longer windows does not improve forecast accuracy. Indeed, the forecasts obtained within the present experimentation with parameters (L_{big,j}, I_{big,j}) were not better than random ones. Even with (L_{hi,j}, I_{hi,j}), the accuracy was only slightly better than that of the random forecast. The only convincing results come with L^[15]_j, and they are better than those with L_{lo,j}. Consequently, the method of [15] based on the autocorrelation function (see Eq. (4)) is favoured by the authors, as it is easy to implement and gives satisfactory results. Concerning the grouping, the experimentation shows that the accuracy of forecasting is very sensitive to it. One could think that including more components would better capture the dynamics via an LRE, but often the opposite is true. Fig. 1 shows how, in prefix grouping, decreasing the number of components affects the forecast. The groupings considered in this example range from [88] down to [1]. One can observe that the mean error does not show any regular pattern. The best accuracy is obtained with I = [2] when comparing mean errors averaged over all horizons from 1 to 30.
For all time series studied here, the optimal prefix grouping is compared with the clustering-based grouping computed by the function grouping.auto.wcor. This is displayed on Fig. 2 to 7, which show the mean error per horizon. Similar plots for the maximum error appear in Appendix A and use the same colour codes. The errors are plotted in blue for the optimum prefix grouping, in purple for the automated grouping, in green for the polynomial regression based forecast, in yellow for the constant forecast and in red for the random forecast. All errors are given relative to the span max(X) − min(X) of the data set, as indicated on the left edge of each plot. A surprising observation common to all data sets discussed here is that grouping.auto.wcor always returned a prefix grouping. This is not a general rule. Indeed, for other time series, grouping.auto.wcor may return a cluster yielding a non prefix grouping. Every plot on Fig. 2 to 7 has on top the list of groupings, in parentheses, computed by grouping.auto.wcor when varying j ∈ D_1. More precisely, an integer k appearing on the list means that for some j ∈ D_1, grouping.auto.wcor returned the index set [k] after taking X_{≤j} as the input time series. It should be noted here that across D_1, very few different groupings are obtained and that their variation is non-monotonic. As for all plotted forecast errors for horizon h, the one resulting from the automated grouping is obtained by averaging over j ∈ D_h. Where the mean error is concerned (Fig. 2 to 7), the forecasts using the automated groupings computed by grouping.auto.wcor clearly appear as sub-optimal. For Marovoay rainfall (Fig. 4), that forecast is even close to the random forecast, and for Maevatanana maximum temperature (Fig. 2) a polynomial regression predicts more accurately. A systematic examination of prefix groupings for L_lo confirms the above observations about automated grouping. It also allows comparing forecast accuracy for window lengths L^[15] and L_lo (see figures in Appendix B). As far as mean errors are used for comparison, the automated (resp. optimal) grouping with L_lo underperforms the automated (resp. optimal) grouping with L^[15] in all examples studied, except for Marovoay rainfall. However, when one uses maximum errors (right column) instead of mean errors (left column), no winner can be clearly declared. Moreover, the plots of maximum errors for L^[15] (see Appendix A) seem to show that SSA is unsuitable for forecasting when maximum errors are the main concern. Indeed, even with the optimal prefix grouping, the maximum error is mostly beyond 30%. This is perhaps a general drawback of all general-purpose forecasting methods for time series, as the maximum error criterion seems to be deliberately avoided in the corresponding literature. As all groupings discussed in this section are prefix ones, one may ask if, for a given window size, the optimal prefix grouping remains optimal among all groupings. The authors do not know the answer, although they observed that the values of the errors in the closest neighbourhoods V_mean or V_max (see Eq. (5)) of each optimal prefix grouping (plotted in blue on all figures except on Fig. 1) always exceed those of the latter. Therefore, each optimum prefix grouping for the data sets considered here forms a local minimum. The authors do not know whether this observation could be turned into a theorem, nor whether such local minima are also global ones. In any case, the latter observation leads to the conclusion that a viable strategy for improving the automated grouping would be to start with the value yielded by grouping.auto.wcor and find a nearest local optimum for prefix grouping.
The authors believe that results similar to theirs would be obtained from other real-world data, as long as the data come from systems with intricate dynamics and the time series are sufficiently long.

IX CONCLUSION
The experiments reported in this paper confirm that the choices of the window length and of the grouping are essential for the accuracy of SSA forecasting. The window length selection method of [15], together with an adequate grouping, enables forecasting with an accuracy significantly better than constant or random forecasting, provided that the mean error is considered. However, the reader should keep in mind that each adequate (optimal) grouping has been selected via an a posteriori evaluation. The only widely available method for automated a priori grouping, namely the function grouping.auto.wcor from the Rssa package, appears sub-optimal in the analysed examples. Consequently, this is where research on SSA should focus in order to make SSA forecasting ready for inclusion in decision-support tools. In the meantime, the results of this study suggest the following strategy for a practitioner.
1. For every forecasting day, use a fixed-size suffix of the available time series as test data.
2. Perform a local search for a minimiser, say M_G, around the set, say G, returned by grouping.auto.wcor, applying SSA forecasting to the training data (viz., the portion of the time series without the test suffix) and using the test data to evaluate the error, say e_MG.
3. Similarly, perform a local search for a minimiser, say M_[2], around the prefix grouping {1, 2}. Let e_M[2] be the corresponding evaluation of the error.
4. For today's forecast, perform the SSA steps up to grouping using the whole available time series.
5. If e_M[2] ≤ e_MG, use the group M_[2] for computing today's forecast.
6. If e_M[2] > e_MG, let G′ be the group returned by grouping.auto.wcor. For today's forecast, use the group M_G′ obtained from G′ by a translation analogous to that from G to M_G.

The results also suggest a preference for the window length selection method of [15]: it outperforms the other methods evaluated in the reported experiments. If, however, an implementation of the method of [13] is available, it could be used advantageously, as it simultaneously selects both the window length and the grouping.
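Restricting the local searches to prefix groupings, the strategy above can be sketched as follows. Here err is a hypothetical stand-in for a full SSA forecasting run evaluated on the test suffix, and k_auto stands for the prefix length returned by grouping.auto.wcor; this is an illustrative sketch, not the authors' implementation.

```python
def local_search(err, k0, k_max):
    # Greedy descent from prefix length k0 to the nearest local minimum
    # of the test error over prefix groupings [k].
    k = k0
    while True:
        candidates = [j for j in (k - 1, k + 1) if 1 <= j <= k_max]
        best = min(candidates, key=err)
        if err(best) >= err(k):
            return k
        k = best

def choose_prefix_grouping(err, k_auto, k_max):
    """Steps 2-6 of the strategy, restricted to prefix groupings:
    search around the automated grouping and around the prefix {1, 2},
    then keep whichever local minimum evaluates better on the test data."""
    k_G = local_search(err, k_auto, k_max)  # step 2: around grouping.auto.wcor
    k_2 = local_search(err, 2, k_max)       # step 3: around the prefix {1, 2}
    return k_2 if err(k_2) <= err(k_G) else k_G  # steps 5-6
```

In practice err would rerun the SSA decomposition, grouping and vector forecast on the training portion for each candidate prefix, so memoising its values avoids repeated decompositions.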
When the maximum error matters, SSA forecasting seems rather unsuitable for short-horizon prediction, at least for the atmospheric/oceanic phenomena illustrated by the time series used in this study. The lack of literature using the maximum-error criterion does not allow suggesting an alternative.
When examining the plots of forecasting errors obtained with the various methods, one may ask what can be learned from them about the dynamics of the underlying phenomena. Somewhat intriguing is the fact that the relative position of the plots differs substantially from one time series to another. How can one explain that, among the "naive" forecasting methods, the constant one is the worst for Ambatolampy minimum temperature (see Fig. 3) but the best for Maevatanana maximum temperature (see Fig. 2)?

A P P E N D I C E S
A MAXIMUM ERROR PLOTS FOR L [15]

B COMPARATIVE PLOTS FOR TWO WINDOW LENGTHS
The following plots compare the accuracy of vector forecasting for the window lengths L [15] and L_lo. Every legend label gives the window size followed either by "auto", for the automated grouping using grouping.auto.wcor, or by the optimal prefix grouping [M]. For L_lo (resp. L [15]) the accuracy is plotted in blue (resp. green) for the automated grouping and in orange (resp. red) for the optimal grouping. The reader may check Table 2 to avoid confusion between L [15] and L_lo. Note that the accuracy obtained with L_hi does not appear on the following plots, as it is significantly worse than with L_lo and L [15].

Figure 2: Forecast mean relative error for Maevatanana maximum temperature

Figure 3: Forecast mean relative error for Ambatolampy minimum temperature

Figure 4: Forecast mean relative error for Marovoay rainfall

Figure 5: Forecast mean relative error for Ambovombe vapor

Figure 6: Forecast mean relative error for Antananarivo ozone

Figure 7: Forecast mean relative error for Grande Comore atmospheric pressure

Table 1: Data summary. The function grouping.auto.wcor from Rssa implements the latter clustering-based grouping; it can be considered a fully automated grouping method.

Table 2: Window lengths computed using three methods