Introduction
The development of avalanche-forecast systems keeps a great number of snow scientists busy, and for a very good reason: these systems promise to be invaluable as support tools. They use various models which emulate reality. Although these models are becoming more and more sophisticated, they are usually only very simplified reflections of reality, either because reality is only partly understood or describable, or because the calculation tools (theoretical or technical) cannot deal with more complex representations.
Therefore, the following question must be answered: does the model give an appropriate reflection of observed reality?
It is obvious that this question cannot be avoided if the model is to become an operational tool.
In this event, the model must be checked with the utmost rigour.
But how can this verification be done?
In order to guarantee objective checking, the calculated values of the model must be compared to those actually measured. This test cannot be exhaustive and consequently cannot be a full inventory, as all possible cases of reality are not known (which is why a model is needed!). The validity of the model must therefore be established from verification of a restricted number of its assertions.
Thus, verification of a model is a survey based on a sample poll, that is to say a problem of inferential statistics, a technique which has as its aim establishing the likely characteristics of a particular group by observing only a sample of that group.
Principles
Here, we shall deal with the verification of boolean forecasts which predict either the occurrence or the non-occurrence of an event (the developed principles can be extended to other types of forecast).
The verification of a model providing boolean forecasts consists in estimating the proportion p of correct forecasts which can be expected from the model, based on the proportion p’ of correct forecasts which is calculated using an available sample S’ of size n.
Thus, p is the proportion of exact forecasts which could be calculated if absolutely all the forecasts that the model can ever provide could be verified. Therefore, p is an objective quantifier of the reliability of the model.
Although p cannot be calculated, an approximation can be given.
The sample is a finite set of forecasts which can be verified. It may appear as a collection of pairs (predicted event/observed event). Provided that the composition of this sample is completely left to chance, it may be considered a random sample of the infinite number of predictions that can be made (Fig. 1).
Since, by definition, the probability of any forecast being exact is p, and since the verification of each forecast is a trial, independent of the (n − 1) others, with only two possible outcomes, the number x of exact forecasts can be considered a binomial random variable with mean np and variance np(1 − p).
Consequently, the proportion p’ = x/n is also a binomial random variable, with mean np/n and variance np(1 − p)/n², that is to say with mean p and variance p(1 − p)/n.
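As a purely illustrative aside (not part of the original study), a short simulation confirms these two moments; all names in this Python sketch are ours:

```python
import random

# Illustrative simulation: n independent verifications, each correct with
# probability p; x is binomial, so p' = x/n has mean p and variance p(1-p)/n.
p, n, trials = 0.8, 100, 100_000

proportions = []
for _ in range(trials):
    x = sum(random.random() < p for _ in range(n))  # one binomial draw
    proportions.append(x / n)

mean = sum(proportions) / trials
var = sum((q - mean) ** 2 for q in proportions) / trials
print(f"empirical mean of p': {mean:.4f}   (theory: {p})")
print(f"empirical variance  : {var:.6f} (theory: {p * (1 - p) / n:.6f})")
```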
If the value of n is large and if p’ is not close to 0 or 1, the binomial distribution can be approximated by the normal distribution. With the help of the table of the cumulative normal distribution function, intervals containing p’ with a given probability can be determined. These intervals are usually called “confidence intervals”. For instance:

p − 1.96 √(p(1 − p)/n) ≤ p’ ≤ p + 1.96 √(p(1 − p)/n),

with a probability, called a “coefficient of confidence”, which is equal to 0.95.
Considering that the maximal value of p(1 − p) is 0.25, we get:

|p’ − p| ≤ K/(2√n),

that is to say:

p’ − K/(2√n) ≤ p ≤ p’ + K/(2√n),

where K is a real number depending on the chosen coefficient of confidence and where ∊ = K/(2√n) is the accuracy of the estimation of p.
Therefore, it is possible to estimate p from n and p’ without overestimating its value. For instance, with a coefficient of confidence of 0.95 (K = 1.96):

p’ − 0.98/√n ≤ p ≤ p’ + 0.98/√n.    (1)
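For illustration, Equation (1) can be evaluated directly; the following Python sketch (the function name is ours) reproduces the worked example of the Application section, p’ = 0.8 from a sample of 100:

```python
from math import sqrt

def confidence_interval(p_prime: float, n: int, k: float = 1.96):
    """Conservative bounds of Equation (1): p' +/- K/(2*sqrt(n)).

    Uses the bound p(1 - p) <= 0.25; K = 1.96 corresponds to a
    coefficient of confidence of 0.95.
    """
    eps = k / (2 * sqrt(n))  # accuracy of the estimation of p
    return p_prime - eps, p_prime + eps

low, high = confidence_interval(0.8, 100)
print(f"p in [{low:.2f}, {high:.2f}] with probability >= 0.95")  # [0.70, 0.90]
```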
It is not surprising that the accuracy of the verification increases with the size of the sample (Fig. 2). Thus, for a proportion p’ = 50% recorded from a sample of 100, it is possible to assert that the odds are at least 95 out of 100 that the reliability of the tested model lies between 40% and 60%. The same proportion p’, recorded from a sample of 1000 would allow one to assert, with the same probability, that the reliability of the model is between 47% and 53% (rounded values). In fact, the accuracy varies in proportion to the square root of the sample size. Thus, in order to double the accuracy, the size of the sample must be multiplied by 4: accuracy turns out to be “expensive”. Certainty is also expensive: to reach a probability of 99% instead of 95% for the preceding estimation, the sample size has to be increased to 1877 instead of 1000.
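The “cost” of accuracy can be made explicit by inverting ∊ = K/(2√n). A minimal sketch (illustrative only; the function name and the rounded value K ≈ 2.6 for a 99% coefficient of confidence are our assumptions):

```python
from math import ceil

def required_sample_size(eps: float, k: float = 1.96) -> int:
    """Smallest n such that K/(2*sqrt(n)) <= eps, i.e. n = (K/(2*eps))**2."""
    return ceil((k / (2 * eps)) ** 2)

print(required_sample_size(0.10))          # 97: about 100 forecasts for +/-10%
print(required_sample_size(0.05))          # 385: halving eps quadruples n
print(required_sample_size(0.03, k=2.6))   # 1878: cf. the ~1877 quoted above for 99%
```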
In order to carry out a verification which might be usable in making forecasts, it is necessary to know the proportion of exact forecasts which can be expected from the model for the two forms of the forecast (occurrence and non-occurrence of the event). So it is necessary to evaluate the proportions of exact forecasts p1 and p2 from the proportions of exact forecasts p1’ and p2’, calculated respectively from a sample of occurrence and non-occurrence predictions (Fig. 3). These proportions p1 and p2 will show the model user how much he can trust a given forecast.
Application
The following example is given to show the use of inferential statistics in verifying predictions. A very simple model has been outlined which is expected to provide forecasts from cursory data. This model predicts whether at least one avalanche will occur in a given area during a given day. The model is based on the fact that most avalanches occur either during snowfalls, when snow is melting or after heavy snow transport by wind. This suggests that the occurrence of avalanches is a function of the thickness of the top layer of non-cohesive snow, on one hand, and of the intensity of the snow drift, on the other hand, which can be translated into the following formula:
A ⟺ (Ps > α) or (FFt > β),

with A, occurrence of at least one avalanche during the following hours; Ps, sinking of the first ram-penetrometer tube; FFt, amount of snow caught by the driftometer (Bolognesi, 1997) in 24 hours; α, β, real parameters.
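For illustration only, the model as reconstructed above amounts to a one-line decision rule. This Python sketch assumes the disjunctive threshold reading of the formula (the reading itself and the function name are our assumptions, not taken from the original paper):

```python
def predict_avalanche(ps: float, fft: float,
                      alpha: float = 0.05, beta: float = 0.0) -> bool:
    """Sketch of the simple model, assuming A <=> (Ps > alpha) or (FFt > beta).

    ps:  sinking of the first ram-penetrometer tube
    fft: snow caught by the driftometer in 24 hours
    The defaults alpha = 0.05 and beta = 0 are the values tested below.
    """
    return ps > alpha or fft > beta
```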
This simple model will now be evaluated according to the principles laid down previously. We have at our disposal a number of cases for which forecasts can be produced (the necessary data being available) and verified (both the occurrence and the non-occurrence of avalanches being established by the success or failure of daily attempts to trigger avalanches in all the couloirs of a given group). Each forecast established by the model is verified in the following way: an occurrence forecast is declared exact if at least one avalanche takes place, either naturally or during the trials, and a non-occurrence forecast is classified as exact if no avalanche occurs in spite of the release attempts.
In total, 278 such verified forecasts are available (this represents more than 5000 attempts to trigger avalanches!).
The proportion of exact forecasts which was calculated from a random sample of 100 forecasts, with α = 0.05 and β = 0, is 0.8.
The hypothesis that the proportion p’ of correct forecasts is an approximately Gaussian random variable is entirely plausible, as shown by the distribution function of this variable (Fig. 4).
So we can state, with a probability of at least 0.95, that the proportion of exact forecasts which we can hope for in applying the model proposed above, lies between 0.7 and 0.9 (see Equation (1)).
The proportions of exact forecasts calculated from a random sample of 100 occurrence forecasts and from a random sample of 100 non-occurrence forecasts are both nearly equal to 0.8. This information is vital when using the model for operational forecasts: for instance, the user knows that occurrence predictions have to be taken into account because they are often right.
Discussion
The method of verification proposed here can be used to test all types of models (which do not include any automatic learning procedure), provided that their forecasts or diagnoses are verifiable, that is to say, comparable to observed events or measured values. However, some authors use other verification procedures when this requirement is not fulfilled (Bois and Obled, 1976; Giraud and others, 1994; Föhn and Schweizer, 1995).
This method is designed to help to make use of a model by indicating the probability that a predicted event will occur; therefore this indication is of great interest to forecasters who have to know how much they can trust a given prediction. However, knowing the probabilities that events will be predicted is a prerequisite to evaluating whether a model is usable or not. These probabilities can be calculated according to the same principles. Let us imagine a first (realistic) practical case: 1450 predictions from an avalanche-forecasting model are compared to observed events; this comparison shows that 350 avalanches occurred, whereas 400 avalanches were predicted and that 50 unexpected avalanches occurred (Fig. 5a).
We can infer from this comparison (recomputed in the sketch after the following list) that:
The probability of any prediction being exact is 0.90 ± ∊1 .
The probability of any occurrence prediction being exact is 0.75 ± ∊2 .
The probability of any non-occurrence prediction being exact is 0.95 ± ∊3 .
The probability of any avalanche being predicted is 0.86 ± ∊4 .
The probability of any non-occurrence being predicted is 0.91 ± ∊5 .
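These five proportions follow directly from the contingency counts of Figure 5a; the short sketch below (variable names are ours) recomputes them:

```python
# Recomputing the five probabilities of the first practical case from its
# contingency counts (values taken from Fig. 5a).
n_total      = 1450   # verified forecasts
n_pred_occ   = 400    # occurrence predictions
n_obs        = 350    # observed avalanches
n_unexpected = 50     # avalanches that were not predicted

hits        = n_obs - n_unexpected        # 300 correct occurrence predictions
n_pred_non  = n_total - n_pred_occ        # 1050 non-occurrence predictions
correct_non = n_pred_non - n_unexpected   # 1000 correct non-occurrence predictions

print((hits + correct_non) / n_total)     # 0.90: any prediction exact
print(hits / n_pred_occ)                  # 0.75: occurrence prediction exact
print(correct_non / n_pred_non)           # 0.95: non-occurrence prediction exact
print(hits / n_obs)                       # 0.86: avalanche predicted
print(correct_non / (n_total - n_obs))    # 0.91: non-occurrence predicted
```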
According to these results, we can consider that this model is of interest to forecasters.
Now, let us imagine a second practical case: we have to evaluate a model which invariably predicts that no avalanche will occur (Fig. 5b). Because avalanches are not frequent events (24% in our example), this model seems to be reliable: the probability of any prediction being correct is 0.76 ± ∊. But the probability of any occurrence prediction being exact cannot be calculated and the probability that at least one avalanche will be predicted is 0! Consequently we can consider that such a model is fanciful.
Finally, the method can be extended to the verification of probabilistic forecasts. To do this, we have to compare predicted probabilities with observed frequencies of events. The model could be claimed perfect if, out of m occurrences predicted with probability P, mP occurrences are observed as m tends towards infinity, whatever the value of P.
So, to measure the reliability of a probabilistic prediction model, it is sufficient to estimate the probable frequencies of occurrence of the event (according to the principles presented in the Principles section) for the various classes of probability calculated by the model.
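In practice, this amounts to building a reliability (calibration) table. The following sketch is one possible implementation, not the original procedure; the function name and the equal-width binning are our assumptions:

```python
from collections import defaultdict

def reliability_table(forecasts, outcomes, n_bins=10):
    """Group probabilistic forecasts into probability classes and return,
    for each class, the sample size m and the observed frequency of
    occurrence (to be compared with the class probability P)."""
    bins = defaultdict(lambda: [0, 0])  # class lower bound -> [m, occurrences]
    for prob, occurred in zip(forecasts, outcomes):
        c = min(int(prob * n_bins), n_bins - 1) / n_bins
        bins[c][0] += 1
        bins[c][1] += int(occurred)
    return {c: (m, occ / m) for c, (m, occ) in sorted(bins.items())}

# A perfect model would show, for each class P, an observed frequency close
# to P, within the confidence interval implied by the class sample size m.
```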
Conclusion
The verification of a prediction model is neither a simple task nor an insignificant operation capable of being reduced to a few improvised tests.
Verification requires consistent samples, the gathering of which may demand more time (and money) than the creation of the model itself. Therefore, the design of the verification procedures should be an imperative preliminary stage for all scientific modelling projects, for it is futile to create a model incapable of being verified.
Acknowledgements
Thanks are due to B. Corboz, O. Buser and F. Taillard for their helpful co-operation.