Slide 35: How good are the predictions?
Any evaluation of the model, and any final assessment of its accuracy, entails some process by which the predictions are appraised. The model output typically consists of probability values (that is, the likelihood of observing the species at a given site), whereas the observed data are presence-absence records. These two cannot be compared directly, so to get around this a threshold probability must be set or derived, above which the species is counted as predicted present. Using this threshold value, a misclassification matrix (or confusion matrix) can be built up, as shown in the table at the bottom of the page.
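The thresholding step and the resulting matrix can be sketched as follows. This is a minimal illustration, not the slide's own method: the function name, the example data, and the 0.5 threshold are all assumptions for demonstration (in practice the threshold would be derived from the data, e.g. to maximise agreement with the observations).

```python
def confusion_matrix(probs, observed, threshold=0.5):
    """Tally a 2x2 misclassification matrix from predicted
    probabilities and observed presence-absence (1/0) records.
    A site is counted as predicted present when its probability
    meets or exceeds the threshold."""
    tp = fp = fn = tn = 0
    for p, obs in zip(probs, observed):
        predicted_present = p >= threshold
        if predicted_present and obs == 1:
            tp += 1   # predicted present, observed present
        elif predicted_present and obs == 0:
            fp += 1   # predicted present, observed absent
        elif not predicted_present and obs == 1:
            fn += 1   # predicted absent, observed present
        else:
            tn += 1   # predicted absent, observed absent
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

# Hypothetical model probabilities vs. field observations at six sites
probs = [0.9, 0.7, 0.4, 0.6, 0.2, 0.1]
obs = [1, 1, 1, 0, 0, 0]
print(confusion_matrix(probs, obs, threshold=0.5))
# → {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 2}
```

Note how the choice of threshold directly shifts counts between the cells: lowering it converts false negatives into true positives but also true negatives into false positives.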