Null hypothesis: Residuals are not autocorrelated (the alternative is that they are positively autocorrelated).
The DW statistic is 1.087. With 17 observations and K-1 = 2 independent variables, the lower and upper critical values are 1.02 and 1.54, respectively. The DW statistic therefore falls in the inconclusive range: we cannot say whether the null hypothesis should be rejected. (You should also calculate the DW statistic by hand using the formula on page 118. You should get the same answer by hand as you get using SAS.)
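As a reminder of what SAS is doing, here is a minimal sketch of both routes: the DW option on the MODEL statement in PROC REG, and the hand calculation DW = [sum over t=2..n of (e_t - e_(t-1))^2] / [sum over t=1..n of e_t^2] applied to the output residuals. The dataset name FISH and the response name Y are placeholders; substitute the actual names from the problem.

    * Fit the regression and request the Durbin-Watson statistic;
    proc reg data=fish;
      model y = year area / dw;
      output out=resids r=resid;    * save the residuals for later use;
    run;

    * Recompute DW "by hand" from the residuals;
    data _null_;
      set resids end=last;
      e_lag = lag(resid);                        * previous residual;
      if _n_ > 1 then num + (resid - e_lag)**2;  * numerator: sum of squared differences;
      den + resid**2;                            * denominator: sum of squared residuals;
      if last then do;
        dw = num / den;
        put 'Durbin-Watson calculated by hand: ' dw;
      end;
    run;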
Since the DW test is inconclusive, further investigation is warranted. Regressing the residuals against YEAR, and plotting them, gives a nice picture of any pattern. There is CLEARLY a pattern: a straight downward-sloping stretch is followed by a curvilinear pattern of decreasing, then increasing, then again decreasing residuals. Not good!! This points to possible autocorrelation.
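Continuing from the RESIDS data set created in the sketch above, the plot (and, if you like, the regression of the residuals on YEAR) can be produced like this:

    * Plot the residuals against YEAR;
    proc plot data=resids;
      plot resid*year;
    run;

    * Optional: regress the residuals on YEAR to help see the trend;
    proc reg data=resids;
      model resid = year;
    run;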
a) Therefore, we shall look at the matrix of correlations from the multiple regression, which tells us how strongly the regressors (and hence their estimated coefficients) are correlated with each other; a short SAS sketch of both checks follows item b). The correlation between AREA and YEAR is -0.9522. This is high and indicates multicollinearity.
b) Another way to check for multicollinearity is to regress YEAR on AREA (or AREA on YEAR) and look at the R-squared. It is 0.9067 (in both cases). This is very high and indicates multicollinearity. The square root of the R-squared (0.9522) is the same as the correlation coefficient in absolute value.
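Here is a sketch of both checks, again with placeholder names: the CORR option on the PROC REG statement prints the correlation matrix of the variables, CORRB prints the correlations among the coefficient estimates, TOL and VIF give the tolerances and variance inflation factors, and the second step regresses YEAR on AREA.

    * Check a): correlations, tolerances, and VIFs from the multiple regression;
    proc reg data=fish corr;
      model y = year area / tol vif corrb;
    run;

    * Check b): regress one X on the other and look at the R-squared;
    proc reg data=fish;
      model year = area;
    run;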
As for multicollinearity, there is no doubt: it is there. Again, the book offers several suggestions for dealing with multicollinearity (p. 136). In this case, our best option is probably to throw out a variable, probably YEAR.
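If YEAR is dropped, the refit is a one-liner (placeholder names again):

    * Refit without YEAR to remove the multicollinearity;
    proc reg data=fish;
      model y = area;
    run;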
Let's look at the effects of multicollinearity in the linear and polynomial regressions for both D and C. For D, the standard error on LATITUDE is actually smaller in the polynomial regression than in the linear regression (0.0006 vs. 0.006). Furthermore, LATITUDE is not significant in the linear regression (p=0.3843), but it is significant in the polynomial regression (p=0.0001). This does not match the typical pattern of multicollinearity! What's going on? It looks like the problem of curvilinearity is overshadowing multicollinearity: the linear model is badly misspecified, so its residual variance (and with it the standard errors) is inflated, and adding the squared term improves the fit enough to outweigh the variance inflation caused by the collinearity. The careful regression practitioner looks at the tolerances and the correlation matrix to uncover the lurking multicollinearity.
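To uncover it for D, request the tolerances and the correlation matrix in the polynomial fit. A sketch, with the squared term created in a DATA step first (the dataset and response names are placeholders):

    * Build the squared term for the polynomial model;
    data fish2;
      set fish;
      latitude2 = latitude**2;
    run;

    * Polynomial fit with tolerances, VIFs, and the correlation matrix;
    proc reg data=fish2 corr;
      model d = latitude latitude2 / tol vif;
    run;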
How about C? The situation is slightly different. The standard error on LATITUDE is smaller in the linear regression than in the polynomial regression (0.0034 vs. 0.0358). Nevertheless, LATITUDE is not significant in the linear regression (p=0.4165), but it is significant in the polynomial regression (p=0.0004).
Looking at all of the evidence, we conclude that multicollinearity exists between the X variables.
Now on to C. Again, many numbers stay the same when using centered variables: the coefficient on the squared term, the R-squared, and the standard error (and p value) on the squared term all stay the same! As with D, the standard error on CENLAT is smaller than on LATITUDE (0.0027 vs. 0.0358).
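Centering itself is just a DATA step: subtract the mean latitude and square the centered value. A sketch, pulling the mean into a macro variable with PROC SQL so it is not hard-coded (dataset and response names are placeholders):

    * Grab the mean of LATITUDE;
    proc sql noprint;
      select mean(latitude) into :mlat from fish;
    quit;

    * Create the centered variable and its square, then refit;
    data fish3;
      set fish;
      cenlat  = latitude - &mlat;
      cenlat2 = cenlat**2;
    run;

    proc reg data=fish3;
      model c = cenlat cenlat2 / tol vif;
    run;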
Similar to D, the p value on CENLAT is higher than on LATITUDE, only this time it is so high that we do not reject the null hypothesis that B1=0! (The p value is 0.0923, compared to 0.0004 in the non-centered regression.) What is going on?! Well, first let's realize that we have banished multicollinearity; our tolerances are quite acceptable. Next, let's look at a scatterplot of predicted C vs. CENLAT (from the regression using CENLAT and CENLAT squared). Do you see how the parabola reaches its minimum point at just a little over zero? The fact that the coefficient on CENLAT is not significant means that in real life the parabola may well reach its minimum at zero itself, and not above zero as we see in the scatterplot. If you remember the algebra of a parabola (its vertex sits at CENLAT = -B1/(2*B2), so B1 = 0 would put the minimum exactly at zero), you will understand why this is so. Otherwise, worry not, but rather be content that we have solved the multicollinearity problem for both D and C.
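To reproduce that scatterplot, output the predicted values from the centered fit and plot them against CENLAT (continuing from the FISH3 data set in the previous sketch):

    proc reg data=fish3;
      model c = cenlat cenlat2;
      output out=preds p=chat;    * predicted values of C;
    run;

    * The parabola, with its minimum just above zero;
    proc plot data=preds;
      plot chat*cenlat;
    run;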