On Likelihood

Michael Lavine. Dept of Mathematics & Statistics, U. Mass. Amherst

Both philosophically and in practice, statistics is dominated by frequentist (classical) and Bayesian thinking. But an alternative --- likelihood --- deserves more prominence. This paper describes a personal view of likelihood thought. The essence, applied to model comparison, is
1) models and parameter values within models should be compared by how well they describe data;
2) how well a model describes data is the likelihood function p(data | model);
3) therefore the likelihood function is a fundamental object of statistical inference.
Many statisticians agree with point (3) but act and write in ways that seem to us to conflict with points (1) and (2). A major purpose of this manuscript is to elucidate the implications of (1) and (2).
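As a concrete illustration of points (1) and (2), the following sketch compares two candidate models of the same data set by their likelihoods, each evaluated at its maximum-likelihood estimate. The data and the pair of models (Poisson versus geometric) are hypothetical, chosen only to make the comparison tangible; nothing here depends on either model being true.

```python
# A minimal sketch: comparing two candidate models of the same count
# data by how well each describes it, i.e., by likelihood.
import numpy as np
from scipy import stats

data = np.array([2, 5, 3, 4, 6, 2, 3, 5, 4, 3])  # illustrative counts

# Model A: Poisson(lambda), evaluated at its maximum-likelihood estimate.
lam_hat = data.mean()
loglik_a = stats.poisson.logpmf(data, lam_hat).sum()

# Model B: geometric(p) on {1, 2, ...}, at its maximum-likelihood estimate.
p_hat = 1.0 / data.mean()
loglik_b = stats.geom.logpmf(data, p_hat).sum()

# The model with the larger log-likelihood describes these data better;
# no notion of a "true" model is invoked.
print(f"log-likelihood, Poisson:   {loglik_a:.2f}")
print(f"log-likelihood, geometric: {loglik_b:.2f}")
print(f"likelihood ratio (A vs B): {np.exp(loglik_a - loglik_b):.1f}")
```

The output ranks the two descriptions of these particular data; it makes no claim about repeated sampling, coverage, or convergence to a true value.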

In a given problem, we do not necessarily believe that any model or set of parameters within a model is true, even if the model is non-parametric. When there are no true values and no true model, models and their parameters are only more or less elaborate and accurate descriptions of data. There is no meaning to the probability of accepting or rejecting a true hypothesis, the accuracy of estimating a true value, the bias of an estimator, the rate at which an estimator converges to a true value, the probability with which a confidence procedure covers a true value, or the probability that a true value lies in a given set. Nonetheless, it is still possible to quantify how well a data set is described by one model relative to another. That quantification is given by likelihood.
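For example, if one model assigns the observed data probability p(data | model 1) = 0.08 and a second assigns p(data | model 2) = 0.02, then the first describes these data four times as well as the second; the comparison is meaningful whether or not either model is true.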

Not all statistical problems need be viewed as data description. But many problems that are currently framed as estimation, testing, regression, decision, and even prediction problems can be usefully recast as data description and viewed through the lens of likelihood. Likelihood thought deserves more prominence in statistics for the alternative perspective it provides.