Posterior predictive model checks for disease mapping models
Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates, disease counts divided by the total population at risk, have been criticized as unreliable because of the non-constant variance associated with heterogeneity in base population size. This has led to the use of model-based Bayes or empirical Bayes point estimates for map creation. Because such maps have important epidemiological and political consequences (for example, they are often used to identify small areas with unusually high or low unexplained risk), it is important that the assumptions of the underlying models be scrutinized. We review the use of posterior predictive model checks, which compare features of the observed data with the same features of replicate data generated under the model, for assessing model fit. One crucial issue is whether extrema are potentially important epidemiological findings or merely evidence of poor model fit. We propose the use of the cross-validation posterior predictive distribution, obtained by reanalysing the data without a suspect small area, as a method for assessing whether the observed count in that area is consistent with the model. Because it may not be feasible to actually reanalyse the data for each suspect small area in large data sets, two methods for approximating the cross-validation posterior predictive distribution are described. Copyright 2000 John Wiley and Sons, Ltd.
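As a minimal illustration of the posterior predictive check described above, the sketch below uses a simple conjugate Poisson-Gamma model for area-level counts; this is a hedged stand-in for the hierarchical disease-mapping models the paper considers, and the data, prior values, and test statistic (the maximum standardized morbidity ratio) are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: observed counts y_i and expected counts E_i
# for ten small areas (illustrative values, not from the paper).
y = np.array([12, 5, 31, 8, 14, 3, 22, 9, 17, 6])
E = np.array([10.2, 6.1, 14.8, 7.5, 13.0, 4.2, 18.9, 8.8, 15.5, 5.9])

# Illustrative conjugate model: y_i ~ Poisson(E_i * theta_i),
# theta_i ~ Gamma(a, b), so theta_i | y_i ~ Gamma(a + y_i, b + E_i).
a, b = 2.0, 2.0   # assumed fixed hyperparameters
S = 5000          # number of posterior draws

# Posterior samples of the relative risks, then replicate data
# generated under the model (one replicate map per posterior draw).
theta = rng.gamma(a + y, 1.0 / (b + E), size=(S, len(y)))
y_rep = rng.poisson(E * theta)

# Test statistic sensitive to extreme areas: the maximum
# standardized morbidity ratio (SMR) across the map.
T_obs = np.max(y / E)
T_rep = np.max(y_rep / E, axis=1)

# Posterior predictive p-value: the proportion of replicate maps
# whose statistic is at least as extreme as the observed one.
p = np.mean(T_rep >= T_obs)
print(f"posterior predictive p-value for max SMR: {p:.3f}")
```

A small p-value would suggest that the most extreme area is more extreme than the model can account for, which is exactly the ambiguity the abstract raises: the area may be a genuine epidemiological finding, or the model may fit poorly, and the cross-validation posterior predictive distribution (refitting without the suspect area) is the proposed way to distinguish the two.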