Spatial statistical analysis and its consequences for spatial sampling
Spatial sampling of continuous resources does not fit comfortably into the traditional theory of survey sampling, which is predicated on a finite population of units to be sampled. A geostatistical model of the underlying phenomenon provides a powerful way of predicting unobserved parts of the population. Even when the observable process is contaminated with measurement error, there is a straightforward way to filter the error out by appropriately modifying the kriging equations. These methods of spatial analysis assume that the data locations are deterministically chosen; in many instances, however, they are obtained from a (spatial) sampling design. The design typically includes a component of randomization to guard against a biased sample and to provide a mechanism for computing means and variances of, say, estimated population totals. Indeed, some spatial-sampling literature has claimed, incorrectly, that only a randomization-based design and analysis is appropriate for inference in a spatial setting. In this paper, we show that this approach is unnecessarily restrictive and, if followed, leads to inflexible and inefficient statistical analyses. Under circumstances where both local and nonlinear predictions are needed, it is demonstrated that appropriate geostatistical analyses perform extremely well, even when the designs are randomization-based. Randomization has a role to play in controlling bias but, once the sampling locations are chosen, this paper shows why (and how) spatial-proximity information should be used.
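The measurement-error filtering mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: under an assumed model Z(s) = Y(s) + ε, with Y a zero-mean process with a (here, exponential) covariance and ε white measurement error with variance τ², the simple-kriging predictor of the noise-free Y(s₀) adds τ² to the data covariance matrix (left-hand side) but omits it from the cross-covariance vector (right-hand side), which is what filters the error out. The function names, covariance model, and parameter values are illustrative choices.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=10.0):
    # Exponential covariance of the noise-free process Y at lag distance h.
    return sill * np.exp(-h / rng)

def filtered_kriging(coords, z, s0, tau2=0.2, sill=1.0, rng=10.0):
    """Simple-kriging prediction of the noise-free Y(s0) from noisy data Z.

    The measurement-error variance tau2 is added to the data covariance
    matrix only; the cross-covariance vector uses the noise-free model,
    so the predictor smooths rather than interpolates the noisy data.
    Assumes a known zero mean (simple kriging).
    """
    n = len(z)
    # Pairwise distances among data locations, and distances to s0.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = exp_cov(d, sill, rng) + tau2 * np.eye(n)          # Cov(Z_i, Z_j)
    c0 = exp_cov(np.linalg.norm(coords - s0, axis=1), sill, rng)  # Cov(Z_i, Y(s0))
    w = np.linalg.solve(K, c0)                             # kriging weights
    pred = w @ z                                           # predictor of Y(s0)
    var = sill - w @ c0                                    # kriging variance
    return pred, var

# Illustrative use with synthetic noisy observations.
gen = np.random.default_rng(0)
coords = gen.uniform(0.0, 20.0, size=(15, 2))
z = gen.normal(size=15)
pred, var = filtered_kriging(coords, z, coords[0])
```

Note that even at an observed location the kriging variance stays strictly positive when τ² > 0, reflecting that the noise-free value Y(s₀) is never observed exactly.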