
A peer-reviewed electronic journal. ISSN 1531-7714

Copyright is retained by the first or sole author, who grants right of first publication to Practical Assessment, Research & Evaluation.

Yu, Chong Ho (2003). Resampling methods: concepts, applications, and justification. Practical Assessment, Research & Evaluation, 8(19). Retrieved March 26, 2015 from http://PAREonline.net/getvn.asp?v=8&n=19.
## Resampling methods: Concepts, Applications, and Justification

Chong Ho Yu
Aries Technology/Cisco Systems

## Introduction

In recent years many emerging statistical analytical tools, such as exploratory data analysis (EDA), data visualization, robust procedures, and resampling methods, have been gaining attention among psychological and educational researchers. Nevertheless, many researchers continue to embrace traditional statistical methods rather than experiment with these newer techniques, even when the data structure does not meet certain parametric assumptions. Three factors contribute to this conservative practice. First, newer methods are generally not included in statistics courses, and as a result their concepts remain obscure to many people. Second, in the past most software developers devoted their efforts to programming statistical packages for conventional data analysis; even when researchers are aware of these new techniques, limited software availability hinders them from implementing them. Last, even with awareness of the concepts and access to the software, some researchers hesitate to apply what they perceive as "marginal" procedures. Traditional procedures are seen as resting on solid theoretical justification and empirical substantiation, while newer techniques face harsh criticism and appear to lack theoretical support.
## What is resampling?

Classical parametric tests compare observed statistics to theoretical sampling distributions. Resampling is a revolutionary methodology because it departs from theoretical distributions: the inference is instead based upon repeated sampling within the same sample, which is why this school is called resampling.
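The contrast can be made concrete with a minimal sketch of a two-sample randomization test. The data, function name, and resample count below are illustrative, not from the article: instead of comparing the observed mean difference to a theoretical t distribution, group labels are repeatedly reshuffled within the combined sample and the difference is recomputed each time.

```python
import random

def permutation_test(group_a, group_b, n_resamples=10_000, seed=42):
    """Approximate two-sample randomization test on the difference in means.

    Labels are repeatedly shuffled within the pooled sample; the p-value
    is the share of shuffles whose mean difference is at least as extreme
    as the observed one (no theoretical distribution involved).
    """
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n_a]) / n_a
                - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_resamples  # two-sided p-value estimate

p = permutation_test([12, 15, 14, 16, 13], [10, 9, 11, 12, 10])
```

With these hypothetical scores the two groups barely overlap, so only a small fraction of the reshuffles reproduce a difference as large as the observed one.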
## Types of resampling

There are at least four major types of resampling. Although today they are unified under a common theme, it is important to note that these four techniques were developed by different people at different periods of time for different purposes.
## Software for resampling
Although the standard errors from the simulations are larger than the observed standard errors, both the observed beta weights of X
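The article's software example is truncated at this point. As a hedged sketch of what computing a bootstrap standard error for a regression beta weight involves (the data, function names, and resample count here are hypothetical, not the article's own example): the observed (x, y) pairs are resampled with replacement, the slope is recomputed for each bootstrap sample, and the spread of those recomputed slopes estimates the standard error.

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope (beta weight) of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def bootstrap_se(xs, ys, n_resamples=2000, seed=1):
    """Bootstrap standard error of the slope: resample (x, y) pairs
    with replacement and take the standard deviation of the slopes."""
    rng = random.Random(seed)
    pairs = list(zip(xs, ys))
    betas = []
    for _ in range(n_resamples):
        sample = [rng.choice(pairs) for _ in pairs]
        bx, by = zip(*sample)
        betas.append(slope(bx, by))
    mean_b = sum(betas) / len(betas)
    var = sum((b - mean_b) ** 2 for b in betas) / (len(betas) - 1)
    return var ** 0.5

xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [2.1, 4.3, 5.9, 8.2, 9.8, 12.3, 14.1, 15.8, 18.2, 20.1]
se = bootstrap_se(xs, ys)
```

Because these illustrative data fit a line closely, the bootstrap slopes vary little and the resulting standard error is small.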
## Rationale for supporting resampling

Supporters of resampling have raised a number of reasons to justify the aforementioned techniques:
## Criticisms of resampling

Despite these justifications, some methodologists are skeptical of resampling for the following reasons:
## Resampling, probabilistic inference, and counterfactual reasoning

Can findings resulting from resampling be considered probabilistic inferences? The answer depends on the definition of probability. In the Fisherian tradition, probability is expressed in terms of relative long-run frequency based upon a hypothetical and infinite distribution. Resampling is not considered a true probabilistic inference if the inferential process must bridge the sample and a theoretical distribution. However, is this mode of inference just a convention? Why must the foundation of inference be theoretical distributions? Fisher asserted that theoretical distributions against which observed effects are tested have no objective reality, "being exclusively products of the statistician's imagination through the hypothesis which he has decided to test" (Fisher, 1956, p. 81). In other words, he did not view distributions as outcomes of empirical replications that might actually be conducted (Yu & Ohlund, 2001).

Lunneborg (2000) emphasized that resampling is a form of "realistic data analysis" (p. 556), for he realized that the classical method of comparing observations to models may not always be appropriate. To be specific, how can we justify collecting empirical observations but using a non-empirical reference to make an inference? Are the t, F, Chi-square, and many other distributions merely human conventions, or do they exist independently of the human world, like the Platonic reality? Either way, the basis of such distributions remains the same: they are theoretical in nature and not empirically derived. As a matter of fact, there are alternative ways to construe probability. According to von Mises (1928/1957, 1964) and Reichenbach (1938), probability is the empirical and relative frequency based upon a finite sample.
Other schools of thought conceive probabilistic inference in a logical or subjective fashion. Downplaying resampling by restricting the meaning of probabilistic inference to a single school of probability may be counter-productive.
## Conclusion

More than a decade ago, Noreen (1989), an advocate of resampling methods, made this optimistic prediction: "The next few years are likely to be an exciting period for those involved in testing hypotheses. Recent dramatic decreases in the costs of computing now make revolutionary methods for testing hypothesis available to anyone with access to a personal computer" (p. 1). However, this anticipation was largely unfulfilled during the early 1990s. One possible explanation is that at that time data analysts had to write their own programs in BASIC, PASCAL, or FORTRAN to perform resampling procedures. But as illustrated earlier, today resampling features are readily accessible in mainstream statistical software, and the hope of entering an "exciting period" of data analysis seems more realistic. More importantly, resampling does not completely depart from conventional methods: philosophically speaking, both are built upon the foundation of counterfactual reasoning, which is inherent in Fisherian experimental design and hypothesis testing. Today the obstacles of computing resources and mathematical logic have been removed. Perhaps now researchers can pay more attention to the philosophical justification of resampling.
## Notes

1. In this swapping process the order of observations does not matter. The researcher does not care whether the order in one group is "Jody, Candy, Andy" (JCA) or "Candy, Jody, Andy" (CJA). For this reason, the term "permutation" is viewed as a misnomer by some mathematicians. In mathematics, there is a difference between a permutation, in which order matters, and a combination, in which it does not.
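The permutation/combination distinction the note draws can be checked with Python's standard-library counting functions (the three-name example mirrors the Jody/Candy/Andy illustration; the variable names are illustrative):

```python
from math import comb, perm

# Choosing 2 of 3 names: permutations count ordered arrangements,
# combinations count unordered selections.
ordered = perm(3, 2)    # 3!/1! = 6  (JC, CJ, JA, AJ, CA, AC)
unordered = comb(3, 2)  # 3!/(2!*1!) = 3  ({J,C}, {J,A}, {C,A})
```

Since group membership in the swapping process ignores order, the relevant count is the combination count, which is why "permutation test" is arguably a misnomer.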
## References

Ang, R. P. (1998). Use of Jackknife statistic to evaluate result replicability.

The author may be reached at: Chong Ho Yu, Ph.D.

Descriptors: Resampling; Bootstrap; Jackknife; Inference; Counterfactual; Statistical Methods