Recently, AdaBoost has been widely used in many computer vision applications and has shown promising results. However, its classification performance is often poor when the training sample set is small. In many situations, a large pool of unlabelled samples is available, but labelling them is costly and time-consuming; it is therefore desirable to select only a few informative samples to be labelled. The key question is how to select them. In this paper, we integrate active learning with AdaBoost to address this problem. The principal idea is to select, as the next sample to be labelled, the unlabelled sample at the minimum distance from the optimal AdaBoost hyperplane derived from the current set of labelled samples. Using the concept of version space, we prove that this selection strategy yields the fastest expected learning rate. Experimental results on both artificial and standard databases demonstrate the effectiveness of the proposed method.