Breiman L, Friedman JH, Olshen RA and Stone CJ, 1984. Classification and Regression Trees, Wadsworth.

Catlett J, 1991. “On changing continuous attributes into ordered discrete attributes”. In: Kodratoff Y (ed.), Proceedings of the European Working Session on Learning.

Cost S and Salzberg S, 1993. “A weighted nearest neighbour algorithm for learning with symbolic features”. Machine Learning 10, 57–78.

Dasarathy BV, 1991. Nearest Neighbour (NN) Norms: NN Pattern Classification Techniques. IEEE Press.

Devijver PA and Kittler J, 1980. “On the edited nearest neighbour rule”. In: Proceedings of the Fifth International Conference on Pattern Recognition, 72–80.

Fayyad UM and Irani KB, 1993. “Multi-interval discretization of continuous valued attributes for classification learning”. In: Proceedings of the 13th International Joint Conference on Artificial Intelligence, 1022–1027, Morgan Kaufmann.

Fix E and Hodges JL, 1951. “Discriminatory analysis—nonparametric discrimination: consistency properties”. Project 21–49–004, Report No. 4, USAF School of Aviation Medicine, Randolph Field, TX, 261–279.

Fix E and Hodges JL, 1952. “Discriminatory analysis—nonparametric discrimination: small sample performance”. Project 21–49–004, Report No. 11, USAF School of Aviation Medicine, Randolph Field, TX, 280–322.

Hart PE, 1968. “The condensed nearest neighbour rule”. IEEE Transactions on Information Theory IT-14 (3).

Kerber R, 1992. “ChiMerge: discretization of numeric attributes”. In: Proceedings of the Tenth National Conference on Artificial Intelligence, 123–128, AAAI Press/MIT Press.

Kononenko I, Bratko I and Roskar E, 1984. “Experiments in automatic learning of medical diagnostic rules”. Technical Report, Jozef Stefan Institute, Ljubljana, Yugoslavia.

Kononenko I, 1993. “Inductive and Bayesian learning in medical diagnosis”. Applied Artificial Intelligence 7, 317–337.

Liu WZ and White AP, 1991. “A review of inductive learning”. In: Graham IM and Milne RW (eds.), Research and Development in Expert Systems VIII, 112–126, Cambridge University Press.

Liu WZ and White AP, 1994. “The importance of attribute selection measures in decision tree induction”. Machine Learning 15, 25–41.

Liu WZ and White AP, 1995. “A comparison of nearest neighbour and tree-based discriminant analysis”. Journal of Statistical Computation and Simulation 53, 41–50.

Quinlan JR, 1986. “Induction of decision trees”. Machine Learning 1, 81–106.

Quinlan JR, 1988. “Decision trees and multi-valued attributes”. Machine Intelligence 11, 305–318.

Quinlan JR and Rivest RL, 1989. “Inferring decision trees using the minimum description length principle”. Information and Computation 80, 227–248.

Rachlin J, Kasif S, Salzberg S and Aha D, 1994. “Towards a better understanding of memory-based and Bayesian classifiers”. In: Proceedings of the Eleventh International Conference on Machine Learning, 242–250, New Brunswick, NJ.

Salzberg S, 1989. “Nested hyper-rectangles for exemplar-based learning”. In: Jantke KP (ed.), Analogical and Inductive Inference: International Workshop AII '89, Springer-Verlag.

Salzberg S, 1990. Learning with Nested Generalized Exemplars, Kluwer Academic.

Salzberg S, 1991. “A nested hyper-rectangle learning method”. Machine Learning 6 (3), 251–276.

Stanfill C and Waltz D, 1986. “Towards memory-based reasoning”. Communications of the ACM 29 (12), 1213–1228.

Swonger CW, 1972. “Sample set condensation for a condensed nearest neighbour decision rule for pattern recognition”. In: Watanabe S (ed.), Frontiers of Pattern Recognition, 511–519.

Ting KM, 1994. “Discretisation of continuous-valued attributes and instance-based learning”. Technical Report 491, Basser Department of Computer Science, University of Sydney.

Tomek I, 1976. “An experiment with the edited nearest neighbour rule”. IEEE Transactions on Systems, Man and Cybernetics SMC-6 (6), 448–452.

Van de Merckt T, 1993. “Decision trees in numerical attribute spaces”. In: Proceedings of the 13th International Joint Conference on Artificial Intelligence, 1016–1021, Morgan Kaufmann.

White AP, 1987. “Probabilistic induction by dynamic path generation in virtual trees”. In: Bramer MA (ed.), Research and Development in Expert Systems III, 35–46, Cambridge University Press.

White AP and Liu WZ, 1990. “Probabilistic induction by dynamic path generation for continuous attributes”. In: Addis TR and Muir RM (eds.), Research and Development in Expert Systems VII, 285–296, Cambridge University Press.

White AP and Liu WZ, 1993. “Fairness of attribute selection in probabilistic induction”. In: Bramer MA and Milne RW (eds.), Research and Development in Expert Systems IX, 209–224, Cambridge University Press.

White AP and Liu WZ, 1994. “Bias in information-based measures in decision tree induction”. Machine Learning 15, 321–329.