To ensure that a machine learning model generalizes well to data it has not seen before, it is important to split the original dataset into training, cross-validation, and test sets to obtain the best possible predictive model.
In machine learning, data collection is critical to generating accurate algorithms that make good predictions. A predictive model is created by training on a set of known examples (the training set).
A credible method is required to test the accuracy of the model after training. Using the same examples for both training and testing is unlikely to give an accurate representation of the model's predictive accuracy, as the model is likely to be biased towards the training set. Thus, the original dataset is usually split to create a separate test set, which is often used to select the best-performing algorithm.
However, selecting an algorithm based on the test set can introduce further bias. Because the algorithm is chosen for its performance on that same test set, this performance is not an accurate representation of generalization to examples the algorithm has never seen (a test set is finite and does not necessarily cover the wide variety of real examples), so the selected algorithm will likely carry an optimistic estimate of the generalization error. Consequently, the original dataset is further split to include a cross-validation set: the cross-validation set is used to select the best-performing algorithm, and the test set is used to estimate the generalization error of that algorithm.
training set
- data points used to train the algorithm
cross validation set
- data points used to select the best algorithm
test set
- data points used to test the selected algorithm for the generalization error/accuracy
A typical split of the original dataset is 60% training set, 20% cross validation set and 20% test set.
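The 60/20/20 split described above can be sketched in a few lines of Python with NumPy. This is a minimal illustration, assuming the data fit in memory as arrays; the function name and default fractions are chosen for this example, not drawn from any particular library.

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle the data once, then carve it into training,
    cross-validation and test sets (60/20/20 by default).
    Each example lands in exactly one of the three sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # random order of example indices
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]       # remainder goes to training
    return (X[train_idx], y[train_idx],
            X[val_idx], y[val_idx],
            X[test_idx], y[test_idx])

# toy data: 100 examples with 3 features each
X = np.arange(300).reshape(100, 3)
y = np.arange(100)
X_tr, y_tr, X_cv, y_cv, X_te, y_te = train_val_test_split(X, y)
print(len(X_tr), len(X_cv), len(X_te))  # 60 20 20
```

In the workflow described above, candidate algorithms would each be fit on `(X_tr, y_tr)`, compared on `(X_cv, y_cv)`, and only the winner evaluated once on `(X_te, y_te)` to estimate its generalization error.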
Related Radiopaedia articles
- artificial intelligence (AI)
- imaging data sets
- computer-aided diagnosis (CAD)
- natural language processing
- machine learning (overview)
- machine learning processes
- machine learning models
- visualizing and understanding neural networks
- common data preparation/preprocessing steps
- DICOM to bitmap conversion
- dimensionality reduction
- principal component analysis
- training, testing and validation datasets
- loss function
- optimization algorithms
- linear and quadratic
- batch normalization
- rule-based expert systems