Training, Validation and Testing datasets


The purpose of validation and test datasets is to assess the skill and accuracy of a machine learning model once it has learned from the training dataset [1].

[Figure: visualisation of dataset split ratios]

The validation dataset is used to evaluate the model while tuning its hyperparameters. It therefore affects the model only indirectly: based on validation performance, the operator assesses and adjusts the hyperparameters, rather than the data changing the learned parameters of the model directly.

The testing dataset is only used once the model has been trained using the training and validation datasets. It is usually well curated, containing data that spans the variety of cases the model would encounter in its intended application. In coding competitions, the testing set is often only revealed near the deadline, and the model's performance is judged on it. Overfitting to the training set or validation set will lead to poorer performance on the testing set.
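As a rough sketch, a common 60/20/20 train/validation/test split can be produced with two successive calls to scikit-learn's train_test_split (the ratios, random seed and toy data below are illustrative assumptions, not fixed rules):

import numpy as np
from sklearn.model_selection import train_test_split

# Toy placeholder data: 1000 samples with 5 features each.
X = np.random.rand(1000, 5)
y = np.random.randint(0, 2, size=1000)

# First hold out 20% as the testing set, then split the remainder
# 75/25 to get 60/20/20 train/validation/test proportions overall.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200

Splitting twice is simply a convenient way to carve out three sets with a function that only splits data into two.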

Sampling

Sampling from a dataset provides several benefits compared to working with the complete dataset, including reduced computational cost and increased speed. It is also a useful technique where a larger dataset is unavailable or incomplete [2].

The purpose of sampling is to estimate the characteristics of a whole population based on a small sample.

Sampling with replacement is where an element may be selected more than once from a dataset: once it has been selected and used or measured, it is returned to the original dataset before the next element is randomly selected. The probability of selecting any particular element therefore does not change between draws.

The opposite is sampling without replacement, where once an element has been removed from the dataset there is no chance of selecting it again. The selection probabilities therefore change as more samples are taken: each remaining element becomes more likely on the next draw, while an element already drawn can never be selected again.
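The difference is easy to see with NumPy's random choice, which takes a replace flag (the seed and toy population below are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(seed=0)
data = np.arange(10)  # a small toy "population"

# With replacement: the same element may appear more than once,
# and every draw gives each element the same 1/10 probability.
with_replacement = rng.choice(data, size=8, replace=True)

# Without replacement: each element can be drawn at most once,
# so the pool shrinks after every draw.
without_replacement = rng.choice(data, size=8, replace=False)

print(with_replacement)     # duplicates are possible
print(without_replacement)  # all values distinct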


k-fold cross validation

[Figure: cross validation can help avoid overfitting a model]

Cross validation is a resampling procedure used to assess a model on a limited dataset. The parameter k is simply the number of groups (folds) that a given data sample is split into.
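A minimal sketch of the procedure with scikit-learn's KFold, assuming k = 5 and a toy dataset of 10 samples:

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # toy dataset of 10 samples

# k = 5: the data is split into 5 folds; each fold serves once as
# the held-out set while the other 4 folds are used for training.
kf = KFold(n_splits=5, shuffle=True, random_state=1)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    print(f"Fold {fold}: train={train_idx}, test={test_idx}")

Averaging the model's score across the k held-out folds gives a less biased estimate of its skill than a single train/test split.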

Overfitting

Overfitting is an issue that may arise if the training or validation datasets are too limited. Some algorithms are more susceptible to overfitting than others, so it needs to be watched for.


Overfitting is where the model performs near-perfectly on the dataset it was trained on but is highly inaccurate when making predictions on other datasets.
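A small illustration of the symptom, using a deliberately noisy synthetic dataset and an unconstrained decision tree from scikit-learn (an assumed setup chosen for demonstration; exact scores will vary):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y=0.2 corrupts 20% of the labels, so a perfect training
# score can only come from memorising noise.
X, y = make_classification(n_samples=300, n_features=20,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorise the noisy training labels...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # typically ~1.0
print("test accuracy: ", tree.score(X_test, y_test))    # noticeably lower

The large gap between training and testing accuracy is the classic signature of overfitting.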


References