Cross-Validation (k-fold Cross-Validation, Leave-p-out Cross-Validation)


A collection of techniques used to assess how well a predictive model will generalize to unseen data by repeatedly partitioning the available data into training and test subsets.

  • k-fold Cross-Validation: A method where the data is divided into k equally sized subsets (folds), and the model is trained on k-1 folds while being tested on the remaining fold. This process is repeated k times so that each fold serves as the test set exactly once, and the k evaluation scores are typically averaged.
  • Leave-p-out Cross-Validation: A variation of cross-validation where p data points are held out for testing, and the model is trained on the remaining n-p points. This is repeated for every possible combination of p points, which requires C(n, p) training runs and becomes expensive quickly as the dataset grows. The special case p = 1 is known as leave-one-out cross-validation.
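The two splitting schemes above can be sketched in plain Python. This is a minimal illustration of how the train/test index sets are generated, not a production implementation (libraries such as scikit-learn provide `KFold` and `LeavePOut` with shuffling and stratification options); the function names here are chosen for the example.

```python
from itertools import combinations

def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Each of the k folds serves as the test set exactly once.
    """
    indices = list(range(n_samples))
    # Deal indices into k roughly equal folds (round-robin).
    folds = [indices[i::k] for i in range(k)]
    for i, test_idx in enumerate(folds):
        # Training set is every fold except the i-th.
        train_idx = [j for f_i, fold in enumerate(folds) if f_i != i for j in fold]
        yield train_idx, test_idx

def leave_p_out_splits(n_samples, p):
    """Yield (train_indices, test_indices) pairs for leave-p-out cross-validation.

    Produces one split per combination of p test points: C(n_samples, p) splits.
    """
    all_indices = set(range(n_samples))
    for test_idx in combinations(range(n_samples), p):
        train_idx = sorted(all_indices - set(test_idx))
        yield train_idx, list(test_idx)
```

For example, `k_fold_splits(10, 5)` yields 5 splits with test folds of size 2, while `leave_p_out_splits(5, 2)` yields C(5, 2) = 10 splits, making the combinatorial cost of leave-p-out concrete.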