## TLDR - ChannelMix AI uses K-Fold Cross Validation to measure the performance of your model with a variety of metrics, including R^2, SMAPE, and Mean Absolute Error.
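The source doesn't define these metrics mathematically, but for readers who want to see them concretely, here is a minimal sketch of each in plain NumPy (the sample `actual`/`predicted` values are hypothetical, purely for illustration):

```python
import numpy as np

def r_squared(actual, predicted):
    # R^2: fraction of the variance in the actuals explained by the predictions
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1 - ss_res / ss_tot

def smape(actual, predicted):
    # Symmetric Mean Absolute Percentage Error, expressed as a percentage
    return 100 * np.mean(
        np.abs(predicted - actual) / ((np.abs(actual) + np.abs(predicted)) / 2)
    )

def mae(actual, predicted):
    # Mean Absolute Error: average absolute difference between the two
    return np.mean(np.abs(actual - predicted))

actual = np.array([100.0, 200.0, 300.0])
predicted = np.array([110.0, 190.0, 310.0])
print(mae(actual, predicted))  # 10.0
```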

Model Validation is the process of determining how well a model trained on a certain set of data (your historical data) will perform when introduced to new data points (recommendation for the future). Just because a model performs perfectly when predicting data it's already seen doesn't guarantee it will know what to do with new data.

One particular concern is overfitting - where a model effectively memorizes its training data, but not the underlying patterns within that data. In this case, you'll get a model that's terrific at predicting what will happen in the training data and terrible at predicting anything else. Also known as a completely useless model. Instead, we want a model that is able to generalize well from the training data to new data points.

A common method to measure how well a model generalizes to new data is known as Hold-Out Validation. In this process, we split all our data into two groups: train and test. Next, we train a model using only the train dataset. This model will learn how to predict the target values (something like Leads) in the training data based on the input values (like cost) in the training data. Once the model is trained, we will ask it to produce predictions for the test dataset. This means we will provide the input values (cost) and ask the model to provide the appropriate target value (Leads). Now, we have the model's best guess for the target value based on brand new data points *and* the actual target value for those data points. From this, we can calculate all sorts of metrics to measure our model's performance on new data.
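The steps above can be sketched in a few lines. This is not ChannelMix AI's actual implementation, just an illustrative version using scikit-learn with made-up cost and Leads data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(42)

# Hypothetical data: daily ad cost (input) and Leads generated (target)
cost = rng.uniform(100, 1000, size=200).reshape(-1, 1)
leads = 0.05 * cost.ravel() + rng.normal(0, 5, size=200)

# Split all the data into two groups: train and test
X_train, X_test, y_train, y_test = train_test_split(
    cost, leads, test_size=0.2, random_state=0
)

# Train a model using only the train dataset
model = LinearRegression().fit(X_train, y_train)

# Ask the model for predictions on data it has never seen
predictions = model.predict(X_test)

# Compare predictions against the actual target values
print("R^2:", r2_score(y_test, predictions))
print("MAE:", mean_absolute_error(y_test, predictions))
```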

This procedure is widely accepted and it's awesome because it gives us an estimate of how well our model will generalize. But, it only gives us a single measurement, and that measurement depends on which points go into the training dataset and which go into the test dataset.

ChannelMix AI utilizes an enhanced procedure to produce a more reliable measurement of the model's generalizability. This process is known as K-Fold Cross Validation.

K-Fold Validation follows the same ideas as Hold-Out Validation, except we split our data into K datasets instead of just two. It's easier to see how it works with an example. Let's say we are going to perform K-Fold Validation with K=5 (AKA 5-Fold Validation).

We start by splitting our dataset into 5 groups - group A, B, C, D, and E. Next, we will train a model using data from groups A+B+C+D and then make predictions and calculate metrics using group E. So far, this is exactly what we did in Hold-Out Validation. The difference comes in the next step, where we now train a new model using data from groups A+B+C+E and calculate metrics using group D. We repeat this process until every group has gotten its chance to be singled out and used for validation.

Now, we have 5 measurements of how well our model will generalize instead of the single measurement from Hold-Out Validation. Additionally, this should make the measurement less dependent on exactly which data points go into which dataset, because they all get a turn eventually.
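The 5-Fold loop described above might look like this in code. Again, this is a hedged sketch using scikit-learn's `KFold` splitter and made-up data, not ChannelMix AI's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)

# Hypothetical cost/Leads data, as before
cost = rng.uniform(100, 1000, size=200).reshape(-1, 1)
leads = 0.05 * cost.ravel() + rng.normal(0, 5, size=200)

# Split the data into K=5 groups (folds)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

scores = []
for train_idx, test_idx in kf.split(cost):
    # Train a fresh model on the other four groups...
    model = LinearRegression().fit(cost[train_idx], leads[train_idx])
    # ...then predict and score on the one held-out group
    preds = model.predict(cost[test_idx])
    scores.append(mean_absolute_error(leads[test_idx], preds))

# Five measurements of generalization instead of one
print("Per-fold MAE:", [round(s, 2) for s in scores])
print("Mean MAE:", round(float(np.mean(scores)), 2))
```

Averaging the per-fold scores (and looking at their spread) gives a steadier estimate than any single train/test split.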

The K-Fold Validation procedure is used when training any ChannelMix AI model to provide a reliable estimate of how well the model will perform on brand new data.
