Marketing & Media Mix Model - Understanding the quality of your MMM

Just because you have a Marketing & Media Mix Model does not mean you've got a good Marketing & Media Mix Model. It is crucial to understand the quality of your model before blindly following its recommendations. ChannelMix provides multiple methods for you to understand the performance of your Marketing & Media Mix Model.

Using the Prediction Performance Section

The simplest validation can be performed by looking at the Prediction Performance section of the Validation Summary tab in your Marketing & Media Mix Model dashboard. This section compares the number of target events that occurred last month with the number of target events the model predicted using your actual spend. This is explained in much more detail in the article Marketing & Media Mix Model - Validation Procedure.
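If you prefer to check the headline numbers yourself, the comparison boils down to something like the following. This is a minimal sketch, assuming you have exported the daily data to a CSV with hypothetical columns date, actual_events, and predicted_events (these names are illustrative, not a fixed ChannelMix schema):

```python
import pandas as pd

# Hypothetical export of the Prediction Performance data: one row per day,
# with the actual and model-predicted target event counts.
df = pd.read_csv("prediction_performance.csv", parse_dates=["date"])

actual_total = df["actual_events"].sum()
predicted_total = df["predicted_events"].sum()

# Percentage difference between what the model predicted and what happened.
pct_diff = 100 * (predicted_total - actual_total) / actual_total
print(f"Actual: {actual_total:,.0f}  Predicted: {predicted_total:,.0f}  "
      f"Difference: {pct_diff:+.1f}%")
```

A small difference here is a good sign; a large one is your cue to drill down to the daily comparison described below.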

From here, you can get a general sense of how well your model performs at predicting target events based on spend. If you see a large difference, drill down to see the comparison at a daily level. There are a few easy things to watch out for in these plots:

1. Does my model predict a wide range of target events while I see the actual number of target events remain steady? 

Your model may have falsely identified a trend in the training data. Check your spend per channel in this time period and look for matching trends in the model outputs to identify which channel(s) are causing the difference. If you want to go further, check your spend and target events further in the past. Do you see the trend the model is replicating? This will help you understand whether the model picked up on a false trend or your data is now following a different pattern than before. (A sketch for this kind of residual digging follows this list.)

2. Do my model predictions remain steady while the actual number of target events varies? 

Your model may not have been able to learn all the patterns in the training data, or the new data may contain trends not present in the training data. If the data contains a new trend, it will gradually be integrated into the model over time as we collect more observations of it. If you can identify the same pattern in spend and target events in your historical data, your model may need more parameters to capture the trend. Talk with us about adding new features to your model.

3. Are the predictions and actual target events generally in agreement with the exception of large spikes/drops in actual events? 

Most models will struggle with large spikes in the target variable if they aren't accompanied by unique, large spikes in the predictors (spend, in the MMM case). If there is no large change in the features a model uses to make predictions, it has no reason to predict large changes in the target variable. Generally, these spikes will be associated with a known cause - like a massive one-day sales event leading to a large spike in sales. This is where you have to use your external knowledge to account for trends the model can't capture.
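If you export the daily actuals, predictions, and per-channel spend, all three of these patterns can be screened for programmatically. Below is a minimal sketch, assuming a CSV with hypothetical columns date, actual_events, predicted_events, and one spend_* column per channel (again, assumed names rather than a fixed ChannelMix schema):

```python
import pandas as pd

# Hypothetical daily export: actual/predicted target events plus spend
# per channel.
df = pd.read_csv("daily_performance.csv", parse_dates=["date"])
spend_cols = [c for c in df.columns if c.startswith("spend_")]

# Residual = what actually happened minus what the model predicted.
df["residual"] = df["actual_events"] - df["predicted_events"]

# Patterns 1 and 2: a channel whose spend moves with the residual is a
# good place to start looking for a false (or missing) trend.
print(df[spend_cols].corrwith(df["residual"]).sort_values())

# Pattern 3: flag days where the miss is unusually large (more than 3
# standard deviations) - often known one-off events like a flash sale.
spikes = df[df["residual"].abs() > 3 * df["residual"].std()]
print(spikes[["date", "actual_events", "predicted_events"]])
```

Channels with strong correlation to the residual are the first place to look for patterns 1 and 2; the flagged spike days usually line up with the known causes described in pattern 3.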

Using Model Validation Metrics

Another way to validate your Marketing & Media Mix Model is by using the metrics in the Model Validation Metrics section of the dashboard. Here ChannelMix provides some common validation metrics for your model as well as a simple linear regression model fit to your data. The linear regression model is included as a baseline for comparison because it is such a simple, well-understood model. If the linear regression metrics are better than your Marketing & Media Mix Model’s metrics, there may be an issue with your model.

These metrics condense model quality into single numbers you can check at a glance - most importantly, the R-squared metric. The higher the R-squared, the better the model; the best possible value is 1. We consider an R-squared above 0.8 to be an excellent model, above 0.6 to be an acceptable model, and below 0.6 to require further investigation (perhaps using one of the other features in this article).

In short, R-squared gives you a quick read on whether your model is ready for use or requires further investigation.
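If you'd like to reproduce these numbers yourself, R-squared is straightforward to compute from actuals and predictions. Here is a minimal sketch; the column names are assumptions carried over from the earlier examples, and the baseline shown is an ordinary least-squares fit from spend to target events, which may not exactly match the baseline ChannelMix fits internally:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

df = pd.read_csv("daily_performance.csv", parse_dates=["date"])
spend_cols = [c for c in df.columns if c.startswith("spend_")]

# R-squared of the MMM's own predictions against reality.
mmm_r2 = r2_score(df["actual_events"], df["predicted_events"])

# Baseline: a plain linear regression from spend to target events.
baseline = LinearRegression().fit(df[spend_cols], df["actual_events"])
baseline_r2 = r2_score(df["actual_events"], baseline.predict(df[spend_cols]))

print(f"MMM R-squared: {mmm_r2:.2f}  Baseline R-squared: {baseline_r2:.2f}")
if mmm_r2 < baseline_r2:
    print("Baseline beats the MMM - investigate the model.")
elif mmm_r2 >= 0.8:
    print("Excellent model.")
elif mmm_r2 >= 0.6:
    print("Acceptable model.")
else:
    print("Requires further investigation.")
```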

Using Your Gut

While data science models are much better than we are at extracting intricate patterns from data, we as marketers and analysts have knowledge and experience that can't be encoded in a model. The last way to validate your model is to use common sense and your past experience. These models can pick up on improvements you can make (like moving 2% of your Paid Social spend to Email), but they shouldn't be trusted to completely revolutionize your strategy (like putting 100% of your budget into Billboards). If your model is telling you things that make no sense, be skeptical and ask questions.
