Model Drift


Models are trained on a specific dataset. Over time, the data a model is asked to run inference on can diverge from the data it was trained on. This shift, known as data drift, makes the model's predictions less accurate, and the model's evaluation metrics begin to score lower.

When this happens, a new version of the model, trained on an updated dataset, should replace the previous one. By monitoring the distribution of the data feeding a model and detecting when it changes, you can learn sooner that predictions differ from normal and identify why the model is making wrong predictions.

To find the drift page, navigate to the "Operations" top header and click the "Drift" section. Here you will find all of the models that have been identified for review within your team. Click into each one to set its drift configuration and view details.

Drift home page

Understanding Drift

Modzy uses a standard chi-squared implementation: χ² = Σᵢ (Oᵢ − Eᵢ)² / Eᵢ, where Oᵢ and Eᵢ are the observed and expected counts for class i. For hypothesis testing, we use lookup tables below 50 degrees of freedom and a normal approximation for models with more than 50 degrees of freedom, where the degrees of freedom equal one less than the number of classes the model recognizes. If you are interested in the p-value, it is simply pValue = 1 − driftValue, where driftValue is the value displayed in the chart.
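As a sketch of the statistic (not Modzy's internal code), the χ² value and the small-degrees-of-freedom table lookup can be implemented as follows; the class counts and the 5% critical values shown are illustrative assumptions:

```python
from typing import Sequence

def chi_squared(observed: Sequence[float], expected: Sequence[float]) -> float:
    """Chi-squared statistic: sum over classes of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Excerpt of a standard 5%-significance critical-value table,
# indexed by degrees of freedom (number of classes minus one).
CHI2_CRITICAL_05 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488}

# Hypothetical class counts: baseline period (expected) vs. a recent window (observed).
expected = [50.0, 30.0, 20.0]
observed = [40.0, 35.0, 25.0]

stat = chi_squared(observed, expected)   # ~4.083
df = len(expected) - 1                   # 3 classes -> 2 degrees of freedom
drifted = stat > CHI2_CRITICAL_05[df]    # False: statistic is below the 5% cutoff
```

With more than 50 classes the table lookup would be replaced by the normal approximation mentioned above.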

Drift visualization

Configuring Parameters

To start tracking drift for your models, you will first need to set a Baseline Period and Thresholds for each model. The baseline should cover a period when the data was regular or typical: data against which all incoming data can be compared. Set this period with the calendar date picker or by typing a date directly into the beginning and ending boxes. The system will display all completed inferences during that period; for best results there should be more than 50, with at least 5 of each category being observed for drift.
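The sizing guidance above can be checked programmatically. The helper below is hypothetical (not part of the product) and validates a candidate baseline against the "more than 50 inferences, at least 5 of each category" rule:

```python
from collections import Counter
from typing import Iterable

def baseline_is_valid(labels: Iterable[str],
                      min_total: int = 50,
                      min_per_class: int = 5) -> bool:
    """Return True if the baseline contains more than min_total inferences
    and at least min_per_class occurrences of every observed category."""
    counts = Counter(labels)
    total = sum(counts.values())
    return total > min_total and all(c >= min_per_class for c in counts.values())

baseline_is_valid(["cat"] * 40 + ["dog"] * 20)   # True: 60 total, both classes >= 5
baseline_is_valid(["cat"] * 60 + ["dog"] * 3)    # False: only 3 "dog" inferences
```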

Date Range Selector

Thresholds determine at what point drift falls into the Nominal, Medium, or High range and triggers a warning. You will be notified of potential model drift on your home dashboard as well as on the main drift page. Configure these values by sliding the two handles on the bar or by typing values directly into the boxes.
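As a sketch of how the two thresholds partition drift values into the three ranges, assuming placeholder values of 0.90 and 0.95 in place of whatever you set on the slider:

```python
def drift_band(drift_value: float,
               medium: float = 0.90,
               high: float = 0.95) -> str:
    """Classify a drift value against the two configured thresholds.
    The 0.90/0.95 defaults are illustrative, not product defaults."""
    if drift_value >= high:
        return "High"
    if drift_value >= medium:
        return "Medium"
    return "Nominal"

drift_band(0.50)   # "Nominal"
drift_band(0.92)   # "Medium"
drift_band(0.97)   # "High"
```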

Threshold Selector
