Model Drift
Definition
Model drift is the gradual degradation of an AI model's performance over time as real-world conditions change in ways the model was not trained for. It can appear as declining accuracy in intent detection, weaker routing precision, more hallucinations, or responses that no longer reflect current policy or product reality. The model itself has not changed — the world around it has, and the gap between them grows over time.
Example
A contact center deploys a routing model trained on historical interaction data. Six months later, the company releases a new product line and updates several support workflows. The model was not retrained to reflect these changes. Slowly, escalation rates rise for the new product category, and routing accuracy drops as customers ask about things the model has no reliable context for. The team initially attributes this to increased volume, but observability data shows that a specific category of contacts is consistently misrouted. Retraining and refreshing knowledge sources corrects the behavior.
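The kind of observability check described above can be sketched as a small script that aggregates routing logs by contact category and flags categories whose misroute rate is unusually high. The field names (`category`, `misrouted`), the sample records, and the threshold are all illustrative assumptions, not a real schema:

```python
# Sketch: flag contact categories with unusually high misroute rates.
# Field names ("category", "misrouted"), the sample records, and the
# 20% threshold are hypothetical -- real observability logs will differ.
from collections import defaultdict

def misroute_rates(records):
    """Return the misroute rate per category from a list of log records."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        if r["misrouted"]:
            misses[r["category"]] += 1
    return {c: misses[c] / totals[c] for c in totals}

def flag_drifting(records, threshold=0.2):
    """Return the categories whose misroute rate exceeds the threshold."""
    return [c for c, rate in misroute_rates(records).items() if rate > threshold]

logs = [
    {"category": "billing", "misrouted": False},
    {"category": "billing", "misrouted": False},
    {"category": "new-product", "misrouted": True},
    {"category": "new-product", "misrouted": True},
    {"category": "new-product", "misrouted": False},
]
print(flag_drifting(logs))  # the new product category stands out
```

A per-category breakdown like this is what separates "overall volume went up" from "one specific category is consistently misrouted," which is the distinction the team in the example needed.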
Why It Matters
Model drift makes ongoing maintenance a requirement for any AI system in production. No model stays accurate indefinitely when the business, products, policies, and customer language keep changing. Teams that monitor for drift and respond to it proactively maintain the reliability needed to operate AI at scale. Teams that treat AI as a set-and-forget system typically see performance erode gradually and invisibly until the impact becomes operationally significant.
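One common way to monitor for drift is to compare the distribution of inputs the model sees in production against the distribution it was trained on, for example with the population stability index (PSI). The sketch below computes PSI over category shares; the category names, the example distributions, and the 0.2 threshold convention are illustrative assumptions:

```python
# Sketch of one common drift signal: the population stability index (PSI)
# between the category mix at training time and the mix in production.
# The category names and distributions below are illustrative.
import math

def psi(expected, actual, eps=1e-6):
    """PSI across the union of buckets; higher means more distribution shift."""
    total = 0.0
    for bucket in set(expected) | set(actual):
        e = max(expected.get(bucket, 0.0), eps)  # clamp to avoid log(0)
        a = max(actual.get(bucket, 0.0), eps)
        total += (a - e) * math.log(a / e)
    return total

# Share of contacts per category (each snapshot sums to ~1.0).
training_mix = {"billing": 0.5, "shipping": 0.3, "returns": 0.2}
live_mix     = {"billing": 0.3, "shipping": 0.2, "returns": 0.2, "new-product": 0.3}

print(f"PSI = {psi(training_mix, live_mix):.3f}")  # > 0.2 is often read as significant drift
```

The unseen `new-product` category dominates the score here, which mirrors the example above: a product the model was never trained on is exactly what a distribution-shift check should surface before routing accuracy visibly degrades.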