Monitor models in production

Building a great ML service is difficult. Managing it in production is doubly so.

If your team is in the trenches using ML in production, take this quiz to measure how you are doing and get ideas for improvements.

MetricRule works with your model serving stack to automatically create metrics for your services’ inputs and outputs, so you can track and get alerted on bad model deployments, feature drift, or unexpected data.
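
As a rough illustration of the idea (not MetricRule’s actual API), the sketch below shows the kind of input and output metrics an agent can record around a prediction call. It uses the Python prometheus_client library; the metric names and the predict_fn hook are assumptions made for this example.

```python
# Illustrative sketch only: hypothetical metric names and a hypothetical
# predict_fn hook, using the prometheus_client library.
from prometheus_client import Counter, Histogram, start_http_server

# Distribution of values seen for categorical input features in production.
FEATURE_VALUES = Counter(
    "input_feature_values_total",
    "Count of observed values for categorical input features",
    ["feature", "value"],
)

# Distribution of model output scores.
PREDICTION_SCORE = Histogram(
    "prediction_score",
    "Model output score distribution",
    buckets=[i / 10 for i in range(11)],
)

def monitored_predict(predict_fn, features: dict) -> float:
    """Wrap a prediction call and record input/output metrics."""
    for name, value in features.items():
        if isinstance(value, str):
            FEATURE_VALUES.labels(feature=name, value=value).inc()
    score = predict_fn(features)
    PREDICTION_SCORE.observe(score)
    return score

# At service startup, expose /metrics for Prometheus to scrape.
start_http_server(8001)
```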

Demo

Interact with a live demo of a dashboard powered by MetricRule metrics here.

This demo runs MetricRule against an example model, trained on the PetFinder dataset, that predicts whether a pet will be adopted, using simulated production traffic.

Features

  • Agents to create feature metrics on your deployed ML models. Get real-time data on what production features and predictions look like.

  • Pluggable into standard software observability tools (e.g., Prometheus, Grafana); a query sketch follows this list.

  • Open source. Hosted version coming soon.
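
Because the metrics land in Prometheus, they can be queried and alerted on like any other service metric. As one possible illustration, the sketch below uses Prometheus’s standard HTTP query API against a hypothetical prediction_score histogram; the metric name and server address are assumptions.

```python
# Illustrative sketch: query a hypothetical prediction_score histogram from
# Prometheus via its standard HTTP API. Metric name and address are assumed.
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed Prometheus address

# Median prediction score over the last hour.
query = "histogram_quantile(0.5, rate(prediction_score_bucket[1h]))"
resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])
```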


Want to know more? Have feedback or feature requests? Get in touch with us here.


Why granular monitoring of features?

  • Poor-quality model outputs are not diagnosed as system errors, but they still have user experience and revenue impact

  • Poor model performance can be restricted to specific slices of inputs and outputs that are not apparent in global views (see the sketch after this list)

  • Shifts in user behavior in response to external events or internal product changes can affect model outcomes
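
To make the second point concrete, the short sketch below compares a global accuracy figure with per-slice accuracy for a single categorical feature; the feature values and records are invented for illustration.

```python
# Illustrative sketch: a healthy-looking global accuracy can hide a badly
# mispredicted slice of the input space. Feature values and data are made up.
from collections import defaultdict

# (feature_value, predicted_label, true_label) triples.
records = [
    ("young", 1, 1), ("young", 1, 1), ("young", 0, 0), ("young", 1, 1),
    ("young", 1, 1), ("young", 0, 0), ("young", 1, 1), ("young", 1, 1),
    ("senior", 0, 1), ("senior", 0, 1), ("senior", 1, 1), ("senior", 0, 1),
]

def accuracy(rows):
    return sum(pred == true for _, pred, true in rows) / len(rows)

print(f"global accuracy: {accuracy(records):.2f}")  # 0.75: looks tolerable

by_slice = defaultdict(list)
for row in records:
    by_slice[row[0]].append(row)
for value, rows in sorted(by_slice.items()):
    # The 'senior' slice (0.25) is far worse than the global number suggests.
    print(f"accuracy for age={value}: {accuracy(rows):.2f}")
```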


Interested in email updates?

Sign up here to get very occasional updates from us. Don’t worry, we hate spam too.