
Model Approval System

The Model Approval system designates production-ready models with custom labels and automatic locking

Written by Gianmarco Franzo
Updated this week

The Model Approval system turns the Leaderboard from a comparison tool into a comprehensive model management interface. It is designed to support increasingly complex workflows, which can significantly increase the number of tagged models in a project.

Approving a model

The approval system enables actuaries to designate production-ready models with clear visual identification. To approve a model, go to Leaderboard and click the star icon.

Then attach a custom label and color for visual identification. The label appears adjacent to the model name throughout the platform.

The label also appears on the project page, alongside pinned models.
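As a purely illustrative sketch (hypothetical field names, not the platform's schema), an approval can be thought of as a small record pairing a model with its label and badge color:

```python
from dataclasses import dataclass

# Hypothetical approval record, for illustration only.
@dataclass
class Approval:
    model_id: str
    label: str   # custom text shown next to the model name, e.g. "Final"
    color: str   # color used for the label badge, e.g. "#2E7D32"

approval = Approval(model_id="freq-glm-12", label="Final", color="#2E7D32")
```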

Branch-Level Approval

Approval operates at the branch level, so a project can hold multiple approved models. This supports use cases such as maintaining a separate strategy per branch, for instance constrained versus unconstrained model variants, or champion-challenger configurations with distinct labels such as "Final" and "Challenger".

Models belonging to an approved model's branch are automatically locked to prevent accidental modifications. Once a model is approved, renaming, untagging, and enrichment operations are disabled on its branch, though branch creation remains available to support alternative workflows.
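As a rough illustration of this rule (hypothetical names, not the product's actual implementation), the lock can be expressed as a check on which operations remain allowed for a branch:

```python
from dataclasses import dataclass

# Hypothetical branch state, for illustration only.
@dataclass
class Branch:
    name: str
    approved: bool = False  # set when a model on this branch is approved

def allowed_operations(branch: Branch) -> set[str]:
    """Operations permitted on models of a given branch."""
    ops = {"rename", "untag", "enrich", "create_branch"}
    if branch.approved:
        # Approval locks the branch: editing operations are disabled,
        # but creating a new branch stays available for alternative workflows.
        ops -= {"rename", "untag", "enrich"}
    return ops

print(allowed_operations(Branch("constrained", approved=True)))  # {'create_branch'}
```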

Import Flow Enhancement

When importing models into the Risk and Rate Modules, approved models appear at the top of the selection list.

Visual highlighting through labels and colors aids in rapid model identification, while the model tree interface displays approval status to reduce the risk of importing incorrect models.

The Leaderboard table consistently displays approved models first, followed by pinned models.
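A minimal sketch of this ordering rule, assuming a simplified record shape rather than the Leaderboard's actual code:

```python
# Hypothetical model records, for illustration only.
models = [
    {"name": "GLM v3", "approved": False, "pinned": True},
    {"name": "GAM v1", "approved": True,  "pinned": False},
    {"name": "GLM v2", "approved": False, "pinned": False},
]

# Approved models first, then pinned models, then the rest; name as a tie-break.
ordered = sorted(models, key=lambda m: (not m["approved"], not m["pinned"], m["name"]))
print([m["name"] for m in ordered])  # ['GAM v1', 'GLM v3', 'GLM v2']
```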

Project Locking

Project locking is a governance feature that allows users to freeze a project's state and gain access to performance metrics on the holdout data. This preserves the integrity of the modeling process by preventing further modifications to model predictions.

From the project page, users can lock a project using the "Lock" button.

Any workspace member can lock a project, but only workspace admins or the designated project owner can unlock it.
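This permission rule can be summarized in a short sketch (hypothetical roles and field names, not the product's API):

```python
from dataclasses import dataclass

# Hypothetical user and project records, for illustration only.
@dataclass
class User:
    user_id: str
    is_workspace_admin: bool = False

@dataclass
class Project:
    owner_id: str
    locked: bool = False

def can_lock(user: User, project: Project) -> bool:
    # Any workspace member may lock a project that is not already locked.
    return not project.locked

def can_unlock(user: User, project: Project) -> bool:
    # Only workspace admins or the designated project owner may unlock.
    return project.locked and (user.is_workspace_admin or user.user_id == project.owner_id)
```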

When a project is locked:

  • All operations that would modify model predictions are disabled

  • Visualization operations (scaling, binning, etc.) remain available

  • The holdout dataset becomes accessible for performance evaluation across multiple interfaces:

    • Statistics table: An additional column shows metrics calculated on the holdout dataset

    • Lorenz curves: Holdout performance can be visualized and compared

    • Compare Models: A data selector allows comparison on holdout data

    • Leaderboard: Holdout metrics appear as an additional column for all models

Note: Only performance metrics are accessible on the holdout dataset. Individual observation-level graphs (observed vs. predicted at the row level) are not available to maintain the integrity of the validation approach.
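For readers unfamiliar with this kind of aggregate measure, the sketch below shows, in generic Python, how a Gini coefficient can be derived from a Lorenz curve on a holdout sample. It illustrates the concept only; it is not the platform's implementation, and the function and argument names are assumptions.

```python
import numpy as np

def lorenz_gini(y_observed, y_predicted, weights=None):
    """Aggregate holdout metric: Gini coefficient derived from the Lorenz curve.

    Rows are ranked by predicted value; the curve tracks the cumulative share
    of observed losses against the cumulative share of exposure. Only the
    summary statistic is returned, never row-level observed/predicted pairs.
    """
    y_obs = np.asarray(y_observed, dtype=float)
    y_pred = np.asarray(y_predicted, dtype=float)
    w = np.ones_like(y_obs) if weights is None else np.asarray(weights, dtype=float)

    order = np.argsort(y_pred)                              # rank by model prediction
    y_obs, w = y_obs[order], w[order]

    cum_exposure = np.cumsum(w) / w.sum()                   # x-axis of the Lorenz curve
    cum_losses = np.cumsum(y_obs * w) / (y_obs * w).sum()   # y-axis of the Lorenz curve

    # Gini = 1 - 2 * area under the Lorenz curve (trapezoidal rule from the origin)
    x = np.concatenate(([0.0], cum_exposure))
    y = np.concatenate(([0.0], cum_losses))
    area = np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x))
    return 1.0 - 2.0 * area

# Toy holdout sample: a higher Gini means the model ranks risks more sharply.
print(round(lorenz_gini([0, 1, 0, 3, 2], [0.1, 0.8, 0.2, 0.9, 0.7]), 3))  # ~0.467
```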

Once the project is locked, the project page displays:

  • Lock date

  • Project owner information
