To test multiple strategies within the same project, click “+ New Model Branch”.
This action starts a new model preparation. For example, you may have selected a certain set of variables in one branch, but now you want to test a different selection, or apply additional constraints to the same selection to see their impact.
💡 Tip: Use branching to compare regulatory-compliant models with unrestricted models. For example, create one branch excluding variables banned by regulation (production-ready), and another including all variables (internal analysis only). This helps you understand the full impact of regulatory constraints on model performance.
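To make that comparison concrete, here is a minimal sketch, outside the product, of how you might quantify the performance cost of a regulatory constraint. It uses scikit-learn as an analogy; the file name, target column, and banned variable list are all hypothetical placeholders, not part of the product.

```python
# Hypothetical sketch: fit the same model with and without banned variables
# to measure the impact of a regulatory constraint on performance.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("portfolio.csv")      # hypothetical dataset
banned = ["gender", "nationality"]     # hypothetical banned variables
y = df.pop("claim_cost")               # hypothetical target column

# "Internal analysis" branch: all variables included.
score_all = cross_val_score(LinearRegression(), df, y, cv=5).mean()

# "Production-ready" branch: banned variables excluded.
score_compliant = cross_val_score(
    LinearRegression(), df.drop(columns=banned), y, cv=5).mean()

print(f"Performance impact of the constraint: {score_all - score_compliant:.4f}")
```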
You can also refit a model with the same smoothness parameter and/or use all the variables from another model within the same project. These options appear in the window that opens when you click the “Generate Models” button.
This feature accelerates experimentation by letting you build on existing smoothness settings without re-running a grid search exploration.
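As an analogy for what this shortcut saves, the sketch below treats scikit-learn’s Lasso regularization strength alpha as a stand-in for the smoothness parameter: the expensive grid search runs once, and a second model is then refitted directly with the value it found. All names here are illustrative assumptions, not the product’s API.

```python
# Illustrative analogy: reuse a tuned "smoothness" value instead of
# re-running the grid search that found it.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

# First branch: find a good smoothness value via grid search (the slow step).
search = GridSearchCV(Lasso(max_iter=10_000),
                      param_grid={"alpha": np.logspace(-3, 1, 20)},
                      cv=5)
search.fit(X, y)
best_alpha = search.best_params_["alpha"]

# Second branch: reuse the smoothness value found above and refit directly,
# skipping the grid search exploration entirely.
refit_model = Lasso(alpha=best_alpha, max_iter=10_000).fit(X, y)
```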
⚠️ Differences in K-Fold values
Although the coefficients remain identical, the K-Fold validation metrics may differ slightly between a model created via grid search and the same model refitted using the Use All Variables option.
This difference arises from the variable selection process during fitting:
- In Grid Search, variable selection is re-run within each fold, so different subsets of variables may be chosen during cross-validation.
- In Use All Variables, the same set of variables is used across all folds, resulting in a consistent feature set.
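The sketch below reproduces this effect with scikit-learn, as an analogy rather than the product’s internals: when selection runs inside each fold, cross-validation scores differ slightly from a run where the feature set is fixed up front, even though a final fit on the full data would keep the same variables either way.

```python
# Illustrative analogy: why K-Fold metrics differ when variable selection
# happens inside each fold versus being fixed in advance.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=300, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)

# "Grid search" behavior: selection runs inside every fold, so each fold
# may keep a different subset of variables.
per_fold = make_pipeline(SelectKBest(f_regression, k=5), LinearRegression())
score_per_fold = cross_val_score(per_fold, X, y, cv=5).mean()

# "Use All Variables" behavior: the variable set is fixed once up front,
# so every fold trains on exactly the same features.
fixed_mask = SelectKBest(f_regression, k=5).fit(X, y).get_support()
score_fixed = cross_val_score(LinearRegression(), X[:, fixed_mask], y, cv=5).mean()

# A final fit on the full data selects the same variables in both cases,
# but the K-Fold scores can differ slightly because of the selection step.
print(f"per-fold selection: {score_per_fold:.4f}  fixed set: {score_fixed:.4f}")
```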
You can also run a grid search by selecting the desired parameters. Doing so does not delete or overwrite any models created in other branches.
Newly tagged models are still sent back to the grid search, but only to the grid search of the specific branch they belong to.
If you want to compare tagged models across different branches, go to the Leaderboard at the top of the Model Tree window.
The Leaderboard presents models in a table format, as having multiple branches can result in a large number of tagged models that are difficult to visualize effectively in a graph format.
You can also pin models to keep them at the top of the list.
For example, if you’re interested in specific models from multiple strategies, pinning them lets you easily track and compare them. Pinned models persist even after you leave the screen. This approach is especially useful when many models are tagged for process purposes but you’re only interested in a select few.