Since I’ve been writing on the topic of workflows and tooling, I thought I’d give one of my favourite packages, guild.ai, a well-deserved shout-out.
Guild.ai is designed for managing machine learning experiments. It provides ways to track experiment runs, make runs reproducible, and compare runs with different hyperparameters, all in ways that are non-intrusive. Those who know me know I care a lot about my tools, and this is one of those tools that really sparks joy. It’s no surprise that the author of this package, Garrett Smith, comes from a systems engineering background. Garrett is also highly responsive on Slack, actively listening to users and providing support.
While there is no shortage of machine learning tooling, guild.ai takes a minimal and non-intrusive approach to instrumenting training scripts. Take Sacred, for example. It requires instrumenting the Python training script with decorators:
```python
from sacred import Experiment
from sklearn import datasets

ex = Experiment('iris_rbf_svm')

@ex.config
def cfg():
    C = 1.0
    gamma = 0.7

@ex.automain
def run(C, gamma):
    iris = datasets.load_iris()
    # ...
```
Scalar logging in Sacred also goes through a custom module, and visualizing these scalars requires custom frontends like Omniboard.
Guild.ai, by contrast, cleanly uses default Python functionality, allowing one to get started without any modification to training scripts. This also makes guild.ai a nice, non-committal choice. I use command-line argparse flags for configuration, which Guild detects automatically. Guild.ai also has first-class support for TensorBoard, a popular and powerful visualization tool.
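As a sketch, a training script like the hypothetical one below needs no Guild-specific code: its ordinary argparse flags are what Guild discovers as tunable hyperparameters (the script, flag names, and toy loss here are my own illustration, not from Guild’s docs):

```python
import argparse

def build_parser():
    # Plain argparse flags; no Guild-specific instrumentation needed.
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.01)
    parser.add_argument("--epochs", type=int, default=10)
    return parser

def train(lr, epochs):
    for epoch in range(epochs):
        loss = lr / (epoch + 1)  # stand-in for a real training loss
        # Guild can capture "key: value" lines printed to stdout as scalars.
        print(f"loss: {loss:.4f}")
    return loss

if __name__ == "__main__":
    args = build_parser().parse_args()
    train(args.lr, args.epochs)
```

Running it as `guild run train.py lr=0.1` then works like any other shell invocation, with Guild recording the flags and outputs.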
## Experiment Tracking with Guild.ai
Each run of the training script is tracked by launching it through the intuitive guild CLI, which records the run process and its outputs.
Guild even ships with a beautiful web interface, accessible via guild view, that lets you browse run files (model checkpoints, logs, etc.). I use guild tensorboard -C to load TensorBoard for all my completed runs:
```shell
jethro@server1-CLeAR:~/projects/snnrl$ guild tensorboard -C
Preparing runs for TensorBoard
TensorBoard 2.1.0 at http://server1-CLeAR:52675 (Press CTRL+C to quit)
```
It even supports the TensorBoard HParams view, which is designed for comparing runs with differing hyperparameters.
Guild.ai supports a SQL-like filtering interface to allow the user to choose which runs to load and compare.
Guild.ai supports multiple methods for hyperparameter search:
- Grid search
- Random search
- Bayesian Optimization
This can be specified easily via the CLI:
```shell
guild run train.py x=[-0.5,-0.4,-0.3,-0.2,-0.1]
```
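Conceptually, a grid search like the command above expands each flag’s value list into the Cartesian product of all combinations, running one trial per combination. A rough sketch of that idea (my own illustration, not Guild’s actual implementation):

```python
from itertools import product

def grid_trials(flag_values):
    """Expand {flag: [values]} into one trial dict per combination."""
    names = sorted(flag_values)
    return [dict(zip(names, combo))
            for combo in product(*(flag_values[n] for n in names))]

# One flag with five values yields five trials, as in the command above.
trials = grid_trials({"x": [-0.5, -0.4, -0.3, -0.2, -0.1]})
print(len(trials))  # 5

# Two flags with five and two values yield ten trials.
print(len(grid_trials({"x": [-0.5, -0.4, -0.3, -0.2, -0.1],
                       "lr": [0.01, 0.1]})))  # 10
```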
Hyperparameter search and reproducibility are especially important in reinforcement learning, and guild.ai has made them extremely simple.
Guild.ai also supports packaging, allowing for model collaboration, sharing, and reuse. Since I’m using guild.ai within a research project, I haven’t needed these features myself.
I’m pleased with guild.ai as is, and the author is actively working on improvements to the tool. I highly recommend that machine learning practitioners give it a whirl.