Here at Modzy, we built an end-to-end integration that uses MLflow (a popular tool for ML training, tracking, and logging) to train an ML model and then automates its deployment to the Modzy platform, creating an automated model deployment pipeline.
Data scientists around the world leverage MLflow to streamline the ML model development process. For example, a data scientist might train an image classification model and log its accuracy on a validation dataset while testing different combinations of hyperparameters. At the conclusion of the experiment, the data scientist chooses the set of hyperparameters that achieves the highest accuracy, and then faces the challenge of moving this model out of the lab and into production.
Once an ML model is trained, tested, and fine-tuned, Modzy provides a template to package it in a Modzy-compatible Docker container. This process allows data scientists to save their model’s Docker image to a registry and upload the model to the Modzy platform. The integration we built can:
- Train a model, optimize its hyperparameters, select the best-performing model, and save its model artifacts—all with full access to the native MLflow platform features (UI, logging, etc.)
- Send these model artifacts to a model converter that plugs them into a ready-made repository (repo) and builds a Docker image that conforms to the Modzy API specification, preparing the newly minted model to perform inference
- Deploy the model image, along with static metadata for the model, to Modzy’s platform in order to make the model available for use via Modzy’s marketplace
MLflow allows users to run as many training experiments as they wish, all while tracking the performance of different sets of hyperparameters. The Model Trainer portion of this integration gives the model developer the opportunity to test several hyperparameter sets during the training phase and find an optimal, best-performing model.
The output of a model training experiment is called a model artifact. Model artifacts vary depending on the model, use case, or framework leveraged in training. Most commonly, model artifacts consist of model weights, labels, or any other configuration files required for the model to perform inference. In this integration, the model converter feeds the model artifacts (raw output of the Model Trainer) into a pre-built GitHub repo structured in a Modzy-compatible manner. Using this repo’s Dockerfile, the model converter then builds the Docker container image—which contains the model’s code and artifacts—and pushes this container image to a Docker registry.
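The general shape of such a Dockerfile is sketched below. This is not the actual Modzy template; the base image, directory layout, and entrypoint module are all assumptions made for illustration:

```dockerfile
# Illustrative sketch only -- not the real Modzy template Dockerfile.
FROM python:3.9-slim

WORKDIR /opt/app

# Install the model's runtime dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model code and the artifacts produced by the Model Trainer
COPY model/ model/
COPY artifacts/ artifacts/

# Hypothetical entrypoint that serves inference requests conforming
# to the Modzy API specification
CMD ["python", "-m", "model.server"]
```

Because the artifacts are copied in at build time, the resulting image is self-contained: pushing it to a registry is all that is needed for the next step.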
Once the model converter has produced a valid Docker container image, the model importer takes two inputs before uploading the model to Modzy’s platform:
- The container image
- A tar archive that contains the model metadata
The model metadata archive contains all marketplace information for the model, including technical model details, performance metrics, training information, images, and real-life usage examples.
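Building such an archive can be done with Python's standard library, as in the sketch below. The `model.yaml` file name and its fields are hypothetical; the real Modzy metadata schema may use different names and structure:

```python
import tarfile
import tempfile
from pathlib import Path

# Hypothetical metadata file; the actual Modzy schema may differ.
METADATA_YAML = """\
name: image-classifier
version: 1.0.0
description: Classifies images into 10 categories.
metrics:
  - label: Validation accuracy
    value: 0.91
"""

workdir = Path(tempfile.mkdtemp())
(workdir / "model.yaml").write_text(METADATA_YAML)

# Bundle the metadata into a gzipped tar archive for the model importer
archive_path = workdir / "metadata.tar.gz"
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add(workdir / "model.yaml", arcname="model.yaml")

with tarfile.open(archive_path) as tar:
    members = tar.getnames()
print(members)  # -> ['model.yaml']
```

Keeping the metadata in a separate archive, rather than baking it into the container image, lets marketplace details be updated without rebuilding the model image.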