How to Deploy a Model with Chassis

Follow this template to automatically deploy your machine learning model

This page is a generic guide to automating the containerization and deployment of your machine learning model with Chassis.ml. Follow it if there is no framework-specific guide for the framework you are using. Generic pseudocode is used throughout; replace these chunks with your own model code where appropriate.

📘

What you will need

  • Valid Modzy Credentials (instance URL and API Key, e.g., https://app.modzy.com and q4jp1pOZyFTddkFsOYwI.flHw34veJgfKu2MNzAa7)
  • Dockerhub account
  • Connection to running Chassis.ml service (either from a local deployment or via publicly-hosted service)
  • Trained model
  • Python environment

Set Up Environment

Create a Python virtual environment and install the Python packages required to load and run your model. At a minimum, pip install the following packages:

pip install chassisml modzy-sdk

Depending on the framework or model development process you are working with, ensure your virtual environment includes all required dependencies.
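For example, a minimal environment setup might look like the following (a sketch assuming `python3` on your PATH and a bash-style shell; the environment name `chassis-env` is arbitrary, and you should add your own framework's packages):

```shell
# Create and activate an isolated virtual environment
python3 -m venv chassis-env
. chassis-env/bin/activate

# Install the minimum required packages (plus your model framework's dependencies)
pip install chassisml modzy-sdk
```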

Load Model into Memory

To use the Chassis service, you must first load your model into memory. If your trained model is saved locally (.pth, .pkl, .h5, .joblib, or another file format), you can load it directly from the weights file; alternatively, train the model in your session and use the resulting model object.

model = framework.load("path/to/model.file")
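As a concrete, runnable sketch, here is the pattern using `pickle` (the `TinyModel` class below is a hypothetical stand-in for your real trained model; in practice you would call your framework's own loader, e.g. `torch.load` or `joblib.load`):

```python
import pickle

class TinyModel:
    """Stand-in for a real trained model object (e.g., scikit-learn, PyTorch)."""
    def predict(self, data):
        # Trivial "inference": one prediction per input row
        return [0] * len(data)

# Save the trained model to disk (your training pipeline would normally do this)
with open("model.pkl", "wb") as f:
    pickle.dump(TinyModel(), f)

# Load the model back into memory from the weights file
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
```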

Define process Function

You can think of this as your "inference" function: it takes input data as raw bytes, processes the inputs, makes predictions, and returns the results. This function is the sole required parameter for creating a ChassisModel object.

def process(input_bytes):
  # preprocess
  data = preprocess(input_bytes)

  # run inference
  predictions = model.predict(data)

  # post process predictions
  formatted_results = postprocess(predictions)

  return formatted_results

Create ChassisModel Object and Publish Model

First, connect to a running instance of the Chassis service, either by deploying it on your own machine or by connecting to the publicly hosted version. Then use the process function you defined to create a ChassisModel object and publish your model.

import chassisml

chassis_client = chassisml.ChassisClient("http://localhost:5000")
chassis_model = chassis_client.create_model(process_fn=process)

dockerhub_user = "<my.username>"
dockerhub_pass = "<my.password>"
modzy_url = "<modzy.base.url>"  # E.g., https://app.modzy.com/api
modzy_api_key = "<my.modzy.api.key>"  # E.g., 8eba4z0AHqguxyf1gU6S.4AmeDQYIQZ724AQAGLJ8
sample_filepath = "<path/to/sample/input/file>"  # Sample input used to validate the container

response = chassis_model.publish(
   model_name="Sample ML Model",
   model_version="0.0.1",
   registry_user=dockerhub_user,
   registry_pass=dockerhub_pass,
   modzy_url=modzy_url,
   modzy_api_key=modzy_api_key,
   modzy_sample_input_path=sample_filepath
)

job_id = response.get('job_id')
final_status = chassis_client.block_until_complete(job_id)

Executing this code not only builds and pushes a new container image to your Dockerhub repository, but also programmatically deploys the model to Modzy. Note, however, that this path gives you less flexibility to configure custom hardware options, and you will need to return to your model afterward to add documentation.

Once your model is deployed and you navigate to the new model page, click "Edit Model" to add documentation and tags to your newly deployed model. This will ensure other team members within your organization can discover this model and decide whether or not it fits their use case.

Figure 1. Edit Model

Edit the "Add documentation" section to add the following documentation to your model:

  • Description: A few-sentence summary of your model, including the task it accomplishes and brief information about expected inputs and outputs.
  • Performance Overview: A few-sentence overview of how your model was evaluated and any relevant performance metrics captured during training and validation.
  • Performance Metrics: Metrics you computed during training, displayed on your model home page.
  • Transparency and bias reporting: Technical details about your model, such as its design, architecture, training data, and development approach.

Figure 2. Add documentation

When you have finished, move on to the "Assign tags and categories" section. Adding tags will make your model more accessible and discoverable within your organization's Modzy model library.

Figure 3. Assign tags

Congratulations! In just minutes, you automatically built a Docker container with only a few lines of code. To deploy your new model container to Modzy, follow the Import Container guide.
