General Chassis Template
Follow this template if your model framework of choice is not yet listed
This page serves as a generic guide for automating the containerization of your machine learning model using Chassis.ml. Follow this guide if the framework you are working with does not yet have a dedicated guide. You will notice that generic pseudocode is used throughout; replace these chunks with your own model code where appropriate.
What you will need
- Docker Hub account
- Connection to running Chassis.ml service (either from a local deployment or via publicly-hosted service)
- Trained model
- Python environment
Set Up Environment
Create a Python virtual environment and install the Python packages required to load and run your model. At a minimum, pip install the following packages:
pip install chassisml modzy-sdk
Depending on the framework or model development process you are working with, ensure your virtual environment includes all required dependencies.
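For example, on a Unix-like system the setup might look like the following sketch. Here, scikit-learn stands in for whatever framework your model uses; it is an assumption for illustration, not a requirement:

python -m venv chassis-env
source chassis-env/bin/activate
pip install chassisml modzy-sdk scikit-learn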
Load Model into Memory
If you plan to use the Chassis service, you must first load your model into memory. If your trained model file is saved locally (.pth, .pkl, .h5, .joblib, or another file format), you can load your model from the weights file directly, or alternatively train the model and use the resulting model object.
# Generic placeholder: replace with your framework's model-loading call
model = framework.load("path/to/model.file")
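As a concrete illustration, if your model were a scikit-learn estimator serialized with joblib, loading it might look like the sketch below. The file name model.joblib is a placeholder:

import joblib

# Load the trained model object from a serialized weights file.
# Swap in the loading call your own framework provides.
model = joblib.load("path/to/model.joblib")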
Define process Function
You can think of the process function as your "inference" function: it takes input data as raw bytes, processes the inputs, makes predictions, and returns the results. This method is the sole parameter required to create a ChassisModel object.
def process(input_bytes):
    # Preprocess the raw input bytes (placeholder: replace with your own logic)
    data = preprocess(input_bytes)
    # Run inference
    predictions = model.predict(data)
    # Postprocess predictions into the desired output format (placeholder)
    formatted_results = postprocess(predictions)
    return formatted_results
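To make this concrete, here is a minimal sketch of what process might look like for a scikit-learn classifier that accepts JSON input. The "features" key and the output structure are assumptions for illustration; adapt them to your model's input and output formats:

import json
import numpy as np

def process(input_bytes):
    # Preprocess: decode the raw bytes into a 2-D feature array.
    # Assumes JSON input shaped like {"features": [[0.1, 0.2, 0.3]]}.
    payload = json.loads(input_bytes.decode("utf-8"))
    data = np.array(payload["features"])

    # Run inference using the model loaded into memory earlier.
    predictions = model.predict(data)

    # Postprocess: convert predictions into a JSON-serializable result.
    formatted_results = {"predictions": predictions.tolist()}
    return formatted_results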
Create ChassisModel Object and Publish Model
First, connect to a running instance of the Chassis service, either by deploying it on your machine or by connecting to the publicly-hosted version of the service. Then use the process function you defined to create a ChassisModel object and publish your model.
import chassisml

# Connect to a running Chassis service instance
chassis_client = chassisml.ChassisClient("http://localhost:5000")
chassis_model = chassis_client.create_model(process_fn=process)

# Docker Hub credentials used to push the built container
dockerhub_user = "<my.username>"
dockerhub_pass = "<my.password>"

response = chassis_model.publish(
    model_name="Sample ML Model",
    model_version="0.0.1",
    registry_user=dockerhub_user,
    registry_pass=dockerhub_pass
)

job_id = response.get('job_id')
final_status = chassis_client.block_until_complete(job_id)
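Before publishing, you can optionally sanity-check your process function against a sample input file. The sketch below assumes the chassisml SDK's test method and a hypothetical sample file path; confirm the method is available in your SDK version:

# Optional sanity check: run the process function locally on sample data.
# ChassisModel.test is assumed here; verify it exists in your chassisml version.
sample_path = "path/to/sample_input.json"
results = chassis_model.test(sample_path)
print(results)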
When this job completes, you can print the final_status object to verify the result; your model has now been successfully packaged. In your Docker Hub account, you should see your new container listed in the "Repositories" tab.
Figure 1. Example Chassis-built Container
Congratulations! In just minutes, you automatically created a Docker container with a few lines of code. To deploy your new model container to Modzy, follow one of the following guides: