Requires proficiency in Docker and gRPC
Only follow this model packaging guide if your model cannot be containerized by Chassis; it assumes proficiency in Docker and gRPC.
In this guide, we will prepare a machine learning model for deployment to Modzy using our R Model template. This template will allow you to create an OMI-compliant container that can be imported directly into Modzy.
This containerization process includes three steps:
- Construct Model Container
- Construct Metadata Configuration File
- Test and Validate Model Container
Edit the skeleton template scripts and Dockerfile to build a Modzy-compatible Docker container image that contains all model code and dependencies.
Migrate your existing R model library
Navigate to the `model_lib/src/model.R` file within the repository, which contains the Modzy Model Wrapper Class, `r_model_class`. This wrapper class is an R S4 implementation of the `ExampleModel` Python class in the gRPC Python Model Template. Fill out the `handle_single_input()` method by following the instructions provided in the comments for the module.
- Complete the `handle_discrete_input_batch()` method to enable custom batch processing for your model.
- Rename the `r_model_class` class to give your model a custom name.
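Since the R wrapper mirrors the gRPC Python template's model class, it can help to see the general shape of that pattern: load resources once at construction, then map one input payload to one output payload. The following is a minimal, hypothetical sketch of that pattern — the class name, file keys, and logic are illustrative stand-ins, not the template's exact API:

```python
# Hypothetical sketch of the single-input pattern the R wrapper mirrors.
# Names ("SketchModel", "input.json", "results.json") are illustrative only.
import json

class SketchModel:
    def __init__(self):
        # Load weights/resources once at startup; a trivial stand-in here.
        self.threshold = 0.5

    def handle_single_input(self, model_input: dict) -> dict:
        # model_input maps filenames to raw bytes, e.g. {"input.json": b"..."}
        payload = json.loads(model_input["input.json"])
        score = float(payload["value"])
        result = {"label": "positive" if score >= self.threshold else "negative"}
        # Return a mapping of output filenames to raw bytes.
        return {"results.json": json.dumps(result).encode("utf-8")}

model = SketchModel()
out = model.handle_single_input({"input.json": b'{"value": 0.9}'})
print(out["results.json"].decode())
```

Your `handle_single_input()` in `model_lib/src/model.R` plays the same role: deserialize raw input bytes, run inference, and serialize results back to bytes.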
This template leverages much of the Python code in the gRPC Python template, along with the rpy2 utility package, which serves as a bridge between the Python and R languages. As a result, the Dockerfile for this template uses an rpy2 base image, which includes the necessary requirements to run R code. When editing this file, only add additional R packages below this line as required by your model code:
RUN R -e "install.packages('<your R package>',dependencies=TRUE, repos='http://cran.rstudio.com/')"
The rest of the Dockerfile should remain untouched, as it installs the Python packages needed for the gRPC component of this template to work.
Fill in the YAML configuration file from the template; it contains important metadata the API uses to run the model on the Modzy Platform.
Provide model metadata
Create a new version of your model using semantic versioning (`x.x.x`), and create a new directory for this version under `asset_bundle`. Fill out a `docker_metadata.yaml` file under `asset_bundle/x.x.x/` according to the proper specification, then update the `__VERSION__ = x.x.x` variable located in `grpc_model/__init__.py` prior to performing the release for your new version of the model. Finally, update the following line in the Dockerfile:

COPY asset_bundle/x.x.x ./asset_bundle/x.x.x/
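Because the version string must agree across `grpc_model/__init__.py`, the `asset_bundle/` directory name, and the Dockerfile `COPY` line, a quick consistency check before release can catch mismatches. The helper below is a hypothetical sketch, not part of the template; it assumes the repo layout described above:

```python
# Hypothetical pre-release check: confirm __VERSION__ in
# grpc_model/__init__.py matches a docker_metadata.yaml under asset_bundle/.
import re
from pathlib import Path

def check_release_version(repo_root: str) -> str:
    init_file = Path(repo_root) / "grpc_model" / "__init__.py"
    match = re.search(r'__VERSION__\s*=\s*["\']?(\d+\.\d+\.\d+)',
                      init_file.read_text())
    if match is None:
        raise ValueError("__VERSION__ not found in grpc_model/__init__.py")
    version = match.group(1)
    metadata = Path(repo_root) / "asset_bundle" / version / "docker_metadata.yaml"
    if not metadata.exists():
        raise FileNotFoundError(f"Missing metadata file: {metadata}")
    return version
```

Running it against your repository root returns the version only when both locations agree, so a typo in either place fails fast instead of surfacing later in the container build.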
Add unit tests and run validation tests on the model container, mirroring the way the Modzy Platform will spin up the model container and run inference.
First, create a virtual environment (venv, conda, or other preferred virtual environment), activate it, and install any pip-installable packages defined in your `requirements.txt` file.

Using venv on macOS/Linux:

python -m venv ./grpc-model
source grpc-model/bin/activate
pip install -r requirements.txt

Using venv on Windows:

python -m venv .\grpc-model
.\grpc-model\Scripts\activate
pip install -r requirements.txt
Then, test your gRPC server and client connection in two separate terminals. In your first terminal, kick off the gRPC server.
python -m grpc_model.src.model_server
You will see your model instantiate and begin running on port 45000; this exercises the `Status()` remote procedure call. Next, after properly configuring the `grpc_model.src.model_client.py` file, run the gRPC client in a separate terminal.
python -m grpc_model.src.model_client
This will run the custom gRPC client and execute the `Run()` remote procedure call with the data you defined in the client script. After a successful inference run, you can move on to testing your model inside a newly built Docker container.
Build your container, and spin up your model inside the container:

docker build -t <container-image-name>:<tag> .
docker run --rm -it -p 45000:45000 <container-image-name>:<tag>
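The container can take a few seconds to start listening, so a client launched immediately may fail with a connection error. A small helper sketch (not part of the template) that waits for port 45000 to accept TCP connections before you run the client:

```python
# Hypothetical helper: block until a TCP port accepts connections,
# e.g. port 45000 published by the docker run command above.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True once (host, port) accepts a TCP connection, else False."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # server not up yet; retry until deadline
    return False
```

For example, `wait_for_port("localhost", 45000)` before invoking the client avoids spurious connection-refused failures while the container boots.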
Then, in a separate terminal, test the containerized server from a local gRPC model client:
python -m grpc_model.src.model_client
After a successful local client test, you can be confident your model container runs as expected.
Congratulations! You have now successfully containerized your R model. To deploy your new model container to Modzy, follow the Import Container guide.