
1. Package Model


Self-Service Tutorial Contents

  1. Package Model (this tutorial)
  2. Deploy Model
  3. Scale Model Up
  4. Run Model Inference
  5. Set Drift Baseline
  6. Deploy Model to Edge Device

Prepare your environment

In this first tutorial of our end-to-end Modzy tutorial series, we will begin by containerizing a pre-trained model. To do so, we will leverage a convenient open-source tool called Chassis.


What you'll need for this tutorial

  • A Python environment (Python >= 3.6)
  • Docker Desktop installed and running
  • We recommend following this tutorial in a Jupyter notebook, but any IDE will work

After your Python environment is set up, create a virtual environment (venv, conda, or your virtual environment of choice) and install Jupyter Notebook inside it.
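For example, a minimal setup using the built-in venv module might look like this (the environment name chassis-env is just an illustration):

python -m venv chassis-env
source chassis-env/bin/activate   # on Windows: chassis-env\Scripts\activate
pip install notebook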

First, use pip to install the latest version of chassis.ml:

pip install "chassisml[quickstart]"

Next, use pip to install the following packages used by this model:

pip install torch transformers numpy

With your environment set up, launch a Jupyter Notebook kernel from your terminal:

jupyter notebook

The remainder of this tutorial will be executed within this notebook.

Download model from Hugging Face

In this tutorial, we will take advantage of the Hugging Face model library and package a TinyBERT text classification model.

To start, download the model and save it to your machine by adding this code snippet to your notebook:

# import packages
import time
import json
import torch
import numpy as np
from transformers import BertTokenizer, BertForSequenceClassification
from chassisml import ChassisModel
from chassis.builder import DockerBuilder

# download TinyBERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained("gokuls/BERT-tiny-emotion-intent")
model = BertForSequenceClassification.from_pretrained("gokuls/BERT-tiny-emotion-intent")

# save model locally so we can use/access it with Chassisml package
tokenizer.save_pretrained("./tiny-bert-model")
model.save_pretrained("./tiny-bert-model")

# create sample text input and save it as a text file
with open("input.txt", "w") as text_file:
    text_file.write("This is my first time using Modzy!")

Prepare model for Chassis

Now that our pre-trained model is downloaded from Hugging Face, we will prepare it for packaging with Chassis.

Copy the code snippet below into your notebook to load the model into memory, define labels, and create an inference function we will call predict. Note that this function takes a dictionary mapping input file names to raw bytes and returns a dictionary mapping output file names to bytes.

# create labels to use in process function
labels = model.config.id2label
mapped_labels = {
    "LABEL_0": "sadness",
    "LABEL_1": "joy",
    "LABEL_2": "love",
    "LABEL_3": "anger",
    "LABEL_4": "fear",
    "LABEL_5": "surprise",
}

# load model to memory
tinybert_tokenizer = BertTokenizer.from_pretrained("./tiny-bert-model")
tinybert_model = BertForSequenceClassification.from_pretrained("./tiny-bert-model")

# define predict function that will serve as our inference function
def predict(input_bytes):
    # decode and preprocess data bytes
    text = input_bytes["input.txt"].decode()
    inputs = tinybert_tokenizer(text, return_tensors="pt")
    
    # run preprocessed data through model
    with torch.no_grad():
        logits = tinybert_model(**inputs).logits
        softmax = torch.nn.functional.softmax(logits, dim=1).detach().cpu().numpy()
        
    # postprocess 
    indices = np.argsort(softmax)[0][::-1]
    results = {
        "data": {
            "result": {
                "classPredictions": [{"class": mapped_labels[labels[i]], "score": softmax[0][i].item()} for i in indices]
            }
        }
    }
    
    return {'results.json':json.dumps(results).encode()}
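Because Chassis models exchange raw bytes (a dictionary of input file names to bytes in, a dictionary of output file names to bytes out), you can sanity-check predict directly with the sample file saved earlier, before wrapping it in a ChassisModel:

# quick local check of the bytes-in/bytes-out contract
raw_output = predict({"input.txt": open("input.txt", "rb").read()})
print(raw_output["results.json"])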

Create and test the Chassis model

Next, create a ChassisModel object, add required dependencies, and define all needed metadata. Then test your model.

# create chassis model object, add required dependencies, and define metadata
chassis_model = ChassisModel(process_fn=predict)
chassis_model.add_requirements(["torch", "transformers", "numpy"])
chassis_model.metadata.model_name = "Hugging Face TinyBERT (AMD)"
chassis_model.metadata.model_version = "1.0.0"
chassis_model.metadata.add_input(
    key="input.txt",
    accepted_media_types=["text/plain"],
    max_size="10M",
    description="Text for model to classify into one of 6 emotions."
)
chassis_model.metadata.add_output(
    key="results.json",
    media_type="application/json",
    max_size="1M",
    description="JSON containing emotion class and corresponding confidence score"
)

# test model
results = chassis_model.test({"input.txt": open("input.txt", "rb").read()})
print(results)

If successful, you should see an output that looks like this in your notebook:

b'{"data":{"result":{"classPredictions":[{"class":"joy","score":0.9988540410995483},{"class":"sadness","score":0.0006223577074706554},{"class":"love","score":0.00022895698202773929},{"class":"surprise","score":0.0001073237945092842},{"class":"anger","score":0.0001029252671287395},{"class":"fear","score":8.438550867140293e-05}]}}}'
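The test output is the raw JSON bytes produced by predict. As an optional step, you can parse the bytes back into a Python dictionary to inspect the top prediction (assuming results holds the bytes shown above):

# parse the raw bytes and pull out the top prediction
parsed = json.loads(results)
top = parsed["data"]["result"]["classPredictions"][0]
print(f"Top prediction: {top['class']} ({top['score']:.4f})")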

Build model container

With a successful test, use the DockerBuilder imported earlier to package your model into a container image and time the build:

# build container
builder = DockerBuilder(chassis_model)
start_time = time.time()
res = builder.build_image(name="hugging-face-tinybert", tag="1.0.0", show_logs=True)
end_time = time.time()
print(res)
print(f"Container image built in {round((end_time-start_time)/60, 5)} minutes")

If the build completes successfully, you should see output similar to this:

Generating Dockerfile...Done!
Copying libraries...Done!
Writing metadata...Done!
Compiling pip requirements...Done!
Copying files...Done!
Starting Docker build...Done!
Image ID: sha256:d222014ffe7bacd27382fb00cb8686321e738d7c80d65f0290f4c303459d3d65
Image Tags: ['hugging-face-tinybert:latest']
Cleaning local context
Completed:       True
Success:         True
Image Tag:       hugging-face-tinybert:latest
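You can also confirm the image landed in your local Docker registry by running this from a terminal (outside the notebook):

docker images hugging-face-tinybert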

In the next tutorial, learn how to deploy your containerized model to your Modzy account!

