client.models.deploy

Deploy model programmatically to your Modzy instance

client.models.deploy(self, container_image, model_name, model_version, sample_input_file=None, architecture="amd64", credentials=None, model_id=None, run_timeout=None, status_timeout=None, short_description=None, tags=[], gpu=False, long_description=None, technical_details=None, performance_summary=None, performance_metrics=None, input_details=None, output_details=None, model_picture=None)

Deploys a new model, or a new version of an existing model, to your private model library.

> 🚧 **Note:** This method only supports deployment of container images stored in Docker registries that adhere to the Docker Registry HTTP API V2 protocol.

Parameters

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| `container_image` | str | Docker container image to be deployed. This string should represent what follows a `docker pull` command | `'modzy/grpc-echo-model:1.0.0'` |
| `model_name` | str | Name of model to be deployed | `'Echo Model'` |
| `model_version` | str | Version of model to be deployed | `'0.0.1'` |
| `sample_input_file` | str | Path to local file to be used for sample inference | `'./test-input.txt'` |
| `architecture` | str | One of `{'amd64', 'arm64', 'arm'}`. If set to `arm64` or `arm`, the deploy method expedites the deployment process and bypasses some Modzy tests that are only available for models compiled for `amd64` chips | `'arm64'` |
| `credentials` | dict | Dictionary containing credentials if the container image is private. The keys in this dictionary must be `["user", "pass"]` | Docker Hub: `{"user": "<Dockerhub account username>", "pass": "<Dockerhub account password>"}`<br>AWS ECR: `{"user": "<AWS Access Key ID>", "pass": "<AWS Secret Access Key>"}` |
| `model_id` | str | Model identifier if deploying a new version to a model that already exists | `'y79drwaozn'` |
| `run_timeout` | str | Timeout threshold for container run route | `'60'` |
| `status_timeout` | str | Timeout threshold for container status route | `'60'` |
| `short_description` | str | Short description to appear on model biography page | `'This model returns the same text passed through as input, similar to an "echo".'` |
| `tags` | List | List of tags to make model more discoverable in model library | `["Text", "Language"]` |
| `gpu` | bool | Flag for whether or not model requires GPU to run | `False` |
| `long_description` | str | Description to appear on model biography page | `"Long Description"` |
| `technical_details` | str | Technical details to appear on model biography page. Markdown is accepted | `"Technical Details"` |
| `performance_summary` | str | Description providing model performance to appear on model biography page | `"Performance Summary"` |
| `performance_metrics` | List | List of dictionaries describing model performance statistics | `[{"label": "Accuracy", "category": "numeric", "type": "percentage", "description": "The average of the classification accuracies across all classes.", "order": 1, "value": 0.96}]` |
| `input_details` | List | List of dictionaries describing details of model inputs | `[{"name": "input", "acceptedMediaTypes": "application/json", "maximumSize": 1000000, "description": "Default input data"}]` |
| `output_details` | List | List of dictionaries describing details of model outputs | `[{"name": "results.json", "mediaType": "application/json", "maximumSize": 1000000, "description": "Default output data"}]` |
| `model_picture` | str | Filepath to image for model card page | `"path-to-image.jpg"` |
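When deploying from a private registry, secrets should not be hard-coded into the `credentials` dictionary. One way to assemble it is from environment variables; the sketch below uses a hypothetical helper and variable names (`DOCKER_USER`, `DOCKER_PASS`) that are not part of the SDK:

```python
import os

def registry_credentials(user_var="DOCKER_USER", pass_var="DOCKER_PASS"):
    """Build the `credentials` dict for a private registry from env vars.

    Returns None when either variable is unset, so the argument can be
    omitted for public images.
    """
    user = os.environ.get(user_var)
    password = os.environ.get(pass_var)
    if not (user and password):
        return None
    return {"user": user, "pass": password}

# Example values for demonstration only
os.environ["DOCKER_USER"] = "example-user"
os.environ["DOCKER_PASS"] = "example-pass"

creds = registry_credentials()
print(creds)
```

The resulting dictionary can then be passed directly as `credentials=creds` in the `deploy` call.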

Returns

{
    "model_data": {
        "version": "0.0.1",
        "createdAt": "2022-08-16T01:10:52.821+00:00",
        "updatedAt": "2022-08-16T01:10:53.498+00:00",
        "inputValidationSchema": "",
        "timeout": {
            "status": 60000,
            "run": 60000
        },
        "requirement": {
            "requirementId": 1
        },
        "containerImage": {
            "uploadStatus": "IN_PROGRESS",
            "loadStatus": "IN_PROGRESS",
            "uploadPercentage": 0,
            "loadPercentage": 0,
            "containerImageSize": 0,
            "repositoryName": "thjg0zuntf"
        },
        "inputs": [
            {
                "name": "input",
                "acceptedMediaTypes": "application/json",
                "maximumSize": 1000000,
                "description": "Default input data"
            }
        ],
        "outputs": [
            {
                "name": "results.json",
                "mediaType": "application/json",
                "maximumSize": 1000000,
                "description": "Default output data"
            }
        ],
        "statistics": [],
        "isActive": false,
        "longDescription": "Long Description",
        "technicalDetails": "Techincal Details",
        "isAvailable": true,
        "status": "partial",
        "performanceSummary": "Performance summary",
        "model": {
            "modelId": "thjg0zuntf",
            "latestVersion": "0.0.1",
            "latestActiveVersion": "",
            "versions": [
                "0.0.1"
            ],
            "author": "Integration",
            "name": "Echo Model",
            "description": "Short Description",
            "permalink": "thjg0zuntf-integration-echo-model",
            "features": [],
            "isActive": false,
            "isRecommended": false,
            "isCommercial": false,
            "tags": [],
            "createdByEmail": "[email protected]",
            "createdByFullName": "First Last",
            "visibility": {
                "scope": "PRIVATE"
            }
        },
        "processing": {
            "minimumParallelCapacity": 0,
            "maximumParallelCapacity": 1
        },
        "originSidecar": false
    },
    "container_url": "https://modzy-instance.app.modzy.com/models/thjg0zuntf/0.0.1"
}
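The `containerImage` block in the response reports upload and load progress, and `status` remains `"partial"` until deployment completes. A minimal sketch of inspecting those fields on the returned dictionary (here `response` stands in for the return value, using the sample above):

```python
# Inspect deployment progress on the dict returned by client.models.deploy.
# `response` is a trimmed copy of the sample response for illustration.
response = {
    "model_data": {
        "containerImage": {
            "uploadStatus": "IN_PROGRESS",
            "loadStatus": "IN_PROGRESS",
            "uploadPercentage": 0,
            "loadPercentage": 0,
        },
        "status": "partial",
    },
    "container_url": "https://modzy-instance.app.modzy.com/models/thjg0zuntf/0.0.1",
}

image = response["model_data"]["containerImage"]
still_deploying = "IN_PROGRESS" in (image["uploadStatus"], image["loadStatus"])
print(still_deploying, response["container_url"])
```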

Examples

model_data = client.models.deploy(
    container_image="modzy/grpc-echo-model:1.0.0",
    model_name="Echo Model",
    model_version="0.0.1",
    sample_input_file="./test.txt",
    run_timeout="60",
    status_timeout="60",
    short_description="This model returns the same text passed through as input, similar to an 'echo.'",
    long_description="This model returns the same text passed through as input, similar to an 'echo.'",
    technical_details="This section can include any technical information about your model. Include information about how your model was trained, any underlying architecture details, or other pertinent information an end-user would benefit from learning.",
    performance_summary="This is the performance summary."
)
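To deploy a subsequent version of the same model, pass `model_id` along with the new version and image tag. The sketch below only assembles the argument set; the values are placeholders and the final call (commented out) assumes a configured client:

```python
# Sketch: arguments for deploying version 0.0.2 of the model created above.
# Keys mirror the documented parameters; values are placeholders.
deploy_args = {
    "container_image": "my-registry/echo-model:1.0.1",
    "model_name": "Echo Model",
    "model_version": "0.0.2",
    "model_id": "thjg0zuntf",  # identifier of the existing model
    "credentials": {"user": "<registry username>", "pass": "<registry password>"},
    "tags": ["Text", "Language"],
    "gpu": False,
}
# model_data = client.models.deploy(**deploy_args)
print(sorted(deploy_args))
```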