
Python

Quickstart tutorial for Modzy's Python SDK

Modzy's Python SDK provides a convenient wrapper around many of Modzy's most popular API routes. SDK functions include querying models, submitting inference jobs, and returning results directly to your IDE or Jupyter notebook.

Installation

Use the package manager pip to install the SDK:

pip install modzy-sdk

You can also access, fork, and contribute to the source code for this SDK on GitHub.

Usage

Retrieve your API key

You can find and download your API key in your user profile. Click on your name in the menu at the top left of the screen and then click Profile & API Keys. Clicking "Get key" will download a text file with both the public and private portions of your API key. Store this file somewhere safe as you can only download it once.
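Since the key file can only be downloaded once, it is worth keeping the key out of your source code. One common pattern is to read it from an environment variable; the variable name `MODZY_API_KEY` below is an example, not an SDK convention:

```python
import os

# Read the API key from the environment so it never lands in version control.
# MODZY_API_KEY is an illustrative name; fall back to a placeholder so the
# script still runs if the variable is unset.
API_KEY = os.environ.get("MODZY_API_KEY", "replace-with-your-api-key")
```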

Initialize

Creating a client for Modzy's API takes just a couple of lines:

from modzy import ApiClient
client = ApiClient(base_url=BASE_URL, api_key=API_KEY)

Replace BASE_URL with the URL of the instance of Modzy you're using, such as https://trial.app.modzy.com/api and replace API_KEY with the API key string you just downloaded.

Basic usage

Now let's submit our first job. Choose a model and version from the available algorithms in your Modzy Library. Click on the API tab to retrieve the model identifier from the model's bio page, or choose "Python (Modzy SDK)" from the Sample Request dropdown and copy the code listed.


In this example, we'll use the Sentiment Analysis model version 1.0.1 with identifier ed542963de:

sources = {}

sources["my-input"] = {
    "input.txt": "The Modzy API is very easy to use and the documentation is delightful",
}

job = client.jobs.submit_text("ed542963de", "1.0.1", sources)
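A single job is not limited to one input: each key in the sources dictionary labels one input item, and each shows up under its own key in the results. A sketch with two inputs (the key names here are arbitrary examples):

```python
# Two independent text inputs in one job; each key will have its own
# entry in the job's results once inference completes.
sources = {
    "review-1": {"input.txt": "The product arrived on time and works great"},
    "review-2": {"input.txt": "Setup was confusing and support never replied"},
}

# Submitted the same way as a single input:
# job = client.jobs.submit_text("ed542963de", "1.0.1", sources)
```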

Modzy runs inferences asynchronously, so this code adds a job to the queue using the data in the sources dictionary. To retrieve the computed result, use the Get Results method. The average latency for the Sentiment Analysis model is 491 ms, so the resulting analysis will be available almost immediately; for longer-running models, client.results.block_until_complete(job) may be useful.

result = client.results.get(job)

You can explore the full results including metadata like queueTime, modelLatency, and drift, or you can directly access the results of the inference (notice the "my-input" key matches the sources["my-input"] used on submission):

print(result["results"]["my-input"]["results.json"]["data"]["result"])
ApiObject({
  "classPredictions": [
    {
      "class": "neutral",
      "score": 0.611
    },
    {
      "class": "positive",
      "score": 0.389
    },
    {
      "class": "negative",
      "score": 0.0
    }
  ]
})
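Because the result behaves like nested dictionaries and lists, ordinary Python operations are enough to pull out the top prediction. A sketch using the response shape shown above (a plain dict stands in for the value returned by the SDK):

```python
# The "result" payload from the sentiment model, as shown above.
data = {
    "classPredictions": [
        {"class": "neutral", "score": 0.611},
        {"class": "positive", "score": 0.389},
        {"class": "negative", "score": 0.0},
    ]
}

# Pick the prediction with the highest score.
top = max(data["classPredictions"], key=lambda p: p["score"])
print(top["class"], top["score"])  # -> neutral 0.611
```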

And that is it: you have run an inference using Modzy's Python SDK! Next you should try submitting a file for analysis or running a batch job with many inputs.