EdgeClient.inferences.run

Provides a synchronous way to run an inference

EdgeClient.inferences.run(model_identifier: str, model_version: str, input_sources: List[InputSource], explain=False, tags=None)

This method runs an inference synchronously. It is a convenience function equivalent to calling the perform_inference and block_until_complete methods sequentially, as sketched below.
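
For reference, a minimal sketch of that equivalence. The perform_inference and block_until_complete signatures are assumed here rather than taken from this page; see those methods' reference entries for exact details.

from modzy import EdgeClient
from modzy.edge import InputSource

client = EdgeClient('localhost', 55000)
client.connect()

input_object = InputSource(key="input.txt", text="Today is a great day.")

# Submit the job without waiting for it to finish...
inference = client.inferences.perform_inference("<model-id>", "<model-version>", [input_object])
# ...then block until it completes (assumes the returned inference exposes an identifier field).
# Together, these two calls match a single client.inferences.run(...).
completed = client.inferences.block_until_complete(inference.identifier)

client.close()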

Parameters

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| model_identifier | str | The model identifier. | 'ed542963de' |
| model_version | str | The model version string, in semantic version format. | '1.0.1' |
| input_sources | List[InputSource] | A list of input sources of type InputSource. | [InputSource(key="input.txt", text="Today is a great day.")] |
| explain | bool | If the model supports explainability, flag this job to return an explanation of the predictions. | True |
| tags | Mapping[str, str] | An arbitrary set of key/value tags to associate with this inference (see the sketch after this table). | |
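
A short illustrative sketch of input_sources and tags together, assuming an already-connected EdgeClient as in the Examples below; the model ID, version, and tag values are placeholders.

from modzy.edge import InputSource

# Text input; the key is the input filename defined by the model author.
text_input = InputSource(key="input.txt", text="Today is a great day.")

# Arbitrary key/value string tags to associate with this inference (placeholder values).
tags = {"project": "demo", "environment": "dev"}

inference = client.inferences.run("<model-id>", "<model-version>", [text_input], tags=tags)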

Returns

An Inference object returned from the Inference API.

Examples

from modzy import EdgeClient
from modzy.edge import InputSource

image_bytes = open("image_path.jpg", "rb").read()
input_object = InputSource(
    key="image", # input filename defined by model author
    data=image_bytes,
) 

client = EdgeClient('localhost', 55000)
client.connect()
# input_sources expects a list of InputSource objects
inference = client.inferences.run("<model-id>", "<model-version>", [input_object], explain=False, tags=None)
results = inference.result.outputs
client.close()
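
For models that support explainability, the same call can also request an explanation of the predictions. A minimal variant of the call above (run before client.close(); the model ID and version remain placeholders):

# Flag the job to return an explanation alongside the predictions,
# for models that support explainability.
inference = client.inferences.run("<model-id>", "<model-version>", [input_object], explain=True)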