How to Use Explainability


Explainability for image and text classification

The following how-to guide is specific to image and text classification models. Check back soon for more details on using explainability on other types of models.

:information-desk-person: Requesting an explanation of your model's prediction

To request a prediction explanation when submitting a job to Modzy, add "explain": "true" to the end of your job submission object as shown in the example below. This will only work for models that comply with Modzy's explainability requirements. See Explainability Formats for more details.

// Requesting an explanation of a prediction from a sentiment analysis model
{
  "model": {
    "identifier": "ed542963de",
    "version": "1.0.1"
  },
  "input": {
    "type": "text",
    "sources": {
      "us-travelers-are-back": {
        "input.txt": "This strong desire to travel has driven new trends in the industry — some of which may be here to stay. Like Burglewski's family, people are flocking to outdoor activities, rural areas and private vacation rentals, with less interest in hotels and international and urban destinations."
      }
    }
  },
  "explain": "true"
}
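The same request can also be built and submitted programmatically. Below is a minimal Python sketch that assembles the job body with the `"explain"` flag set; the endpoint path and `ApiKey` header shown in the trailing comment are assumptions for illustration — check your Modzy instance's API reference for the exact values, and note that `MODZY_URL` and `API_KEY` are placeholders.

```python
import json

# Job submission body with explainability requested.
payload = {
    "model": {"identifier": "ed542963de", "version": "1.0.1"},
    "input": {
        "type": "text",
        "sources": {
            "us-travelers-are-back": {
                "input.txt": "This strong desire to travel has driven new trends in the industry ..."
            }
        }
    },
    "explain": "true",  # request a prediction explanation
}
print(json.dumps(payload, indent=2))

# To submit (hypothetical placeholders MODZY_URL and API_KEY), POST the body
# to the jobs endpoint with your API key, e.g. with the requests library:
#   requests.post(f"{MODZY_URL}/api/jobs", json=payload,
#                 headers={"Authorization": f"ApiKey {API_KEY}"})
```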

If your model supports explainability, then Modzy will return an additional explanation object in the API response, similar to the example below.


Explainability can be slow

Generating explainable results from machine learning models requires additional computation, which can slow a model down. Keep this in mind when deciding whether to request an explanation for a given job.

:art: Viewing your model's explanations

To view the results of your model's explanations, navigate to the Operations section in the top header, and then click on the Explainability tab. You should see a table of all explainable results that have been requested by your team. Clicking on any of these rows will take you to that prediction's explanation.


List of all explainable jobs run within your team

:camera: Computer vision explanations

Explanations for computer vision model predictions appear as a mask of pixels overlaid on top of the image. These pixels identify the portions of the image that were most influential in generating the top classification.


Explanation of a computer vision model prediction
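To make the mask idea concrete, here is a toy sketch of how a pixel-mask explanation could be rendered: a highlight color is blended into the pixels the explanation flags as influential. The mask format used here (a 2-D grid of 0/1 values alongside rows of RGB tuples) is an assumption for illustration, not Modzy's actual response schema.

```python
def overlay_mask(image, mask, highlight=(255, 0, 0), alpha=0.5):
    """Return a copy of `image` (rows of RGB tuples) with `highlight`
    alpha-blended into every pixel where `mask` is 1."""
    out = []
    for img_row, mask_row in zip(image, mask):
        row = []
        for pixel, flag in zip(img_row, mask_row):
            if flag:
                # Blend: (1 - alpha) * original + alpha * highlight, per channel.
                row.append(tuple(
                    round((1 - alpha) * p + alpha * h)
                    for p, h in zip(pixel, highlight)
                ))
            else:
                row.append(pixel)
        out.append(row)
    return out

# A 1x2 gray image: the mask flags only the first pixel as influential.
image = [[(100, 100, 100), (200, 200, 200)]]
mask = [[1, 0]]
print(overlay_mask(image, mask))
```

In practice an image library (e.g. Pillow or NumPy) would do this blending over full-resolution arrays; the loop above just shows the per-pixel logic.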

:notebook-with-decorative-cover: NLP explanations

NLP explanations appear as highlights on key words or phrases. A color-coded key is provided to identify the most and least influential words in generating the top classification. NLP explanations are also visible in bar chart format. You can change to this view by clicking on the Word Scores tab.


Explanation of an NLP model prediction
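As a rough illustration of the "Word Scores" view, the sketch below turns per-word influence scores into a sorted text bar chart, most influential words first. The word-to-score mapping is invented for illustration; Modzy's actual explanation payload may structure these scores differently.

```python
# Hypothetical per-word influence scores (positive = supports the top
# classification, negative = argues against it).
word_scores = {"strong": 0.82, "desire": 0.41, "travel": 0.67, "hotels": -0.35}

def as_bar_chart(scores, width=10):
    """Render scores as text bars, ordered by absolute influence."""
    lines = []
    for word, score in sorted(scores.items(), key=lambda kv: -abs(kv[1])):
        bar = "#" * round(abs(score) * width)
        sign = "+" if score >= 0 else "-"
        lines.append(f"{word:>8} {sign} {bar}")
    return "\n".join(lines)

print(as_bar_chart(word_scores))
```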