Python sample from scratch
In this sample, the tutorial containerizes a sentiment analysis model implemented with the VADER Sentiment Analysis library so it can be used with Modzy.
The code in this document is written in Python.
This tutorial is intended for development purposes and assumes that you already have working model code.
Overview
The container setup follows these steps:
- Model input is written to the API.
- When it is ready, the API sends an HTTP response to the model container.
- The model runs the inference job and posts the results to the specified location in the filesystem.
Requirements
This process requires the installation of Docker and Python 3.
Build a web service to create the API
The web application built below implements the endpoints that the API requires:
- GET /status
- POST /run
- POST /shutdown
Create a Python virtual environment
Create a Python virtual environment in your terminal to isolate dependencies:
# linux/mac:
python3 -m venv venv
# windows:
py -m venv venv
Activate the virtual environment:
# linux/mac:
source venv/bin/activate
# windows:
venv\Scripts\activate
Create a web application
The Flask library is used to set up the web server and write the application code.
Preparation
Install the Flask and VADER libraries into the virtual environment with the pip command in the terminal:
pip install flask vader-sentiment
Record a list of everything that was installed into a requirements.txt file:
pip freeze > requirements.txt
Create a file named app.py and open it in a text editor to implement the Flask web application.
Implementation
Import the required classes and functions from the Flask library:
from flask import Flask, abort, jsonify, request
Create a Model class that sets up the model once and allows text analysis at any time:
class Model:
    """A singleton for holding the single model instance."""

    _instance = None

    @classmethod
    def get_instance(cls):
        """Get the lazily loaded model instance."""
        if cls._instance is None:
            from vader_sentiment.vader_sentiment import SentimentIntensityAnalyzer
            cls._instance = SentimentIntensityAnalyzer()
        return cls._instance
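As a quick sanity check (assuming the vader-sentiment package from the previous step is installed), you can exercise the singleton directly, for example in a Python shell:
# Quick check of the Model singleton; repeated calls return the same analyzer instance.
analyzer = Model.get_instance()
print(analyzer.polarity_scores("This model is awesome :)"))
assert Model.get_instance() is analyzer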
Create the Flask web application:
app = Flask(__name__)
Limit the length of the accepted content to 10 MB:
app.config['MAX_CONTENT_LENGTH'] = 10 * 1024 * 1024 # 10M
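Requests with a body larger than this limit are rejected by Flask with a 413 error before they reach the model code; the plain-text error handler added later in this section applies to those responses as well.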
Add a GET /status API route to check if the model is ready to accept work. For the initial call, load the model with the Model class created earlier:
@app.route('/status', methods=['GET'])
def status():
    """Get the model status.

    The `/status` route should do any model initialization (if needed) and return 200 success
    if the model has been loaded successfully and is ready to be run, otherwise error.
    """
    ok = Model.get_instance() is not None
    if ok:
        return ''
    else:
        abort(500)
Add a POST /run API route that accepts text and returns the sentiment analysis scores as JSON. This route also performs required validations on the request data.
@app.route('/run', methods=['POST'])
def run():
    """Run the model inference.

    The `/run` route should accept the work payload and return the inference results, otherwise error.
    """
    if request.mimetype != 'text/plain':
        abort(415, 'this API only supports plain text')
    text = request.get_data(as_text=True) or ''
    model = Model.get_instance()
    prediction = model.polarity_scores(text)
    return jsonify(prediction)
Add an error handler to convert any exceptions to plain text responses:
@app.errorhandler(Exception)
def errorhandler(exception):
    """Converts any errors to text response."""
    try:
        code = int(exception.code)
        description = str(exception.description)
    except (AttributeError, ValueError):
        code = 500
        description = str(exception) if app.debug else 'server error'
    return description, code, {'Content-Type': 'text/plain;charset=utf-8'}
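The endpoint list at the start of this section also includes POST /shutdown, which the remaining steps do not implement. A minimal sketch of such a route is shown below; the exact shutdown behavior expected from this route is an assumption here, so adapt it to your deployment target:
import os
import signal
import threading

@app.route('/shutdown', methods=['POST'])
def shutdown():
    """Ask the model server to exit.

    This sketch sends SIGTERM to the current process shortly after the response
    is returned. Under Gunicorn you may need to signal the master process
    (for example with os.getppid()) instead.
    """
    threading.Timer(1.0, lambda: os.kill(os.getpid(), signal.SIGTERM)).start()
    return ''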
Start the Flask development server to test the routes:
if __name__ == '__main__':
    import argparse
    import os

    parser = argparse.ArgumentParser(description='development server')
    parser.add_argument('--host', '-H', default=os.environ.get('FLASK_RUN_HOST'), help='host')
    parser.add_argument('--port', '-p', default=os.environ.get('FLASK_RUN_PORT', 8080), help='port')
    parser.add_argument('--no-debug', action='store_false', dest='debug',
                        default=os.environ.get('FLASK_DEBUG', True), help='turn off debug mode')
    args = parser.parse_args()

    app.run(debug=args.debug, host=args.host, port=args.port)
Test the application
In the terminal, start the application using the development server:
python app.py
In a separate terminal, use cURL to check if the application responds to GET /status with a 200 OK response:
curl -si "http://localhost:8080/status"
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: Werkzeug/0.14.1 Python/3.6.7
Date: Thu, 28 Mar 2019 16:20:04 GMT
Run the sentiment analysis model by posting text to POST /run. You should receive a JSON response containing the sentiment scores:
curl -s -X POST -H "Content-Type: text/plain" --data "This model is awesome :)" "http://localhost:8080/run"
{
"compound": 0.7964,
"neg": 0.0,
"neu": 0.297,
"pos": 0.703
}
curl -s -X POST -H "Content-Type: text/plain" --data "This model is terrible :(" "http://localhost:8080/run"
{
"compound": -0.7184,
"neg": 0.667,
"neu": 0.333,
"pos": 0.0
}
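Posting with any other Content-Type is rejected by the /run route with a 415 error, which the error handler above returns as plain text.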
Build the Docker container
The next step is to package the application and its dependencies into a Docker container.
For more information see: https://www.docker.com/resources/what-container
Run the application inside the container
Create a simple wrapper shell script to run as the default command inside the container. The script activates the virtual environment in the container and executes the application with Gunicorn.
Deploying the application requires a production-grade web server; this example uses the Gunicorn WSGI HTTP server.
Docker ties the lifetime of a container to the lifetime of the first process that runs in it. Use the exec shell built-in to start the Gunicorn process so that it replaces the shell and becomes that first process.
Create an empty file named entrypoint.sh and open it in a text editor. Write the wrapper shell script in this file, as follows:
#!/bin/sh
# this script activates the python virtual environment and executes the gunicorn web server
. venv/bin/activate
exec gunicorn -b :8080 --access-logfile - --error-logfile - app:app
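Note that entrypoint.sh must carry the executable bit when it is copied into the image; if it does not, run chmod +x entrypoint.sh before building, otherwise the container may fail to start with a permission error.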
Write the Dockerfile
Write the Dockerfile with the following information:
- The steps to build an image with a copy of our code and its dependencies.
- The steps to run the application as a container.
Create an empty file named Dockerfile and open it in a text editor. It will list the steps needed to build a Docker image.
This example uses an open-source Docker image as a base container: an extension of a Debian "Stretch" image that already has Python installed.
FROM python:3.6-slim-stretch
Create a new appserver user within the container. It is the user that runs the application:
This step is optional. However, it is considered a best practice to run a service as a non-root user if it can run without privileges.
# add a group and user for the application server
RUN groupadd appserver \
&& useradd --gid appserver --shell /bin/bash --create-home appserver
Set WORKDIR to the directory that holds the application code. Docker creates the directory if it doesn't exist:
WORKDIR /home/appserver/app
Copy the requirements.txt file into the container. Create a Python virtual environment and install the library requirements into it. Then, install the Gunicorn package:
COPY requirements.txt requirements.txt
RUN python -m venv venv \
&& venv/bin/pip install --no-cache-dir -r requirements.txt gunicorn
Copy the code files for the application and entry point shell script into the container:
COPY entrypoint.sh app.py ./
Change ownership of all files in the working directory to the appserver user:
RUN chown -R appserver:appserver ./
Change the user to the appserver account that runs the application:
USER appserver
Expose the network port where the web server listens to the requests. Modzy’s platform communicates with the model via this port.
EXPOSE 8080
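When testing locally, EXPOSE alone does not publish the port; it is published to the host with the -p flag of docker run, as shown later in this tutorial.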
Define the entry point that specifies the default command executed when the container starts. Use the entrypoint.sh script to start the application web server, and use the "exec" form of the ENTRYPOINT instruction (with square brackets):
ENTRYPOINT ["./entrypoint.sh"]
Build and run the Docker image
The tutorial now shows how to build the Docker image and run the containerized application.
From the terminal, build the app server image and tag it as sentiment-analysis:latest.
docker build -t sentiment-analysis:latest .
Use the sentiment-analysis:latest image to run the containerized application as a daemon on port 8080. This example names the container sentiment-analysis to keep track of it. The --rm flag removes the container's filesystem when the container exits.
docker run --name sentiment-analysis -d -p 8080:8080 --rm sentiment-analysis:latest
Check the containerized application status and run test inference jobs with the cURL commands seen earlier to call the API.
curl -si "http://localhost:8080/status"
curl -s -X POST -H "Content-Type: text/plain" --data "This model is awesome :)" "http://localhost:8080/run"
curl -s -X POST -H "Content-Type: text/plain" --data "This model is terrible :(" "http://localhost:8080/run"
Once tests are complete, stop the sentiment-analysis container.
docker stop sentiment-analysis
Save the image to a .tar archive for upload to the API.
docker save -o sentiment-analysis-latest.tar sentiment-analysis:latest
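If you want to verify the archive before uploading it, you can load it back into a local Docker daemon with docker load -i sentiment-analysis-latest.tar.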
Deploy your model
To deploy your model, gather the relevant model metadata, push the container image to a Docker registry, and go to Model Deployment.