Take 2 mins to deploy your machine learning model — Simple.
This article briefly describes how to deploy a machine learning model to production and serve it in real time to your website or app. Cloud platforms like AWS, Azure, and GCP offer many good services and technologies that take care of the whole process, with benefits like load balancing, scaling, and notifications. This article focuses on a basic version of deploying and serving models, for those who are new and want to deploy a model right away.
Let us take a sample scenario. Say I have developed a model for a food delivery company or a fashion tech company that gives recommendations to users once they log in to the app, based on their history of purchases, click-stream events, views, store preferences, geography, etc.
Let's break it into steps
- Create a REST API for the model using Flask.
- Dockerize the Flask files.
Flask API:
Most people use Django or Flask for building APIs. I am choosing Flask here because it is simple and flexible, and gives us control over how we implement things. Flask routes are defined with the decorator @app.route("/"): every path after localhost (or your DNS) is given as the parameter of app.route(), and the function under that decorator is triggered when that path is requested.
A simple Flask function will look like this:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()
Debugging a Flask app is as simple as shown below, and you can see where your code is going wrong.
app.run(host='localhost', debug=True)
If you are using Flask to build web-based services, then it is better to use the Flask-DebugToolbar, like this:
from flask import Flask, render_template
from flask_debugtoolbar import DebugToolbarExtension

app = Flask(__name__)
app.debug = True
app.config['SECRET_KEY'] = 'change-me'  # the toolbar requires a secret key
toolbar = DebugToolbarExtension(app)

@app.route('/form')
def form():
    return render_template('form.html')

app.run(host='localhost', port=5000)
Parameters can be passed in app.route() within angle brackets, like this:
@app.route('/users/<username>')
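To make this concrete, here is a minimal sketch of a parameterized route; the route path and function name are just for illustration:

```python
from flask import Flask

app = Flask(__name__)

# <username> in the path is captured and passed to the view function
@app.route('/users/<username>')
def show_user(username):
    return "Recommendations for " + username

if __name__ == "__main__":
    app.run()
```

Visiting /users/alice would then trigger show_user('alice').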
So a sample Flask code for a Machine Learning model will look like this:
from flask import Flask, jsonify, request
from flask_restful import Resource, Api, reqparse
from pymongo import MongoClient
import configparser
import os
"""
Like any normal Python file, write all your functions and make sure they are invoked on the correct routes
"""
# Getting configuration data for the service
config = configparser.ConfigParser()
config.read_file(open(r'./application.cfg'))
client_connection = config.get('Mongo', 'client')
db = config.get('Mongo', 'db')

# Defining the Flask app
app = Flask(__name__)
api = Api(app)

@app.route('/model', methods=['POST'])
def userRecommendation():
    try:
        req_data = request.get_json()
        input1 = req_data['username']
        input2 = req_data['Brand']
        """
        1) Get all the inputs needed for your app from the POST call
        2) Run the model, or fetch values from the db, or whatever you want to do
        3) Assign the result to output and return it
        """
        # 'output' comes from your model logic above
        return jsonify({'Result': output})
    except Exception as e:
        return jsonify({"errors": [{"status": "400", "code": "", "details": 'missing ' + str(e)}]})

@app.errorhandler(404)
def page_not_found(e):
    return jsonify({"errors": [{"status": "404", "code": "", "details": "No such service request found"}]})
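The service above reads its Mongo settings from ./application.cfg. A minimal sketch of that file, assuming a [Mongo] section with client and db keys (the values here are placeholders):

```
[Mongo]
client = mongodb://localhost:27017
db = recommendations
```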
Docker:
Docker is a tool that uses the idea of isolated resources to package applications with all their dependencies installed, so they run wherever you want. Docker defines containers as follows:
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
To know more about Docker, the official documentation gives a detailed explanation. To create a Docker setup you will need the following:
1. docker-compose — a file where you mention all the versions, services, and settings of the services, like which port each should run on, the environment, and container names. A sample would look like:
version: "3.8"
services:
  flask:
    build: ./flask
    container_name: flask
    restart: always
    environment:
      - APP_NAME=MyFlaskApp
    expose:
      - 5080
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "5022:5022"
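The build: ./flask and build: ./nginx entries assume a project layout along these lines (the file names are illustrative):

```
.
├── docker-compose.yml
├── flask/
│   ├── Dockerfile
│   ├── app.py
│   ├── app.ini
│   └── requirements.txt
└── nginx/
    ├── Dockerfile
    └── nginx.conf
```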
2. Dockerfile — this is the file where you mention what a container should do once it starts and has to run your code: which directory to pick the files from (the working directory), what to install, and which command to run.
FROM python:3
WORKDIR /app
# Use whichever folder you want your app to start in as the working directory. I am using /app
ADD . /app
RUN pip3 install -r requirements.txt
CMD [ "uwsgi", "app.ini" ]
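The CMD above expects a uWSGI config file named app.ini. A minimal sketch, assuming the Flask object is named app inside app.py and that nginx talks to uWSGI on port 5080 (the port exposed in the compose file):

```
[uwsgi]
; app.py contains the Flask object named "app"
module = app:app
master = true
processes = 4
; matches the port exposed in docker-compose
socket = :5080
vacuum = true
```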
3. Requirements file — where you mention all the packages and their versions, so that the same versions are pulled and installed in every container.
pymongo==3.11.0
pytz==2020.1
uWSGI==2.0.19.1
Werkzeug==1.0.1
aniso8601==8.0.0
click==7.1.2
configparser==5.0.0
Flask==1.1.2
Flask-RESTful==0.3.8
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
Sample Git: git clone https://github.com/harsha89/ml-model-tutorial.git
You can then build the Docker image like this:
docker build -t ml-model .
Then run the container and serve the API from it:
docker run -d -p 5000:5000 ml-model
Once the docker is running you can test it by passing your required parameters that are needed for your model.
curl --location --request POST 'http://localhost:5000/model' --header 'Content-Type: application/json' --data-raw '{
"username": "9899999732", "Brand": "Puma"
}'
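You can also verify the endpoint logic before building the image, using Flask's built-in test client — no running container needed. This sketch stubs the handler with a placeholder result in place of the real model:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/model', methods=['POST'])
def user_recommendation():
    req_data = request.get_json()
    # Placeholder: a real service would run the model here
    return jsonify({'Result': 'recommendations for ' + req_data['username']})

# test_client() simulates HTTP requests in-process
client = app.test_client()
resp = client.post('/model', json={'username': '9899999732', 'Brand': 'Puma'})
print(resp.get_json())
```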
If you want to go a step further, I suggest using Kubernetes together with Docker, which takes scaling, load balancing, and app spawning to the next level. You can then put it behind a DNS and serve it like yourdns.com/<your path>.
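As a pointer in that direction, a minimal Kubernetes Deployment for the same image might look like this; the names, replica count, and port are assumptions:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
        - name: ml-model
          image: ml-model:latest
          ports:
            - containerPort: 5000
```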
Don't deliver a product, deliver an experience.
Regards,
Vigneshwar Ilango