What you have already done:
- created the ML algorithms,
- created the Django web service, with the ML code and database models for endpoints, algorithms, and requests,
- created the predict view, which routes requests to the ML algorithms,
- created the A/B testing code in the server.
In this chapter you will define the Docker container for our server code. With Docker it is easy to deploy the code to the selected infrastructure, and it is easier to scale the service if needed.
Prepare the code
Before creating the Docker definition, we need to make some changes in the server code. Please edit the backend/server/server/settings.py file and set the ALLOWED_HOSTS variable:
ALLOWED_HOSTS = ['0.0.0.0']
Additionally, set the STATIC_ROOT and STATIC_URL variables at the end of the settings:
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static/'
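As a quick sanity check, STATIC_ROOT simply joins BASE_DIR with a static subdirectory. A minimal sketch, assuming BASE_DIR resolves to /app/backend/server inside the container (the hardcoded path here is a stand-in; in settings.py Django computes BASE_DIR from the location of the settings module):

```python
import os

# Stand-in for the value Django computes in settings.py.
BASE_DIR = "/app/backend/server"

STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATIC_URL = "/static/"

print(STATIC_ROOT)  # /app/backend/server/static
```

This is the same directory that the nginx config and the docker-compose static volume will later point at, so collectstatic output ends up where nginx can serve it.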
Please add the requirements.txt file in the project's main directory:
Django==2.2.4
django-filter==2.2.0
djangorestframework==3.10.3
joblib==0.14.0
Markdown==3.1.1
numpy==1.17.3
pandas==0.25.2
requests==2.22.0
scikit-learn==0.21.3
Dockerfiles
Let's define the Docker files for the nginx server and for our server application. We will keep them in separate directories:
# please run in project's main directory
mkdir docker
mkdir docker/nginx
mkdir docker/backend
Please add a Dockerfile in the docker/nginx directory:
# docker/nginx/Dockerfile
FROM nginx:1.13.12-alpine
CMD ["nginx", "-g", "daemon off;"]
The daemon off; directive keeps nginx running in the foreground, so the container does not exit right after starting. Additionally, we will add the nginx config file. Please add the docker/nginx/default.conf file:
server {
    listen 8000 default_server;
    listen [::]:8000;

    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_api;
    }

    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://wsgiserver:8000;
    }

    location /static/ {
        autoindex on;
        alias /app/backend/server/static/;
    }
}
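To make the config easier to follow, here is an illustrative Python sketch of the routing decision nginx makes for each request. The route function and the static_files set are hypothetical helpers for illustration only, not how nginx actually works internally:

```python
def route(uri, static_files):
    """Mimic default.conf: /static/ is aliased to disk, try_files serves
    matching files, and everything else falls through to @proxy_api,
    which proxies to the wsgiserver container on port 8000."""
    if uri.startswith("/static/"):
        return "alias /app/backend/server" + uri
    if uri in static_files:
        return "serve " + uri  # try_files found a matching file
    return "proxy_pass http://wsgiserver:8000" + uri  # @proxy_api fallback

print(route("/api/v1/", set()))
# proxy_pass http://wsgiserver:8000/api/v1/
```

In short: static assets are served directly by nginx, and all API traffic is forwarded to gunicorn running in the wsgiserver container.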
Now, let's define the Dockerfile for our server application. Please add the file docker/backend/Dockerfile:
FROM ubuntu:xenial

RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update && \
    apt-get install -y python3.6 python3.6-dev python3-pip

WORKDIR /app
COPY requirements.txt .

RUN rm -f /usr/bin/python && ln -s /usr/bin/python3.6 /usr/bin/python
RUN rm -f /usr/bin/python3 && ln -s /usr/bin/python3.6 /usr/bin/python3

RUN pip3 install -r requirements.txt
RUN pip3 install gunicorn==19.9.0

ADD ./backend /app/backend
ADD ./docker /app/docker
ADD ./research /app/research

RUN mkdir -p /app/backend/server/static
In this Dockerfile, we start from the Ubuntu base image, install all of the needed packages, and switch the default Python to Python 3.6. At the end, we copy the application code into the image.
We will define a startup script for our application. Please add the docker/backend/wsgi-entrypoint.sh file, and make it executable with chmod +x docker/backend/wsgi-entrypoint.sh:
#!/usr/bin/env bash

echo "Start backend server"

until cd /app/backend/server
do
    echo "Waiting for server volume..."
done

until ./manage.py migrate
do
    echo "Waiting for database to be ready..."
    sleep 2
done

./manage.py collectstatic --noinput

gunicorn server.wsgi --bind 0.0.0.0:8000 --workers 4 --threads 4
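The script relies on bash's until loop, which retries a command until it exits successfully. A self-contained sketch of the same pattern, with a counter standing in for a command (such as the migrate call) that fails a few times before succeeding:

```shell
#!/usr/bin/env bash
# Retry-until-success pattern, as used in wsgi-entrypoint.sh.
# try_command is a stand-in: it fails twice, then succeeds.
attempt=0
try_command() {
    attempt=$((attempt + 1))
    [ "$attempt" -ge 3 ]  # non-zero exit (failure) until the third attempt
}

until try_command
do
    echo "Waiting..."
    sleep 0.1
done
echo "Succeeded after $attempt attempts"
```

In the real script the loop keeps retrying for as long as needed, which covers the case where the database container is still starting up when the backend container launches.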
We will use this startup script to apply database migrations and collect static files before the application is started with gunicorn.
We now have Dockerfiles defined for the nginx server and for our application. We will manage them with the docker-compose command. Let's add a docker-compose.yml file in the main directory:
version: '2'

services:
    nginx:
        restart: always
        build:
            context: .
            dockerfile: ./docker/nginx/Dockerfile
        ports:
            - 8000:8000
        volumes:
            - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
            - static_volume:/app/backend/server/static
    wsgiserver:
        build:
            context: .
            dockerfile: ./docker/backend/Dockerfile
        entrypoint: /app/docker/backend/wsgi-entrypoint.sh
        volumes:
            - static_volume:/app/backend/server/static
        expose:
            - 8000

volumes:
    static_volume: {}
To build the Docker images, please run:
sudo docker-compose build
To start the containers, please run:
sudo docker-compose up
You should be able to see the running server at the address:
http://0.0.0.0:8000/api/v1/
Congratulations!
That was the last step of this tutorial. You have successfully created your own web service that can serve machine learning models.
The full code is available on GitHub: https://github.com/pplonski/my_ml_service.
Feedback
I'm looking for your feedback! Please fill in this form.
Next step
Please check our advanced tutorial as the next step.