# Set Up a TorchBlaze ML Project
TorchBlaze lets you set up your ML project in no time! With our template, you can jump straight into building and training your model, leaving the hassles of testing and deployment to us. The template is a basic example to get you started on your journey with TorchBlaze and build some great things!
## Getting Started with the Template

Generate the template for your project using the following command:
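A typical invocation looks like the following; the subcommand and `--project_name` flag reflect TorchBlaze's CLI, but check `torchblaze --help` if your version differs:

```shell
torchblaze generate_template --project_name <your_project_name>
```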
This will create a directory titled `<your_project_name>` containing the template files.
> **Note:** Before proceeding, make sure you have `torch` installed on your system or virtual environment. If you do not have it installed, you can install the relevant PyTorch version from here.
Install the required Python packages for running the template example using the following command:
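Assuming the generated template includes a `requirements.txt` (a common convention; verify this in your generated project), the dependencies can be installed with:

```shell
pip install -r requirements.txt
```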
You are now set to modify the template to your liking and continue with the deployment!
## Running the Template

To begin training the existing model definition in `model/model.py` and save the final model, you can execute:
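For example (the `--save-model` flag is illustrative; see the argument parser in `model/train.py` for the flags your template actually defines):

```shell
python model/train.py --save-model
```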
You can supply any of the command-line arguments defined in `model/train.py` to tune the training process. Once your model is trained and saved, you can either run your ready-to-deploy Flask API on your local machine like this:
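Running the API locally is a plain Flask launch, assuming the template's entry point is `app.py`:

```shell
python app.py
```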
or you can generate a Docker image that wraps your Flask API and model with the command:
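The Docker image is generated through the TorchBlaze CLI; the exact subcommand and flag below are an assumption, so consult `torchblaze --help` for the actual invocation:

```shell
torchblaze generate_docker --project_name <your_project_name>
```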
Once you are convinced your API is ready for deployment, you can use the generated `Procfile` to deploy your API as a Heroku app.
## Contents of the Template

### `model/model.py`
#### File Purpose

This file should contain:

- the model layer definitions
- the model architecture definition
#### File Contents

- A model whose layers are defined in `__init__(self)`, taking a batch of 28x28 grayscale images as input and outputting a batch of softmax-probability vectors, each of length 10.
- A CNN-based model for classifying handwritten digits, with the model architecture defined in `forward()`.
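As a rough sketch of what such a file contains (the template's actual layer sizes and names may differ), a minimal CNN of this shape is:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    """A minimal CNN for batches of 28x28 grayscale digit images."""

    def __init__(self):
        super().__init__()
        # Layer definitions live in __init__.
        self.conv1 = nn.Conv2d(1, 32, 3)   # 28x28 -> 26x26
        self.conv2 = nn.Conv2d(32, 64, 3)  # 26x26 -> 24x24
        self.fc1 = nn.Linear(64 * 12 * 12, 128)
        self.fc2 = nn.Linear(128, 10)      # 10 digit classes

    def forward(self, x):
        # The architecture (how the layers connect) lives in forward().
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)             # 24x24 -> 12x12
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        # log_softmax pairs with NLL loss; exp() recovers the
        # softmax probabilities described above.
        return F.log_softmax(self.fc2(x), dim=1)
```

The `log_softmax` output is a design choice that matches the NLL loss used in `model/train.py`; applying `exp()` to it yields the length-10 softmax-probability vectors.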
### `model/train.py`
#### File Purpose

This file should contain the functionality for:

- setting hyperparameters through methods such as command-line arguments or config files
- loading and pre-processing data for model input
- creating the model, optimizer, and criterion (optional: scheduler)
- a training loop that iterates once over the complete training dataset and trains the model
- a testing loop that performs inference once over the complete testing dataset
- an epoch loop that calls the training loop the number of times defined in the hyperparameters
- evaluating the model on the test set to measure its accuracy
- saving the model to disk
#### File Contents

- Hyperparameter setup using command-line arguments.
- Loading the MNIST dataset using `torchvision.datasets` and storing it in a `DataLoader`.
- Creating the model from `model/model.py` and using the `Adadelta` optimizer and NLL loss.
- Training loop in the `train` function.
- Unit testing using `torchblaze.mltests`, which is clearly detailed here.
- Testing loop in the `test` function.
- Epoch loop, model evaluation, and model saving in the `main` function.
### `app.py`
#### File Purpose

This file should contain the functionality for:

- creating a Flask-RESTful API from a Flask app
- listening for POST or GET requests made by a client
- parsing input from a request to prepare it as input to the model
- parsing the model output to return the model inference in a user-friendly manner
#### File Contents

- The Flask and Flask-RESTful packages are used to create the REST API.
- A `Resource` is created that listens for POST requests carrying image files.
- The resource is added to the API along with two other dummy resources used for running API tests.
- The input image file is resized and converted to a `torch.Tensor`, ready for inference.
- The argmax of the model output gives the predicted class, which is returned to the client as JSON.
### `model/utils.py`

#### File Purpose

This file should contain any additional helper functions that are required by other sections of the model code but do not fit into the core ML workflow defined in the purpose of `model/train.py`.
### Procfile

Heroku apps require a `Procfile`, which specifies the commands that run when a Heroku app starts. A more detailed explanation of Procfiles can be found here. The template provides a `Procfile` that you can use directly to deploy your API.
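For illustration only, a `Procfile` for a Flask API usually consists of a single `web` process line such as the one below; the generated file may differ, and serving through `gunicorn` is an assumption here:

```
web: gunicorn app:app
```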
### Docker

This file is used to create the Docker image for the API so that it can run on any system. Docker image generation is described in detail here.
### tests.json

This file is used to create the dummy requests that APITests uses to verify that the API works. The format and usage of the `tests.json` file are described in detail here.