Using fastai to develop and deploy image classifier applications


Author| [email protected]
Compile Flin
Source: analyticsvidhya


Fastai is a popular open-source library for learning and practicing machine learning and deep learning. It was founded by Jeremy Howard and Rachel Thomas, with the goal of making deep learning resources more accessible. All of its resources, such as courses, software, and research papers, are free of charge.

In August 2020, fastai v2 was released; this version is designed to be a faster and more flexible deep learning framework. The 2020 course combines the core concepts of machine learning and deep learning, and also introduces users to important aspects of model production and deployment.

In this article, drawing on the first three lessons of the beginner course, I will cover techniques for building a fast and simple image classification model. As you build the model, you will also learn how to easily develop a web application for it and deploy it to a production environment.

This article follows Jeremy's top-down teaching method from his course: you first learn to train an image classifier, and the details of the model used for classification are explained later. To follow this article, you should know Python, because fastai is written in Python and built on PyTorch. It is recommended that you run this code in Google Colab or Gradient, because we need GPU access and fastai is easy to install on both platforms.

Install, import and load datasets

!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()   # on Colab, this prompts for Google Drive access

from fastbook import *
from fastai.vision.all import *

Install fastai and import the necessary libraries. If you are using Colab, you must grant access to Google Drive to save files and images. You can download an image dataset from sources like Kaggle or Bing Image Search; fastai also bundles a large collection of datasets. I use a set of chest X-ray images.

path = Path('/content/gdrive/My Drive/Covid19images')

Save the path of the dataset location in a Path object. If you use one of fastai's bundled datasets, such as Pets, you can use the following code:

path = untar_data(URLs.PETS)/'images'

This will download and extract images from the fastai pets dataset collection.

Check the image path and display some sample images from the dataset. I've used the Python Imaging Library (PIL) for this.
from PIL import Image
img = Image.open(path/'train/covid/1-s2.0-S1684118220300682-main.pdf-002-a2.png')

In this image classification problem, I will train the model to classify X-ray images into a covid or noCovid class. The preprocessed dataset has been placed in separate covid and noCovid folders (source: Christian Tutivén Gálvez).

If you are using the Pets dataset, use the following function to label the images according to the pet's name:

def is_cat(x): return x[0].isupper()

Pets is a collection of cat and dog images. Cat image filenames start with a capital letter, so it is easy to classify them.
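For example, given typical Pets-style filenames (the example filenames below are illustrative), the labeling function works like this:

```python
def is_cat(x): return x[0].isupper()

# In the Pets dataset, cat filenames start with a capital letter,
# dog filenames with a lowercase one
print(is_cat("Birman_1.jpg"))   # cat -> True
print(is_cat("beagle_3.jpg"))   # dog -> False
```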

Image transformation

Image transformation is a key step in training image models. It is also called data augmentation, and it is necessary to avoid overfitting the model. There are many ways to transform images, such as resizing, cropping, squishing, and padding. However, squishing distorts the original information in the image, and padding adds extra empty pixels. Therefore, randomly resizing and cropping the images tends to produce better results.

As the following example shows, this method samples a random region of each image in each epoch. This lets the model learn more details of each image and thus achieve higher accuracy.

Another important point to remember is to transform only the training images, never the validation images. The fastai library handles this by default.

item_tfms=Resize(128, ResizeMethod.Squish)
item_tfms=Resize(128, ResizeMethod.Pad, pad_mode='zeros')
item_tfms=RandomResizedCrop(128, min_scale=0.3)  # min_scale=0.3 samples at least 30% of the image area
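To make the random-resizing idea concrete, here is a minimal pure-Python sketch of choosing a crop box that covers at least min_scale of the image area. This is an illustration only, not fastai's actual implementation (the real RandomResizedCrop also randomizes the aspect ratio and resizes the crop):

```python
import random

def random_crop_box(width, height, min_scale=0.3):
    """Pick a random crop box covering at least min_scale of the image area.

    A simplified illustration of RandomResizedCrop's sampling step.
    """
    scale = random.uniform(min_scale, 1.0)         # fraction of the area to keep
    crop_w = int(width * scale ** 0.5)
    crop_h = int(height * scale ** 0.5)
    left = random.randint(0, width - crop_w)
    top = random.randint(0, height - crop_h)
    return left, top, left + crop_w, top + crop_h  # PIL-style (l, t, r, b) box

box = random_crop_box(128, 128, min_scale=0.3)
```

Sampling a different box in every epoch is what lets the model see varied views of the same image.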

The fastai library provides a set of standard augmentations through the aug_transforms function. If the images are of uniform size, the augmentations can be applied in batches, saving a lot of training time.

tfms = aug_transforms(do_flip = True, flip_vert = False, mult=2.0)

The DataLoaders class in fastai is very convenient for storing the various objects used for training and validating a model. If you want to customize the objects used during training, you can use the DataBlock class in combination with DataLoaders.

data = ImageDataLoaders.from_folder(path, train="train", valid_pct=0.2, item_tfms=Resize(128), batch_tfms=tfms, bs=30, num_workers=4)

If you want finer control over how images and labels are gathered, you can use DataBlock to define the images and labels as two separate blocks, as shown in the following code snippet. Use the defined DataBlock with the dataloaders function to access the images.

data_block = DataBlock(blocks=(ImageBlock, CategoryBlock), get_items=get_image_files,
                       splitter=RandomSplitter(valid_pct=0.2, seed=42), get_y=parent_label,
                       item_tfms=Resize(128))
dls = data_block.dataloaders(path)
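Under the hood, RandomSplitter simply shuffles the item indices with a fixed seed and reserves a valid_pct fraction for validation. A minimal pure-Python sketch of that assumed behavior (not fastai's exact code):

```python
import random

def random_split(n_items, valid_pct=0.2, seed=42):
    """Reproducibly shuffle indices and split them into train/valid sets."""
    idxs = list(range(n_items))
    random.Random(seed).shuffle(idxs)       # fixed seed -> same split every run
    n_valid = int(n_items * valid_pct)
    return idxs[n_valid:], idxs[:n_valid]   # (train indices, valid indices)

train_idx, valid_idx = random_split(100, valid_pct=0.2, seed=42)
# 80 training items and 20 validation items, with no overlap
```

Fixing the seed ensures the same images land in the validation set every time you rebuild the DataLoaders, which makes metric comparisons meaningful.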

Model training

To train on the image dataset, a pre-trained CNN model is used. This approach is called transfer learning. Jeremy suggests using pre-trained models to speed up training and improve accuracy, especially for computer vision problems.

learn = cnn_learner(data, resnet34, metrics=error_rate)
learn.fine_tune(4)  # fine-tune the pre-trained weights; the epoch count here is illustrative

The resnet34 architecture is used, and the results are validated with the error rate. Because a pre-trained model is used, the fine_tune method is called instead of fitting the model from scratch.

You can run more epochs and see how the model performs. Choose the right number of epochs to avoid overfitting.

Instead of error_rate, you can use accuracy (accuracy = 1 - error rate) to validate model performance. Both validate the output of the model. In this example, 20% of the data is reserved for validation, so the model trains on only 80% of the data. This is a critical step in checking the performance of any machine learning model. You can also run the model with a different ResNet depth (options: 18, 50, 101, and 152). However, unless you have a large dataset, deeper networks may again lead to overfitting.

Verify model performance

The performance of the model can be validated in different ways. A popular method is the confusion matrix. The diagonal values of the matrix indicate the correct predictions for each category, while the other cells indicate how many predictions were wrong.

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
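To see exactly what the matrix counts, here is a tiny pure-Python confusion matrix for a two-class problem (an illustration only; fastai computes and plots this for you, and the labels below are made up):

```python
def confusion_matrix(targets, preds, n_classes=2):
    """Rows are actual classes, columns are predicted classes."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(targets, preds):
        m[t][p] += 1
    return m

# 0 = covid, 1 = noCovid; made-up targets and predictions for illustration
targets = [0, 0, 0, 1, 1, 1]
preds   = [0, 0, 1, 1, 1, 0]
m = confusion_matrix(targets, preds)
# m == [[2, 1], [1, 2]]: diagonal cells are correct, off-diagonal cells are errors
```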

Fastai provides a useful function to view the worst predictions, ranked by highest loss. The output of this function shows the predicted label, target label, loss, and probability for each image. A high probability means the model is highly confident; it varies between 0 and 1. A high loss indicates poor model performance on that image.

interp.plot_top_losses(5, nrows=1, figsize = (25,5))
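Conceptually, plot_top_losses ranks the validation images by loss and shows the worst ones. A pure-Python sketch with made-up filenames and loss values:

```python
# (filename, loss) pairs -- hypothetical validation results
results = [("img_a.png", 0.02), ("img_b.png", 1.85), ("img_c.png", 0.40)]

# Rank descending by loss and keep the worst k, like plot_top_losses(k)
k = 2
top_losses = sorted(results, key=lambda r: r[1], reverse=True)[:k]
# -> [("img_b.png", 1.85), ("img_c.png", 0.40)]
```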

Another great fastai feature, the image classifier cleaner (a GUI widget), can remove faulty images by deleting them or re-labeling them. This is very helpful for data cleaning and thus improves the accuracy of the model.

Jeremy recommends running this function after basic training of the model, because the trained model helps reveal the kinds of anomalies in the dataset.

from fastai.vision.widgets import *
cleaner = ImageClassifierCleaner(learn)
cleaner

Save and deploy models

After training the model and being satisfied with the results, you can deploy it. To deploy a model to a production environment, you need to save its architecture and trained parameters. For this purpose, the export method is used. The exported model is saved as a .pkl file, created by pickle (a Python module).
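The export method relies on Python's pickle module. The save-and-restore mechanism itself can be illustrated with plain pickle (a simplified stand-in: fastai's export also stores the data transforms and model weights):

```python
import io
import pickle

# A plain dict stands in for the trained learner (illustration only)
model_stub = {"arch": "resnet34", "classes": ["covid", "noCovid"]}

buf = io.BytesIO()            # learn.export() writes an export.pkl file similarly
pickle.dump(model_stub, buf)

buf.seek(0)
restored = pickle.load(buf)   # load_learner() reads the file back the same way
```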


Create an inference learner from the exported file; it can be used to deploy the model as an application. The inference learner predicts the output of new images one at a time. The predict call returns three values: the predicted category, the index of the predicted category, and the probability of each category.

learn_inf = load_learner(path/'export.pkl')

('noCovid', tensor(1), tensor([5.4443e-05, 9.9995e-01]))  # prediction output
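The returned tuple can be unpacked directly. Below is a sketch using plain Python values in place of the fastai tensors shown above; the commented-out predict call and its image path are hypothetical:

```python
# pred, pred_idx, probs = learn_inf.predict(path/'test/sample.png')  # hypothetical path
# Plain Python values stand in for the fastai tensors:
pred, pred_idx, probs = ("noCovid", 1, [5.4443e-05, 9.9995e-01])

predicted_class = pred        # the predicted category, e.g. 'noCovid'
confidence = probs[pred_idx]  # probability assigned to the predicted category
```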

There are several ways to create a web application for deploying the model. One of the easiest is to use IPython widgets as GUI components to build the application objects in a Jupyter notebook.

import ipywidgets as widgets
btn_upload = widgets.FileUpload()
out_pl = widgets.Output()
lbl_pred = widgets.Label()

After designing the application elements, deploy the model using Voilà, which runs a Jupyter notebook like a web application: it removes all cell inputs and displays only the outputs. To view the notebook as a Voilà web application, replace the word "notebooks" in the browser URL with "voila/render". Voilà must be installed and run from the same notebook that contains the trained model and the IPython widgets.

!pip install voila
!jupyter serverextension enable voila --sys-prefix
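The URL change described above can be sketched as a simple string replacement (the notebook URL below is hypothetical; in practice you edit the browser address bar by hand):

```python
# Hypothetical notebook URL; in practice you edit the browser address bar directly
notebook_url = "http://localhost:8888/notebooks/covid_classifier.ipynb"
voila_url = notebook_url.replace("notebooks", "voila/render")
# voila_url is now "http://localhost:8888/voila/render/covid_classifier.ipynb"
```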


In this way, you have used the fastai library to build and deploy a cool image classifier application in just eight steps! What I have shown in this article is just the tip of the iceberg. There are many more fastai components for deep learning use cases in NLP and computer vision that you can explore.

Below are fastai learning resources, as well as my Git repo containing the image classifier code and images explained in this article.

Link to the original text:
