How to Train YOLOv5 on a Custom Dataset

Time: 2021-10-13

By Jacob Solawetz and Joseph Nelson
Source: Roboflow blog

With the introduction of YOLOv5, the YOLO family of object detection models has become more powerful than ever. In this article, we will walk through how to train YOLOv5 to recognize custom objects for your own use case.

Thanks to Ultralytics for putting this repository together. We believe that, combined with clean data management tools, this technology can be easily used by any developer who wants to deploy computer vision into their projects.

We use the public blood cell detection dataset, which you can export yourself. You can also follow this tutorial with your own custom data.

To train our detector, we take the following steps:

  • Install YOLOv5 dependencies

  • Download custom YOLOv5 object detection data

  • Define YOLOv5 model configuration and architecture

  • Train a custom YOLOv5 detector

  • Evaluate YOLOv5 performance

  • Visualize YOLOv5 training data

  • Run YOLOv5 inference on test images

  • Export saved YOLOv5 weights for future inference

YOLOv5: What's New?

Just two months ago, we were very excited about the introduction of EfficientDet from Google Brain and wrote a series of blog posts breaking down EfficientDet. We thought this model might overtake the YOLO family's prominence in the realtime object detection world – it turns out we were wrong.

Within three weeks, YOLOv4 was released in the Darknet framework, and we wrote more posts breaking down the YOLOv4 research.

Just a few hours before this article was written, YOLOv5 was released, and we found it exceptionally clear.

YOLOv5 is written in the Ultralytics PyTorch framework, which is very intuitive to use and very fast at inference. In fact, we and many others regularly convert YOLOv3 and YOLOv4 Darknet weights to Ultralytics PyTorch weights in order to run inference faster with a lighter library.

Does YOLOv5 perform better than YOLOv4? We will share an evaluation with you soon. In the meantime, you can form an initial guess from the comparison of YOLOv5 and EfficientDet below.


Performance comparison of YOLOv5 and EfficientDet

Notably, YOLOv4 was not evaluated in the YOLOv5 repository. That said, YOLOv5 is easier to use, and it performs very well on the custom data we first ran it on.

We recommend following along in the YOLOv5 Colab notebook as you read.

Installing the YOLOv5 environment

To start with YOLOv5, we first clone the YOLOv5 repository and install its dependencies. This sets up our programming environment so it is ready to run object detection training and inference commands.

!git clone https://github.com/ultralytics/yolov5  # clone repo
!pip install -U -r yolov5/requirements.txt  # install dependencies

%cd /content/yolov5

Then, we can take a look at the training environment Google Colab provides us for free.

import torch
from IPython.display import Image  # for displaying images
from utils.google_utils import gdrive_download  # for downloading models/datasets

print('torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))

You will likely receive a Tesla P100 GPU from Google Colab. Here is what I received:

torch 1.5.0+cu101 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', major=6, minor=0, total_memory=16280MB, multi_processor_count=56)

The GPU will allow us to speed up training time. Colab is also nice because it comes pre-installed with torch and cuda. If you are attempting this tutorial locally, there may be additional steps to set up YOLOv5.

Download custom YOLOv5 object detection data

In this tutorial, we download custom object detection data in YOLOv5 format from Roboflow. We train YOLOv5 to detect cells in the bloodstream using the public blood cell detection dataset. You can use the public blood cell dataset or upload your own dataset.

A quick note on labeling tools

If you have unlabeled images, you will need to label them first. For free open-source labeling tools, we recommend the getting started with LabelImg guide or the getting started with the CVAT annotation tool guide. Try labeling around 50 images before continuing with this tutorial. To improve your model's performance later, you will want to add more labels.

Once you have labeled your data, to move it into Roboflow, create a free account and then drag your dataset in, in any format: (VOC XML, COCO JSON, TensorFlow Object Detection CSV, etc.).

Once uploaded, you can select preprocessing and augmentation steps:


Settings selected for the BCCD sample dataset

Then, click Generate and Download, and you will be able to select the YOLOv5 PyTorch format.


Select "YOLOv5 PyTorch"

When prompted, be sure to select "Show Code Snippet". This will output a download curl script so you can easily port your data into Colab in the proper format.

curl -L "https://public.roboflow.ai/ds/YOUR-LINK-HERE" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip

Downloading from Colab

Downloading a custom object dataset in YOLOv5 format

The export creates a YOLOv5 .yaml file named data.yaml, specifying the location of the YOLOv5 images folder, the YOLOv5 labels folder, and information about our custom classes.
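For reference, the data.yaml for the blood cell dataset looks roughly like this (the exact paths and class names are assumptions based on a typical Roboflow export of BCCD):

train: ../train/images
val: ../valid/images

nc: 3
names: ['Platelets', 'RBC', 'WBC']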

Define YOLOv5 model configuration and architecture

Next, we write a model configuration file for our custom object detector. For this tutorial, we chose the smallest, fastest base model of YOLOv5. You have the option to pick from other YOLOv5 models, including:

  • YOLOv5s
  • YOLOv5m
  • YOLOv5l
  • YOLOv5x

You can also edit the network structure in this step, though you generally will not need to do so. Below is the YOLOv5 model configuration file, which we name custom_yolov5s.yaml:

nc: 3  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

anchors:
  - [10,13, 16,30, 33,23] 
  - [30,61, 62,45, 59,119]
  - [116,90, 156,198, 373,326] 

backbone:
  [[-1, 1, Focus, [64, 3]],
   [-1, 1, Conv, [128, 3, 2]],
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]], 
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 6, BottleneckCSP, [1024]],
  ]

head:
  [[-1, 3, BottleneckCSP, [1024, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],
   [-2, 1, nn.Upsample, [None, 2, "nearest"]],
   [[-1, 6], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, BottleneckCSP, [512, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],
   [-2, 1, nn.Upsample, [None, 2, "nearest"]],
   [[-1, 4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, BottleneckCSP, [256, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],

   [[], 1, Detect, [nc, anchors]],
  ]
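Note that each nn.Conv2d output layer in the head has na * (nc + 5) channels: na anchors per detection layer, each predicting nc class scores plus four box coordinates and one objectness score.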

Train a custom YOLOv5 detector

With our data.yaml and custom_yolov5s.yaml files ready to go, we are ready to train!

To kick off training, we run the training command with the following options (a full example command is shown after the list):

  • img: define the input image size

  • batch: determine the batch size

  • epochs: define the number of training epochs. (Note: 3000+ is common here!)

  • data: set the path to our yaml file

  • cfg: specify our model configuration

  • weights: specify a custom path to weights. (Note: you can download weights from the Ultralytics Google Drive folder)

  • name: result names

  • nosave: only save the final checkpoint

  • cache: cache images for faster training

Run the training command:
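A representative invocation (the data path, epoch count, and run name here are illustrative assumptions, not the exact values from the original run):

!python train.py --img 416 --batch 16 --epochs 100 --data '../data.yaml' --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache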


Training the custom YOLOv5 detector. It trains very quickly!

During training, you want to watch mAP@0.5 to see how your detector is performing – to learn more about this metric, see this article.

Evaluate the performance of the custom YOLOv5 detector

Now that we have completed training, we can evaluate how well the training procedure performed by looking at the validation metrics. The training script writes out TensorBoard logs. We visualize them:
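A minimal sketch for viewing those logs inside Colab, assuming they land in the default runs/ directory:

# load the TensorBoard notebook extension and point it at the log directory
%load_ext tensorboard
%tensorboard --logdir runs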

Visualizing TensorBoard results on our custom dataset

If you can't visualize TensorBoard for some reason, you can also plot the metrics with utils.plot_results, which saves them as results.png.
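A sketch of that fallback (the utils.utils module path reflects early versions of the repository and is an assumption):

# plot the run's training metrics and save them to results.png
from utils.utils import plot_results
plot_results()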

I stopped training a little early here. You want to take the trained model weights from the point where the validation mAP reaches its peak.

Visualize YOLOv5 training data

During training, the YOLOv5 training pipeline creates batches of training data with augmentations. We can visualize both the training data ground truth and the augmented training data.

Our ground truth training data

Our training data with automatic YOLOv5 augmentations
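The training run writes a few of these batch mosaics out as image files, so we can display them in the notebook; a minimal sketch, where the train_batch0.jpg file name is an assumption based on early YOLOv5 versions:

from IPython.display import Image, display

# show the first augmented training batch mosaic saved by train.py
display(Image(filename='train_batch0.jpg', width=900))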

Run YOLOv5 inference on test images

Now we take our trained model and run inference on test images. After training completes, the model weights are saved in weights/.

For inference, we invoke those weights along with a conf specifying the model confidence (the higher the required confidence, the fewer the predictions) and an inference source. The source can accept a directory of images, individual images, video files, and a device's webcam port. For my source, I moved my test/*.jpg images to test_infer/.

!python detect.py --weights weights/last_yolov5s_custom.pt --img 416 --conf 0.4 --source ../test_infer

Inference runs very quickly. On our Tesla P100, YOLOv5s hits 142 frames per second!

YOLOv5s inference at 142 FPS (0.007 s/image)

Finally, we visualize our detector's inferences on the test images.
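A sketch for rendering the annotated outputs in the notebook, assuming detect.py wrote them to inference/output as early versions of the repository did:

import glob
from IPython.display import Image, display

# display every annotated test image the detector produced
for image_path in glob.glob('inference/output/*.jpg'):
    display(Image(filename=image_path))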

YOLOv5 inference on test images

Export saved YOLOv5 weights for future inference

Now that our custom YOLOv5 object detector has been verified, we may want to take the weights out of Colab for use in a live computer vision task. To do so, we import a Google Drive module and copy the weights across.

from google.colab import drive
drive.mount('/content/gdrive')  # mount Google Drive in the Colab filesystem

# copy the trained weights out to Drive
%cp /content/yolov5/weights/last_yolov5s_custom.pt /content/gdrive/My\ Drive
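Later, you can point detect.py at the exported weights to run inference anywhere; a sketch with illustrative paths:

# run inference with the weights copied back from Drive (paths are hypothetical)
!python detect.py --weights last_yolov5s_custom.pt --img 416 --conf 0.4 --source your_images/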

Conclusion

We hope you enjoyed training your custom YOLOv5 detector!

YOLOv5 is remarkably convenient to use. It trains quickly, runs inference quickly, and performs well.
Let's get it out!

Original link: https://blog.roboflow.ai/how-to-train-yolov5-on-a-custom-dataset/
