
How to use Yolo v5 object detection algorithm to detect custom objects


This article was published as part of the Data Science Blogathon

Introduction

In this article, I will explain how to use the YOLO v5 algorithm to detect and classify more than 60 different types of road traffic signs. We'll start from the very basics and cover every step: dataset setup, training, and testing. In this article we will use a Windows 10 machine.

YOLO is an acronym that stands for You Only Look Once. Version 5, released by Ultralytics in June 2020, is now one of the most advanced object detection algorithms available. It is a convolutional neural network (CNN) that detects objects in real time with high accuracy. The approach uses a single neural network to process the entire image, dividing it into regions and predicting bounding boxes and probabilities for each region. These boxes are weighted by the predicted probabilities. The method "looks only once" at the image, meaning it makes its predictions after a single forward propagation through the neural network. It then delivers the detected objects after non-maximum suppression (which ensures that each object is detected only once).
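To make the "looks only once" idea concrete, here is a minimal sketch that runs a pretrained YOLOv5 model through PyTorch Hub (assuming PyTorch is installed; the hub call downloads the Ultralytics code and COCO-pretrained weights on first use, and the image URL is just an illustration):

import torch

# Load the small COCO-pretrained model.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# A single forward pass over the whole image; non-maximum suppression runs
# internally so each object is reported only once.
results = model('https://ultralytics.com/images/zidane.jpg')

results.print()                  # summary of detections
print(results.pandas().xyxy[0])  # xmin, ymin, xmax, ymax, confidence, class, name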


Its architecture mainly consists of three parts:

1. Backbone: The backbone is mainly used to extract key features from the input image. YOLO v5 uses CSP (Cross Stage Partial) networks as its backbone to extract rich, informative features from the input image.

2. Neck: The neck is mainly used to build feature pyramids. Feature pyramids help the model generalize well when it comes to object scaling: they help it recognize the same object at different sizes and scales, and they are very useful for performing well on previously unseen data. Models such as FPN, BiFPN, and PANet use different feature-pyramid approaches.

PANet is used as the neck in YOLO v5 to build feature pyramids.

3. Head: The head is mainly responsible for the final detection step. It applies anchor boxes to the feature maps and produces the final output vectors with class probabilities, objectness scores, and bounding boxes.

The head of the YOLO v5 model is the same as in the previous YOLO v3 and v4 versions.
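Concretely, these three parts appear in the model configuration files under the repository's models/ directory. Below is a heavily abridged sketch of how models/yolov5s.yaml is organized (layer rows follow a [from, number, module, args] convention; the exact layers and values vary between releases, so treat this as structure only, not a verbatim copy):

nc: 80                 # number of classes (COCO default; overridden by your data)
depth_multiple: 0.33   # scales the number of layers (the "s" in yolov5s)
width_multiple: 0.50   # scales the number of channels

backbone:              # CSP-based feature extractor
  [[-1, 1, Conv, [64, 6, 2, 2]],   # [from, number, module, args]
   [-1, 3, C3, [128]],             # CSP bottleneck blocks
   ...]

head:                  # PANet neck plus the detection head
  [[-1, 1, Conv, [512, 1, 1]],
   ...,
   [[17, 20, 23], 1, Detect, [nc, anchors]]]  # detect at three scales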

Advantages and disadvantages of Yolo v5

  • It’s about 88% smaller than YOLOv4 (27MB vs. 244MB)
  • It’s about 180% faster than YOLOv4 (140fps vs 50fps)
  • It’s almost as accurate as YOLOv4 on the same task (0.895 mAP vs. 0.892 mAP)
  • But the main problem is that, unlike the other YOLO versions, no official paper has been released for YOLOv5. Also, YOLO v5 is still under active development and receives frequent updates from Ultralytics; the developers may change some settings in the future.

Table of contents

1. Set up the virtual environment in Windows 10.

2. Clone Yolo v5’s GitHub Repository.

3. Preparation and pre-processing of the data set.

4. Training the model.

5. Prediction and live testing.

Let’s get started! 🤗

Create a virtual environment

First, we will set up the virtual environment by running these commands in your Windows Command Prompt:

1. Install Virtualenv (Run the following command to install the virtual environment)

$ pip install virtualenv

2. Create an environment (Run the following command to create the virtual environment)

$ py -m venv YoloV5_VirEnv

3. Activate the environment (run the following command to activate it)

$ YoloV5_VirEnv\Scripts\activate

You can also deactivate it later (run the following command if you want to deactivate the environment)

$ deactivate

Preparing YOLO

After activating your virtual environment, clone this GitHub repository, created and maintained by Ultralytics.

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5

Directory structure

yolov5/
    .github/
    data/
    models/
    utils/
    .dockerignore
    .gitattributes
    .gitignore
    .pre-commit-config.yaml
    CONTRIBUTING.md
    detect.py
    Dockerfile
    export.py
    hubconf.py
    LICENSE
    README.md
    requirements.txt
    setup.cfg
    train.py
    tutorial.ipynb
    val.py

Installing the necessary libraries: First, we will install all the libraries required for image processing (OpenCV and Pillow), deep learning (TensorFlow and PyTorch), and matrix manipulation (NumPy):

$ pip install -r requirements.txt
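As a quick sanity check (a minimal sketch, independent of the repository code), you can verify that PyTorch was installed correctly and can see your GPU; if it cannot, training will fall back to the much slower CPU:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if the GPU can be used for training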

Prepare the data set

Download the fully labelled dataset from this link.

Then extract the zip file and move it into the yolov5/ directory.

1144images_dataset/
    train/
    test/

Create a file named data.yaml inside your yolov5/ directory and paste the code below into it. This file contains your class labels and the paths to the training and testing datasets.

train: 1144images_dataset/train
val: 1144images_dataset/test
nc: 77
names: ['200m',
        '50-100m',
        'Ahead-Left',
        'Ahead-Right',
        'Axle-load-limit',
        'Barrier Ahead',
        'Bullock Cart Prohibited',
        'Cart Prohobited',
        'Cattle',
        'Compulsory Ahead',
        'Compulsory Keep Left',
        'Compulsory Left Turn',
        'Compulsory Right Turn',
        'Cross Road',
        'Cycle Crossing',
        'Compulsory Cycle Track',
        'Cycle Prohibited',
        'Dangerous Dip',
        'Falling Rocks',
        'Ferry',
        'Gap in median',
        'Give way',
        'Hand cart prohibited',
        'Height limit',
        'Horn prohibited',
        'Humpy Road',
        'Left hair pin bend',
        'Left hand curve',
        'Left Reverse Bend',
        'Left turn prohibited',
        'Length limit',
        'Load limit 5T',
        'Loose Gravel',
        'Major road ahead',
        'Men at work',
        'Motor vehicles prohibited',
        'Nrrow bridge',
        'Narrow road ahead',
        'Straight prohibited',
        'No parking',
        'No stoping',
        'One way sign',
        'Overtaking prohibited',
        'Pedestrian crossing',
        'Pedestrian prohibited',
        'Restriction ends sign',
        'Right hair pin bend',
        'Right hand curve',
        'Right Reverse Bend',
        'Right turn prohibited',
        'Road wideness ahead',
        'Roundabout',
        'School ahead',
        'Side road left',
        'Side road right',
        'Slippery road',
        'Compulsory sound horn',
        'Speed limit',
        'Staggred intersection',
        'Steep ascent',
        'Steep descent',
        'Stop',
        'Tonga prohibited',
        'Truck prohibited',
        'Compulsory turn left ahead',
        'Compulsory right turn ahead',
        'T-intersection',
        'U-turn prohibited',
        'Vehicle prohibited in both directions',
        'Width limit',
        'Y-intersection',
        'Sign_C',
        'Sign_T',
        'Sign_S',
        'No entry',
        'Compulsory Keep Right',
        'Parking',
]

We will use 77 different categories.
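For reference, the images in train/ and test/ are paired with YOLO-format annotation files: each image has a .txt file of the same name in which every line describes one object as <class-index> <x-center> <y-center> <width> <height>, with all coordinates normalized to the range 0–1. A hypothetical label line (values invented purely for illustration):

4 0.512 0.430 0.120 0.215

Here class index 4 would correspond to 'Axle-load-limit', the fifth entry in the names list above.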

Model training with Yolo v5

Now, run this command to train the model on your dataset. You can change the batch size based on your computer's specifications. Training time depends on your machine's performance; if it is slow, prefer Google Colab.

You can also train other variants of the YOLOv5 model, which can be found here. Each one requires a different amount of computational power and offers a different trade-off between FPS (frames per second) and accuracy.

In this article, we will use the YOLOv5s variant, because it is the smallest and simplest of all.

$ python train.py --data data.yaml --cfg yolov5s.yaml --batch-size 8 --name Model
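If you want more control, train.py also accepts a few other commonly used options; a sketch assuming the current Ultralytics command-line interface (defaults can change between releases):

$ python train.py --data data.yaml --cfg yolov5s.yaml --weights yolov5s.pt --img 640 --epochs 100 --batch-size 8 --name Model

Starting from the pretrained yolov5s.pt weights usually speeds up convergence compared to training from scratch.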

Now, inside runs/train/Model/, you will find your final trained model.

runs/train/Model/ 
       weights/
            best.pt
            last.pt
       events.out.tfevents.1638984167.LAPTOP-7CJ5UG09.6292.0
       hyp.yaml
       opt.yaml
       results.txt
       results.png
       train_batch0.jpg
       train_batch1.jpg
       train_batch2.jpg

best.pt contains your final model for detection and classification.

The results.txt file contains a summary of the accuracy and losses achieved in each epoch.

The other images contain some plots and graphs that will be useful for further analysis.
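Training also writes a TensorBoard event file (the events.out.tfevents entry above), so you can optionally browse the loss and mAP curves interactively. Assuming TensorBoard is installed (it is pulled in via requirements.txt), run:

$ tensorboard --logdir runs/train

and open the printed local URL in your browser.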

Testing the YOLO v5 model

Move outside your yolov5/ directory and clone this repository, which contains the model testing code.

$ git clone https://github.com/aryan0141/RealTime_Detection-And-Classification-of-TrafficSigns
$ cd RealTime_Detection-And-Classification-of-TrafficSigns

Directory structure

RealTime_Detection-And-Classification-of-TrafficSigns/
    Codes
    Model
    Results
    Sample Dataset
    Test
    classes.txt
    Documentation.pdf
    README.md
    requirements.txt
    vidd1.mp4

Now copy the model we trained above and paste it into this directory.

Note: I have already included a trained model in the Model/ directory, but you can also replace it with your own trained model.

Move inside the directory where the code is.

$ cd Codes/

Put your videos or images in the Test/ directory. I have already included some sample videos and images for your reference.

To test images:

$ python detect.py --source ../Test/test1.jpeg --weights ../Model/weights/best.pt

To test videos:

$ python detect.py --source ../Test/vidd1.mp4 --weights ../Model/weights/best.pt

For webcam:

$ python detect.py --source 0 --weights ../Model/weights/best.pt

The final images and videos are stored in the Results/ directory.
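Besides detect.py, you can also load the trained weights programmatically through PyTorch Hub; a minimal sketch (assuming internet access for the first hub call, and paths relative to the Codes/ directory as above):

import torch

# Load the custom-trained weights via the Ultralytics hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='../Model/weights/best.pt')

# One forward pass on a test image; NMS is applied internally.
results = model('../Test/test1.jpeg')
results.print()   # per-class detection counts and inference speed
results.save()    # saves an annotated copy (to runs/detect/ by default)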

(Sample detection output image)

Frames per second (FPS) depends on the GPU you use. I got around 50 FPS on my Nvidia MX350 2GB graphics card.

This completes our discussion of the YOLO v5 project; we have built one of the more exciting data science projects out there.

LinkedIn

Here is my LinkedIn profile in case you want to connect with me. I'll be glad to connect with you.

GitHub

Here is my GitHub profile, where you can find the full code used in this article.

Final note

Thanks for reading!

Do check out my other blogs as well.

I hope you enjoyed the article. If you liked it, share it with your friends too. Something not mentioned, or want to share your thoughts? Feel free to comment below and I will get back to you.

The media described in this article is not owned by Analytics Vidhya and is used at the author’s discretion.


