# Yolov5 + StrongSORT with OSNet
<div align="center">
<p>
<img src="MOT16_eval/track_pedestrians.gif" width="400"/> <img src="MOT16_eval/track_all.gif" width="400"/>
</p>
<br>
<div>
<a href="https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/actions"><img src="https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/workflows/CI%20CPU%20testing/badge.svg" alt="CI CPU testing"></a>
<br>
<a href="https://colab.research.google.com/drive/18nIqkBr68TkK8dHdarxTco6svHUJGggY?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
</div>
## Introduction
This repository contains a highly configurable two-stage tracker that adjusts to different deployment scenarios. The detections generated by [YOLOv5](https://github.com/ultralytics/yolov5), a family of object detection architectures and models pretrained on the COCO dataset, are passed to [StrongSORT](https://github.com/dyhBUPT/StrongSORT) ([paper](https://arxiv.org/pdf/2202.13514.pdf)), which combines motion and appearance information based on [OSNet](https://github.com/KaiyangZhou/deep-person-reid) ([paper](https://arxiv.org/abs/1905.00953)) in order to track the objects. It can track any object that your Yolov5 model was trained to detect.
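Conceptually, the two-stage pipeline looks like the sketch below. All names in it (`detect`, `encoder`, `Tracker.update`) are illustrative placeholders, not the repository's actual API; see `track.py` for the real entry point.
```python
# Conceptual sketch of a two-stage tracker: detection first, association second.
# The callables used here are illustrative placeholders, not exported by this repository.

def run_two_stage_tracking(frames, detect, encoder, tracker):
    for frame in frames:
        boxes = detect(frame)                          # stage 1: Yolov5-style detections per frame
        features = [encoder(frame, b) for b in boxes]  # appearance embeddings (OSNet-style ReID)
        tracks = tracker.update(boxes, features)       # stage 2: motion + appearance association (StrongSORT-style)
        yield frame, tracks                            # each track carries a persistent id
```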
## Tutorials
* [Yolov5 training (link to external repository)](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)
* [Deep appearance descriptor training (link to external repository)](https://kaiyangzhou.github.io/deep-person-reid/user_guide.html)
* [Yolov5 StrongSORT OSNet evaluation](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/wiki/Evaluation)
* [StrongSORT ablation study](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/wiki/Yolov5DeepSORTwithOSNet-vs-Yolov5StrongSORTwithOSNet-ablation-study-on-MOT16)
## Before you run the tracker
1. Clone the repository recursively:
   `git clone --recurse-submodules https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet.git`
   If you already cloned and forgot to use `--recurse-submodules`, you can run `git submodule update --init`.
2. Make sure that you fulfill all the requirements: Python 3.8 or later with all [requirements.txt](https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch/blob/master/requirements.txt) dependencies installed, including torch>=1.7 (see the quick check below). To install them, run:
   `pip install -r requirements.txt`
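After installing, a quick way to check that the torch requirement is met is the short sketch below (not part of the repository, just a sanity check):
```python
# Sanity check: confirm the installed torch version and whether a GPU is visible.
import torch

print("torch version:", torch.__version__)           # should be 1.7 or later
print("CUDA available:", torch.cuda.is_available())  # False just means torch sees no GPU
```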
## Tracking sources
Tracking can be run on most video formats, as well as on images, directories, webcams, and streams:
```bash
$ python track.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
## Select object detection and ReID model
### Yolov5
There is a clear trade-off between model inference speed and accuracy. To match your speed/accuracy needs, select a Yolov5 family model; it will be downloaded automatically:
```bash
$ python track.py --source 0 --yolo-weights yolov5n.pt --img 640
yolov5s.pt
yolov5m.pt
yolov5l.pt
yolov5x.pt --img 1280
...
```
### StrongSORT
The same trade-off applies to StrongSORT's ReID models. Choose one based on your needs from this ReID [model zoo](https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO):
```bash
$ python track.py --source 0 --strong-sort-weights osnet_x0_25_market1501.pt
osnet_x0_5_market1501.pt
osnet_x0_75_msmt17.pt
osnet_x1_0_msmt17.pt
...
```
## Filter tracked classes
By default the tracker tracks all MS COCO classes.
If you only want to track persons, I recommend downloading [these weights](https://drive.google.com/file/d/1gglIwqxaH2iTvy6lZlXuAcMpd_U0GCUb/view?usp=sharing) for better performance:
```bash
python track.py --source 0 --yolo-weights yolov5/weights/crowdhuman_yolov5m.pt --classes 0 # tracks persons, only
```
If you want to track a subset of the MS COCO classes, add their corresponding indices after the `--classes` flag:
```bash
python track.py --source 0 --yolo-weights yolov5s.pt --classes 15 16  # tracks cats and dogs, only
```
[Here](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/) is a list of all the possible objects that a Yolov5 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero.
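If you prefer to look the indices up programmatically, the minimal sketch below prints them from a COCO-pretrained Yolov5 model (it assumes internet access so `torch.hub` can fetch `ultralytics/yolov5`; it is not part of this repository):
```python
# Print the class index -> name mapping of a COCO-pretrained Yolov5 model.
# Assumes torch.hub can download the ultralytics/yolov5 model.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")
names = model.names  # list or dict of class names, depending on the Yolov5 version
items = sorted(names.items()) if isinstance(names, dict) else enumerate(names)
for idx, name in items:
    print(idx, name)  # e.g. 0 person; pass the indices you want to --classes
```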
## MOT compliant results
MOT compliant results can be saved to your experiment folder `runs/track/<yolo_model>_<deep_sort_model>/` with:
```bash
python track.py --source ... --save-txt
```
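As an illustration, the sketch below reads such a results file into per-frame track lists, assuming MOTChallenge-style columns `frame, id, bb_left, bb_top, bb_width, bb_height, ...`; verify the exact column order against what `track.py` writes before relying on it:
```python
# Hypothetical reader for a MOT-style results file produced with --save-txt.
# Assumed columns: frame, id, bb_left, bb_top, bb_width, bb_height, ...
# Check the exact layout written by track.py; this is only an illustrative sketch.
from collections import defaultdict

def load_mot_results(path):
    frames = defaultdict(list)  # frame index -> list of (track_id, x, y, w, h)
    with open(path) as f:
        for line in f:
            fields = line.replace(",", " ").split()  # tolerate comma- or space-separated rows
            if len(fields) < 6:
                continue
            frame, track_id = int(float(fields[0])), int(float(fields[1]))
            x, y, w, h = map(float, fields[2:6])
            frames[frame].append((track_id, x, y, w, h))
    return frames

# Hypothetical usage:
# tracks = load_mot_results("runs/track/exp/results.txt")
# print(f"{len(tracks)} frames contain tracked objects")
```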
## Cite
If you find this project useful in your research, please consider citing it:
```latex
@misc{yolov5-strongsort-osnet-2022,
    title={Real-time multi-camera multi-object tracker using YOLOv5 and StrongSORT with OSNet},
    author={Mikel Broström},
    howpublished = {\url{https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet}},
    year={2022}
}
```
## Contact
For Yolov5 StrongSORT OSNet bugs and feature requests, please visit [GitHub Issues](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/issues). For business inquiries or professional support requests, please send an email to: yolov5.deepsort.pytorch@gmail.com