The purpose of this repository is to test pre-trained YOLOv8 models on a folder of images with minimal hyperparameter tuning. As such, feel free to modify scripts/detect.py
and the folder structure to your liking, or add new models to the model_zoo
folder for inference. Only YOLOv8 models are supported due to slight changes in model architecture from YOLOv5; YOLOv9 models have not been tested yet.
In a real-world scenario, please run MegaDetector first to remove all non-animal images before running a finer-grained detection and classification model, such as the ones here. YOLO models also work on video clips.
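The MegaDetector filtering step mentioned above can be sketched in plain Python: MegaDetector's batch-output JSON marks animal detections with category "1", so non-animal images can be dropped before running the models here. The file names and confidence threshold below are illustrative only.

```python
import json

ANIMAL_CATEGORY = "1"   # MegaDetector batch output: "1" = animal, "2" = person, "3" = vehicle
CONF_THRESHOLD = 0.2    # illustrative; tune for your own data

def animal_images(md_output: dict, threshold: float = CONF_THRESHOLD) -> list[str]:
    """Return the image files that contain at least one animal detection."""
    keep = []
    for image in md_output.get("images", []):
        for det in image.get("detections") or []:
            if det["category"] == ANIMAL_CATEGORY and det["conf"] >= threshold:
                keep.append(image["file"])
                break
    return keep

if __name__ == "__main__":
    # Tiny inline example standing in for a real MegaDetector results file.
    sample = {
        "images": [
            {"file": "cam1/0001.jpg",
             "detections": [{"category": "1", "conf": 0.91, "bbox": [0.1, 0.1, 0.3, 0.3]}]},
            {"file": "cam1/0002.jpg",
             "detections": [{"category": "2", "conf": 0.88, "bbox": [0.2, 0.2, 0.4, 0.5]}]},
            {"file": "cam1/0003.jpg", "detections": []},
        ]
    }
    print(animal_images(sample))  # ['cam1/0001.jpg']
```

In practice you would `json.load()` the MegaDetector output file and copy or symlink only the kept images into data/input.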
Install Git here and Mamba Miniforge according to the instructions here, and then open Miniforge Prompt
. This is essential for managing the packages required by this repository and their updates. Alternatively, you can use a package manager of your choice, e.g. Anaconda. Afterwards, follow the steps below to install and run yolo_model_zoo
. When running the following code, be sure to run the commands one line at a time instead of all together, to make errors easier to catch.
On Windows, navigate to a folder where you would like to have this repository saved and download it:
mkdir c:\git
cd c:\git
git clone https://github.com/jesstytam/yolo_model_zoo
Navigate into the repository's folder to create an environment and install the YOLO package:
cd yolo_model_zoo
mkdir data\output
mamba create -n yolov8 python=3.10
mamba activate yolov8
pip install ultralytics
On macOS/Linux, navigate to a folder where you would like to have this repository saved and download it:
mkdir git
cd git
git clone https://github.com/jesstytam/yolo_model_zoo
Navigate into the repository's folder to create an environment and install the YOLO package:
cd yolo_model_zoo
mkdir data/output
mamba create -n yolov8 python=3.10
mamba activate yolov8
pip install ultralytics
The default settings are as follows:
python scripts/detect.py
# OR (these two commands are equivalent)
python scripts/detect.py --model_name best.pt --folder_path data/input --save_detections False --confidence 0.1
The default setting calls the small YOLOv8 model and saves the detection results (images with bounding boxes). However, it sets the confidence threshold to 0.1,
which is rather low. Fiddle around with these settings until you are happy with your predictions.
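As a rough sketch of how a CLI like the one above might be wired up, the snippet below mirrors the four flags with argparse. The flag names and defaults are taken from the example command only; the actual scripts/detect.py may differ. Note that a helper is needed to parse `--save_detections False` correctly, since argparse's `type=bool` would treat the non-empty string "False" as truthy.

```python
import argparse

def str2bool(value: str) -> bool:
    # Parse the flag's text explicitly: type=bool would make "False" truthy.
    return value.lower() in ("true", "1", "yes")

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of scripts/detect.py's CLI; flag names and
    # defaults mirror the example command above, not the actual script.
    parser = argparse.ArgumentParser(description="Run a YOLOv8 model on a folder of images.")
    parser.add_argument("--model_name", default="best.pt")
    parser.add_argument("--folder_path", default="data/input")
    parser.add_argument("--save_detections", type=str2bool, default=False)
    parser.add_argument("--confidence", type=float, default=0.1)
    return parser

if __name__ == "__main__":
    # With no arguments, the defaults match `python scripts/detect.py`.
    args = build_parser().parse_args([])
    print(args)
```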
Your raw images should be saved within data/input
for the detection task. Detection results are saved in data/output/detections.csv
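Once detections.csv exists, it can be summarised with the standard library alone. The column names below (image, class, confidence) are assumptions for illustration — check the header of your own detections.csv and adjust.

```python
import csv
import io
from collections import Counter

def count_detections(csv_text: str) -> Counter:
    # Tally how many detections were recorded per class label.
    # Assumes a "class" column; adapt to the real header of detections.csv.
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["class"] for row in reader)

if __name__ == "__main__":
    # Inline sample standing in for open("data/output/detections.csv").read()
    sample = """image,class,confidence
0001.jpg,Koala,0.87
0002.jpg,Cat,0.54
0003.jpg,Koala,0.91
"""
    print(count_detections(sample))  # Counter({'Koala': 2, 'Cat': 1})
```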
YOLOv8n is the smallest model, at only 6 MB. It should be able to run on consumer-level GPUs.
The dataset used for training the models here was part of the Ecoflow dataset. From its 26 classes, I extracted 1000 random images from 14 of them for model training. The species included in the training dataset are as follows:
0: Brown Bandicoot
1: Red-necked Wallaby
2: Brushtail Possum
3: Cat
4: Red Fox
5: Rabbit Hare
6: Dog (or Dingo)
7: Eastern Grey Kangaroo
8: Echidna
9: Pig
10: Euro
11: Fallow Deer
12: Long-nosed Bandicoot
13: Koala
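For convenience, the mapping above can be kept as a Python dict when decoding a model's integer class predictions back into species names (the ids must match the order the model was trained with):

```python
# Class-id → species map, copied from the list above.
CLASS_NAMES = {
    0: "Brown Bandicoot",
    1: "Red-necked Wallaby",
    2: "Brushtail Possum",
    3: "Cat",
    4: "Red Fox",
    5: "Rabbit Hare",
    6: "Dog (or Dingo)",
    7: "Eastern Grey Kangaroo",
    8: "Echidna",
    9: "Pig",
    10: "Euro",
    11: "Fallow Deer",
    12: "Long-nosed Bandicoot",
    13: "Koala",
}

print(CLASS_NAMES[4])  # Red Fox
```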
If you have any suggestions, please create a new issue and I will respond when I have some free time.