Update README_SBeA_tracker.md
YNCris committed Apr 7, 2023
1 parent af702c6 commit 2755d93
Showing 1 changed file (README_SBeA_tracker.md) with 40 additions and 35 deletions.

## Run
### 1. Create/load project
SBeA_tracker is managed in a project folder. The first step is to create or load a project.

GUI:
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/tracker_gui.png)

Input:
Create A New 'configfile.yaml': select a path and define a name to save your project (workspace), or load an existing 'configfile.yaml' in the first textbox.

Output:
The workspace of your project.

Demo results:
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/create_project.png)
Notes:
evals folder: save model evaluation results
models folder: save deep learning models
configfile.yaml: the configuration of SBeA_tracker
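
If you want to see the moving parts outside the GUI, here is a minimal sketch of what a new workspace amounts to: the folders listed in the Notes plus a stub configfile.yaml. The YAML keys used here are illustrative assumptions, not the exact schema that SBeA_tracker writes.

```python
# Minimal sketch of an SBeA_tracker-style workspace, assuming the folder layout
# described in the Notes above. The YAML keys are illustrative only; the GUI
# writes the real configfile.yaml schema.
from pathlib import Path
import yaml  # requires PyYAML

def create_workspace(root: str, name: str) -> Path:
    """Create the workspace skeleton and a stub configfile.yaml."""
    workspace = Path(root) / name
    for sub in ("datasets", "evals", "models"):
        (workspace / sub).mkdir(parents=True, exist_ok=True)
    stub = {
        "project_name": name,         # hypothetical key
        "workspace": str(workspace),  # hypothetical key
    }
    with open(workspace / "configfile.yaml", "w") as f:
        yaml.safe_dump(stub, f)
    return workspace

if __name__ == "__main__":
    print(create_workspace("./projects", "demo_project"))
```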

### 2. Set configurations
The second step is to set the configurations for training.

GUI:
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/config_gui.png)

Input:
The paths of your social and ID data.

Output:
The updated configfile.yaml.

Your data need to be organized as follows (see the naming sketch below):
Social data:
F1: the recording serial number
F2: the single animal name, such as A1 (animal one)
F3: the date
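
As an illustration of the F1/F2/F3 fields, the sketch below parses a video file name into those parts. The hyphen-separated pattern and the file extension are assumptions made for the example; follow the exact naming template shown in the data organization above.

```python
# Illustrative parsing of a video file name into the F1/F2/F3 fields described
# above. The hyphen-separated "F1-F2-F3.avi" pattern is an assumption for this
# example, not the mandated SBeA_tracker format.
import re
from typing import NamedTuple

class VideoName(NamedTuple):
    recording: str  # F1: recording serial number
    animal: str     # F2: single animal name, e.g. A1
    date: str       # F3: date

PATTERN = re.compile(r"^(?P<rec>[^-]+)-(?P<animal>[^-]+)-(?P<date>\d{8})\.avi$")

def parse_video_name(filename: str) -> VideoName:
    m = PATTERN.match(filename)
    if m is None:
        raise ValueError(f"unexpected file name: {filename}")
    return VideoName(m["rec"], m["animal"], m["date"])

print(parse_video_name("rec001-A1-20230407.avi"))
# VideoName(recording='rec001', animal='A1', date='20230407')
```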

### 3. Load data
The third step is to load data according to configfile.yaml. If the train and ID data are not in the paths set in step 2, this step copies them into those folders. Additionally, the raw frames, trajectories, backgrounds, and frames for labeling are extracted in this step (a minimal extraction sketch appears after the demo results below). If the train and ID data already exist in these folders, you only need to run 'load train videos' without filling in any textbox.

GUI:
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/load_gui.png)

Input:
Social and ID data.

Output:
The preprocessed data.

Demo results:
The raw frames in .\datasets\raw_video_images
The trajectories in .\datasets\video_trajectories
The manual label frames in .\datasets\manual_labels
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/manual_label_data.png)
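
The GUI performs this extraction automatically. Purely as an illustration of what "raw frames" and "backgrounds" mean here, a minimal OpenCV sketch is shown below; the paths, sampling step, and output names are assumptions, not SBeA_tracker's internal code.

```python
# Illustrative extraction of raw frames and a median background from one video,
# similar in spirit to what the load-data step produces. Paths, sampling step,
# and output names are assumptions.
from pathlib import Path
import cv2   # requires opencv-python
import numpy as np

def extract_frames_and_background(video_path: str, out_dir: str, step: int = 100):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    samples, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(str(out / f"frame_{idx:06d}.png"), frame)  # raw frame
            samples.append(frame)
        idx += 1
    cap.release()
    if samples:
        # A per-pixel median over sampled frames gives a simple static background.
        background = np.median(np.stack(samples), axis=0).astype(np.uint8)
        cv2.imwrite(str(out / "background.png"), background)

extract_frames_and_background("demo_social.avi", "./datasets/raw_video_images")
```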

### 4. Label and train
The fourth step is to label frames and train models. In this part, you can label frames and train models in parallel, which saves running time (see the scheduling sketch at the end of this step).

GUI:
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/labeltrain_gui.png)

Output:
Well-trained video instance segmentation model based on [VisTR](https://github.c
Well-trained single animal pose estimation model.
Well-trained animal identification model based on [EfficientNet](https://github.com/lukemelas/EfficientNet-PyTorch).

Demo results:
Label mask frames calling [labelme](https://github.com/wkentaro/labelme):
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/label_frames.png)

Label pose frames calling [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut):
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/label_poses.png)

Training data generation based on [YOLACT++](https://github.com/dbolya/yolact):
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/virtual_data.png)
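
Working "in parallel" here simply means that labeling (an interactive job) and training (a long-running job) do not have to wait for each other. The sketch below is schematic: `label_frames_interactively` and `train_models` are placeholders for the GUI actions, not SBeA_tracker functions.

```python
# Schematic illustration of running labeling and training in parallel. The two
# worker functions are placeholders for the GUI actions, not SBeA_tracker's API.
from concurrent.futures import ThreadPoolExecutor
import time

def label_frames_interactively():
    time.sleep(1)  # stands in for a labelme / DeepLabCut labeling session
    return "labels ready"

def train_models():
    time.sleep(2)  # stands in for a long-running training job
    return "models trained"

with ThreadPoolExecutor(max_workers=2) as pool:
    labeling = pool.submit(label_frames_interactively)
    training = pool.submit(train_models)
    print(labeling.result(), "|", training.result())
```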


### 4. Evaluation (Optional)
This optional step evaluates the trained models; the evaluation results are saved in the evals folder of the workspace.

GUI:
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/labeltrain_gui.png)

Output:
Well-trained video instance segmentation model based on [VisTR](https://github.c
Well-trained single animal pose estimation model.
Well-trained animal identification model based on [EfficientNet](https://github.com/lukemelas/EfficientNet-PyTorch).

Demo results:
Label mask frames calling [labelme](https://github.com/wkentaro/labelme):
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/label_frames.png)

Label pose frames calling [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut):
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/label_poses.png)

Training data generation based on [YOLACT++](https://github.com/dbolya/yolact):
![image](https://github.com/YNCris/SBeA_release/blob/main/demo/virtual_data.png)
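
The evaluation code itself is not shown in this excerpt. As a rough illustration of what a segmentation evaluation can look like, the sketch below computes the intersection-over-union between a predicted and a ground-truth mask; this is a generic metric and not necessarily the one SBeA_tracker writes to the evals folder.

```python
# Generic mask IoU as an illustration of model evaluation; not necessarily the
# metric stored in the evals folder.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, gt).sum() / union)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool);   gt[15:45, 15:45] = True
print(f"IoU = {mask_iou(pred, gt):.3f}")
```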


### 5. Predict
The fifth step is to predict 3D poses with identities for new videos.
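
The output format of the prediction step is not shown in this excerpt. Assuming the predicted 3D poses with identities can be loaded as an array of shape (frames, animals, keypoints, 3), a minimal visualization sketch could look like this; the array shape and the random placeholder data are assumptions.

```python
# Minimal visualization sketch for predicted 3D poses with identities, assuming a
# (frames, animals, keypoints, 3) array; the actual output format may differ.
import numpy as np
import matplotlib.pyplot as plt

poses = np.random.rand(100, 2, 16, 3)  # placeholder for loaded predictions

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for animal, color in zip(range(poses.shape[1]), ("tab:blue", "tab:orange")):
    pts = poses[0, animal]  # keypoints of one animal in the first frame
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], color=color, label=f"animal {animal + 1}")
ax.legend()
plt.show()
```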
