About verification, the accuracy is 0 #78
I'm having the same problem now. Did you ever fix it?
I am also having the same problem. [08/25 12:39:36] d2.engine.defaults INFO: Evaluation results for Clipart1k_test in csv format: when I run eval-only I get 0 AP, and the teacher AP is zero. Any fixes?
That is the burn-in stage, where only the student model is trained with labeled data. You can find the setting in the `__init__` function of the trainer.
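For intuition, here is a minimal, self-contained sketch (not the repo's actual code; all names like `burn_up_step` and `ema_update` are illustrative) of how Mean-Teacher-style training typically handles burn-in: before the burn-in step ends, only the student is updated, and the teacher keeps its untrained initialization; at the end of burn-in the student weights are copied into the teacher, which is then maintained by EMA. Evaluating the teacher before that copy happens gives ~0 AP because its weights are still effectively random.

```python
def ema_update(teacher, student, alpha=0.9996):
    """Exponential moving average of weights (floats stand in for tensors)."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def train(num_steps, burn_up_step):
    student = {"w": 0.0}
    teacher = {"w": -1.0}  # stand-in for an untrained / random init
    for step in range(num_steps):
        student["w"] += 0.1          # stand-in for a supervised gradient step
        if step == burn_up_step:
            teacher = dict(student)  # end of burn-in: copy student -> teacher
        elif step > burn_up_step:
            teacher = ema_update(teacher, student)
    return student, teacher
```

If training stops (or a checkpoint is saved) before `burn_up_step`, the teacher in that checkpoint was never updated, which would explain a teacher AP of zero at eval time.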
Hello, when I run the evaluation command, the accuracy is 0, but during training the periodic evaluation does produce scores. What could be the problem?
command: python train_net.py --eval-only --num-gpus 8 --config configs/Faster-RCNN/coco-standard/faster_rcnn_R_50_FPN_ut2_sup10_run0.yaml --dist-url tcp://127.0.0.1:50158 MODEL.WEIGHTS output/model_0164999.pth SOLVER.IMG_PER_BATCH_LABEL 8 SOLVER.IMG_PER_BATCH_UNLABEL 8
outputs:
[07/25 10:04:27 d2.evaluation.evaluator]: Inference done 27/625. Dataloading: 0.0290 s/iter. Inference: 0.1645 s/iter. Eval: 0.0001 s/iter. Total: 0.1937 s/iter. ETA=0:01:55
[07/25 10:04:32 d2.evaluation.evaluator]: Inference done 50/625. Dataloading: 0.0222 s/iter. Inference: 0.1849 s/iter. Eval: 0.0001 s/iter. Total: 0.2073 s/iter. ETA=0:01:59
[07/25 10:04:37 d2.evaluation.evaluator]: Inference done 75/625. Dataloading: 0.0198 s/iter. Inference: 0.1888 s/iter. Eval: 0.0001 s/iter. Total: 0.2094 s/iter. ETA=0:01:55
[07/25 10:04:42 d2.evaluation.evaluator]: Inference done 97/625. Dataloading: 0.0200 s/iter. Inference: 0.1936 s/iter. Eval: 0.0001 s/iter. Total: 0.2142 s/iter. ETA=0:01:53
[07/25 10:04:47 d2.evaluation.evaluator]: Inference done 123/625. Dataloading: 0.0206 s/iter. Inference: 0.1893 s/iter. Eval: 0.0001 s/iter. Total: 0.2105 s/iter. ETA=0:01:45
[07/25 10:04:52 d2.evaluation.evaluator]: Inference done 150/625. Dataloading: 0.0207 s/iter. Inference: 0.1850 s/iter. Eval: 0.0001 s/iter. Total: 0.2063 s/iter. ETA=0:01:37
[07/25 10:04:57 d2.evaluation.evaluator]: Inference done 178/625. Dataloading: 0.0200 s/iter. Inference: 0.1815 s/iter. Eval: 0.0002 s/iter. Total: 0.2026 s/iter. ETA=0:01:30
[07/25 10:05:02 d2.evaluation.evaluator]: Inference done 202/625. Dataloading: 0.0203 s/iter. Inference: 0.1817 s/iter. Eval: 0.0004 s/iter. Total: 0.2033 s/iter. ETA=0:01:25
[07/25 10:05:08 d2.evaluation.evaluator]: Inference done 229/625. Dataloading: 0.0209 s/iter. Inference: 0.1794 s/iter. Eval: 0.0005 s/iter. Total: 0.2016 s/iter. ETA=0:01:19
[07/25 10:05:13 d2.evaluation.evaluator]: Inference done 251/625. Dataloading: 0.0208 s/iter. Inference: 0.1830 s/iter. Eval: 0.0004 s/iter. Total: 0.2050 s/iter. ETA=0:01:16
[07/25 10:05:18 d2.evaluation.evaluator]: Inference done 276/625. Dataloading: 0.0204 s/iter. Inference: 0.1834 s/iter. Eval: 0.0005 s/iter. Total: 0.2051 s/iter. ETA=0:01:11
[07/25 10:05:23 d2.evaluation.evaluator]: Inference done 299/625. Dataloading: 0.0207 s/iter. Inference: 0.1843 s/iter. Eval: 0.0005 s/iter. Total: 0.2063 s/iter. ETA=0:01:07
[07/25 10:05:28 d2.evaluation.evaluator]: Inference done 326/625. Dataloading: 0.0211 s/iter. Inference: 0.1827 s/iter. Eval: 0.0005 s/iter. Total: 0.2050 s/iter. ETA=0:01:01
[07/25 10:05:33 d2.evaluation.evaluator]: Inference done 350/625. Dataloading: 0.0213 s/iter. Inference: 0.1836 s/iter. Eval: 0.0005 s/iter. Total: 0.2060 s/iter. ETA=0:00:56
[07/25 10:05:39 d2.evaluation.evaluator]: Inference done 374/625. Dataloading: 0.0212 s/iter. Inference: 0.1843 s/iter. Eval: 0.0005 s/iter. Total: 0.2066 s/iter. ETA=0:00:51
[07/25 10:05:44 d2.evaluation.evaluator]: Inference done 398/625. Dataloading: 0.0212 s/iter. Inference: 0.1847 s/iter. Eval: 0.0004 s/iter. Total: 0.2070 s/iter. ETA=0:00:46
[07/25 10:05:49 d2.evaluation.evaluator]: Inference done 425/625. Dataloading: 0.0214 s/iter. Inference: 0.1835 s/iter. Eval: 0.0004 s/iter. Total: 0.2060 s/iter. ETA=0:00:41
[07/25 10:05:54 d2.evaluation.evaluator]: Inference done 455/625. Dataloading: 0.0213 s/iter. Inference: 0.1811 s/iter. Eval: 0.0004 s/iter. Total: 0.2035 s/iter. ETA=0:00:34
[07/25 10:05:59 d2.evaluation.evaluator]: Inference done 485/625. Dataloading: 0.0216 s/iter. Inference: 0.1789 s/iter. Eval: 0.0004 s/iter. Total: 0.2015 s/iter. ETA=0:00:28
[07/25 10:06:04 d2.evaluation.evaluator]: Inference done 509/625. Dataloading: 0.0216 s/iter. Inference: 0.1793 s/iter. Eval: 0.0004 s/iter. Total: 0.2018 s/iter. ETA=0:00:23
[07/25 10:06:09 d2.evaluation.evaluator]: Inference done 529/625. Dataloading: 0.0219 s/iter. Inference: 0.1809 s/iter. Eval: 0.0004 s/iter. Total: 0.2038 s/iter. ETA=0:00:19
[07/25 10:06:14 d2.evaluation.evaluator]: Inference done 554/625. Dataloading: 0.0220 s/iter. Inference: 0.1810 s/iter. Eval: 0.0004 s/iter. Total: 0.2040 s/iter. ETA=0:00:14
[07/25 10:06:20 d2.evaluation.evaluator]: Inference done 577/625. Dataloading: 0.0222 s/iter. Inference: 0.1816 s/iter. Eval: 0.0004 s/iter. Total: 0.2047 s/iter. ETA=0:00:09
[07/25 10:06:25 d2.evaluation.evaluator]: Inference done 606/625. Dataloading: 0.0223 s/iter. Inference: 0.1802 s/iter. Eval: 0.0004 s/iter. Total: 0.2034 s/iter. ETA=0:00:03
[07/25 10:06:27 d2.evaluation.evaluator]: Total inference time: 0:02:05.097037 (0.201769 s / iter per device, on 8 devices)
[07/25 10:06:27 d2.evaluation.evaluator]: Total inference pure compute time: 0:01:49 (0.177313 s / iter per device, on 8 devices)
[07/25 10:06:33 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[07/25 10:06:33 d2.evaluation.coco_evaluation]: Saving results to ./output/inference/coco_instances_results.json
[07/25 10:06:33 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.05s)
creating index...
index created!
[07/25 10:06:34 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
[07/25 10:06:49 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 15.41 seconds.
[07/25 10:06:49 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[07/25 10:06:51 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 1.58 seconds.
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
[07/25 10:06:51 d2.evaluation.coco_evaluation]: Evaluation results for bbox: