
What are the metrics on COCO dataset? #8

Open
anuar12 opened this issue Nov 26, 2019 · 1 comment

anuar12 commented Nov 26, 2019

It would be great if you could share the metrics on COCO.
Thank you!

anuar12 commented Nov 28, 2019

I evaluated the provided DenseNet-121 model and got:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.292
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.551
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.274
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.213
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.412
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.364
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.593
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.363
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.227
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.553
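
For anyone reproducing this, here is a minimal sketch of the standard pycocotools keypoint evaluation that prints a table like the one above (the annotation and results file paths are placeholders for my local files):

```python
# Minimal sketch of the standard COCO keypoint evaluation with pycocotools.
# Paths are placeholders; results.json must be in the COCO keypoint results
# format: [{"image_id", "category_id", "keypoints", "score"}, ...]
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/person_keypoints_val2017.json')  # ground truth
coco_dt = coco_gt.loadRes('results.json')                    # model detections

coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR lines shown above (maxDets=20)
```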

which is lower than expected (CMU-Pose was getting 0.6+ mAP).
Could you suggest what improvements I should make to increase mAP?
One thing I noticed is that I can also play with the ParseObjects parameters.
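
For example, lowering the post-processing thresholds should keep weaker peaks and links, trading precision for recall. A hypothetical sketch, assuming ParseObjects exposes cmap_threshold and link_threshold keyword arguments (names taken from my reading of trt_pose's parse_objects.py; please verify against the source):

```python
# Hypothetical sketch of tuning ParseObjects post-processing thresholds.
# cmap_threshold / link_threshold are assumed keyword names; check
# trt_pose/parse_objects.py for the actual signature.
import json
import trt_pose.coco
from trt_pose.parse_objects import ParseObjects

with open('human_pose.json', 'r') as f:  # topology spec shipped with trt_pose
    human_pose = json.load(f)
topology = trt_pose.coco.coco_category_to_topology(human_pose)

# Lower thresholds -> more (weaker) detections: possibly higher AR,
# possibly lower precision. Worth sweeping when chasing mAP.
parse_objects = ParseObjects(topology,
                             cmap_threshold=0.05,  # assumed kwarg: part confidence cutoff
                             link_threshold=0.05)  # assumed kwarg: limb link cutoff

# counts, objects, peaks = parse_objects(cmap, paf)  # cmap/paf from the network
```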
