Can't reproduce the reported accuracies in Inception-V4 and resnext101-64x4d. #52
Comments
I think you should pay attention to image pre-processing: RGB vs. BGR channel order.
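For reference, a minimal sketch of forcing BGR channel order with pycaffe's `Transformer`; the `data` blob name and the 1x3x299x299 shape are placeholders, not taken from this repo's deploy files:

```python
import caffe

# caffe.io.load_image returns RGB floats in [0, 1]; most Caffe models
# (trained on OpenCV-decoded, i.e. BGR, data) expect BGR in [0, 255].
img = caffe.io.load_image('example.jpg')  # H x W x 3, RGB

# 'data' and the input shape are placeholders; read them from the deploy prototxt.
transformer = caffe.io.Transformer({'data': (1, 3, 299, 299)})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255)           # [0, 1] -> [0, 255]
blob = transformer.preprocess('data', img)       # feed into net.blobs['data']
```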
I am seeing the same problem as @latifisalar: I cannot run Inception-V4 and get the reported accuracy on the ILSVRC2012 validation set. I am using a mean value of 128 and a crop size of 395.
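Concretely, a sketch of that preprocessing: the scalar mean of 128 and the crop size are the values reported above, while `base_size` is a placeholder and the short-side-resize-then-center-crop convention is an assumption, not verified against evaluation_cls.py:

```python
import numpy as np
from PIL import Image

def center_crop_preprocess(path, base_size, crop_size, mean=128.0):
    """Resize the short side to base_size, center-crop crop_size x crop_size,
    convert RGB -> BGR, and subtract a scalar mean (128, as reported above)."""
    img = Image.open(path).convert('RGB')
    w, h = img.size
    scale = float(base_size) / min(w, h)
    img = img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)
    w, h = img.size
    left, top = (w - crop_size) // 2, (h - crop_size) // 2
    img = img.crop((left, top, left + crop_size, top + crop_size))
    arr = np.asarray(img, dtype=np.float32)[:, :, ::-1]  # RGB -> BGR
    arr = arr - mean                                     # scalar mean subtraction
    return arr.transpose(2, 0, 1)[np.newaxis]            # 1 x 3 x crop x crop
```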
Never mind, I figured it out: it is in evaluation_cls.py.
I am using the classification script with the correct configuration parameters on Inception-V3 and V4, and I get really bad results. Even single-image inference gives wrong (misclassified) results. The same script works well with other networks like VGG, but not with the pre-trained Inception-V3/V4 models. Do you know how to fix that?
My problem was the label order. The order of the class labels in the Inception networks is different from the order used by the VGG network.
So, which .txt file did you use for the annotations? This is a reference to the file I am using:
The one you have is for VGG.
I still get wrong results with that annotation list. I have tried a lot of annotation lists that I found online (including the one you gave me), and it seems that none of them gives the correct results. I think this has to do with the model.
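One way to settle whether the annotation list or the model is at fault: classify a single image of a known class and inspect the top-5 predictions under each candidate label file. A rough sketch with placeholder paths and sizes, assuming the output blob is named `prob` (check the deploy prototxt) and reusing `center_crop_preprocess` from above:

```python
import caffe

# Placeholder paths and sizes; substitute the repo's actual files and values.
net = caffe.Net('deploy_inception-v4.prototxt', 'inception-v4.caffemodel', caffe.TEST)
labels = [line.strip() for line in open('synset_words.txt')]
base_size, crop_size = 320, 299

net.blobs['data'].data[...] = center_crop_preprocess('cat.jpg', base_size, crop_size)
probs = net.forward()['prob'].flatten()
for i in probs.argsort()[::-1][:5]:
    print('%.4f  %s' % (probs[i], labels[i]))
# If the top-5 labels are unrelated to the image under one label file but
# sensible under another, the annotation ordering is the culprit, not the model.
```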
Hi,
I'm facing problems with the Caffe models of Inception-V4 and resnext101-64x4d. I cannot reproduce the reported accuracies; I only get around 0.1%, which is just a random guess. I've tried both my own Python script (derived from the Caffe classification example) and the provided script (I'm aware of crop_size and base_size, and I change them accordingly).
I've downloaded the validation images from ImageNet and I am using their own val.txt, which is sorted, unlike yours.
Do you have any idea what the problem could be?
Thanks
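For cross-checking against the provided script, a minimal top-1/top-5 loop over the validation list, reusing the `net` and `center_crop_preprocess` sketches above and assuming each val.txt line is `<filename> <label>` with labels in the order the model was trained on (exactly what this issue calls into question):

```python
# Reuses net, base_size, crop_size, and center_crop_preprocess defined above.
correct1 = correct5 = total = 0
for line in open('val.txt'):
    fname, label = line.split()
    label = int(label)
    net.blobs['data'].data[...] = center_crop_preprocess(fname, base_size, crop_size)
    probs = net.forward()['prob'].flatten()
    top5 = probs.argsort()[::-1][:5]
    correct1 += int(top5[0] == label)
    correct5 += int(label in top5)
    total += 1
print('top-1: %.4f  top-5: %.4f' % (correct1 / float(total), correct5 / float(total)))
```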