
Can't reproduce the reported accuracies in Inception-V4 and resnext101-64x4d. #52

Open
latifisalar opened this issue Oct 13, 2017 · 8 comments

Comments

@latifisalar

Hi,
I'm having problems with the Caffe models of Inception-V4 and resnext101-64x4d. I can't reproduce the reported accuracies; I only get around 0.1%, which is essentially a random guess. I've tried both my own Python script (derived from the Caffe classification example) and the provided script (I'm aware of crop_size and base_size and change them accordingly).
I've downloaded the validation images from ImageNet and I'm using their own val.txt, which is sorted, unlike yours.
Do you have any idea what can be the problem?
Thanks
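
For reference, a minimal top-1 evaluation loop in the spirit of the Caffe classification example looks roughly like this (the file paths, the 'data'/'prob' blob names and the pre-processing values are placeholders for illustration, not the actual script):

```python
import caffe

# Minimal top-1 accuracy loop over an ImageNet val list. All paths, blob names
# and pre-processing numbers below are placeholders, not code from this repo.
caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

crop = net.blobs['data'].data.shape[-1]  # input size taken from the deploy file
correct = total = 0
with open('val.txt') as f:
    for line in f:
        name, label = line.split()
        img = caffe.io.load_image('val_images/' + name) * 255.0  # RGB, [0, 255]
        img = caffe.io.resize_image(img, (crop, crop))
        img = img[:, :, ::-1]                # RGB -> BGR (model dependent)
        img = (img - 128.0) * (1.0 / 128.0)  # placeholder mean/std normalisation
        net.blobs['data'].data[...] = img.transpose(2, 0, 1)  # HWC -> CHW, single image
        prob = net.forward()['prob'].squeeze()
        correct += int(prob.argmax() == int(label))
        total += 1

print('top-1 accuracy: %.4f' % (correct / float(total)))
```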

@soeaver
Owner

soeaver commented Oct 20, 2017

I think you should pay attention to the image pre-processing:

The channel order (RGB or BGR);
The mean and std values.
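
For example, something along these lines as a first check (the BGR order, the mean of 128 and the 1/128 scale here are only illustrative assumptions; take the exact values for each model from the provided evaluation script):

```python
import numpy as np

def normalize(img_rgb, mean=128.0, scale=1.0 / 128.0):
    """img_rgb: HxWx3 float array in RGB order with values in [0, 255]."""
    img = np.asarray(img_rgb, dtype=np.float32)[:, :, ::-1]  # RGB -> BGR
    return (img - mean) * scale  # subtract mean, multiply by 1/std
```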

@ashishfarmer

I am seeing the same problem as @latifisalar: I am not able to run Inception-V4 and get the expected accuracy on the ILSVRC2012 val set. I am using a mean value of 128 and a crop size of 395.
@soeaver Could you share the transform_param that you used for the test?

@ashishfarmer

Never mind, I figured it out: it is in evaluation_cls.py.
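
In case it saves someone a look, the per-model transform settings an evaluation script keeps together look roughly like this; all numbers here are placeholders, the authoritative values are in evaluation_cls.py:

```python
# Per-model transform settings. Placeholder values only; see evaluation_cls.py
# in this repository for the real ones.
TRANSFORMS = {
    'inception-v4': dict(base_size=395, crop_size=320,
                         mean=128.0, scale=1.0 / 128.0),
    'resnext101-64x4d': dict(base_size=256, crop_size=224,
                             mean=(103.52, 116.28, 123.675), scale=0.017),
}
```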

@kmonachopoulos

kmonachopoulos commented Jan 25, 2018

I am using the classification script with the correct configuration parameters on Inception-v3 and v4, and I get really bad results. Even single-image inference gives wrong results (misclassified). The same script works well with other networks such as VGG, but not with the pre-trained Inception-v3/v4. Do you know how to fix that?

@latifisalar
Author

My problem was the label ordering. The order of the class labels in the Inception networks is different from the order used by the VGG network.
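
If you want to keep a sorted val.txt, one way is to remap its labels into the ordering the model was trained with, roughly like this (the two file names are assumptions; each file lists one WordNet ID per line in that network's class-index order):

```python
def load_synsets(path):
    # One WordNet ID (e.g. n01440764) per line, in the network's class-index order.
    with open(path) as f:
        return [line.split()[0] for line in f if line.strip()]

sorted_order = load_synsets('sorted_synsets.txt')        # order used by the sorted val.txt
inception_order = load_synsets('inception_synsets.txt')  # order the Inception model predicts
to_inception = {wnid: i for i, wnid in enumerate(inception_order)}

# Label k from the sorted val.txt corresponds to remap[k] for the Inception model.
remap = [to_inception[wnid] for wnid in sorted_order]
```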

@kmonachopoulos

So, which .txt file did you use for the annotations?

For reference, this is the file I am using:

1: 'goldfish, Carassius auratus',
 2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',
 3: 'tiger shark, Galeocerdo cuvieri',
 4: 'hammerhead, hammerhead shark',
 5: 'electric ray, crampfish, numbfish, torpedo',
 6: 'stingray',
 7: 'cock',
 8: 'hen',

@latifisalar
Author

latifisalar commented Jan 25, 2018

The one you have is for VGG.
I've attached the synsets for the Inception networks.
synsets.txt
Update: New validation file for Inception:
inception_val.txt

@kmonachopoulos

I still get wrong results with that annotation list. I have tried a lot of annotation lists that I found online (including the one you gave me), and it seems that none of them give the correct results. I think this has to do with the model.
