
Re-freezing the model decreased accuracy with tf-nightly 1.13 #40

Closed
sammymx opened this issue Mar 7, 2019 · 4 comments

Comments

@sammymx

sammymx commented Mar 7, 2019

Hi Victor,

I was using the export_inference_graph.py script from the newest version of the TensorFlow Object Detection API to generate frozen_inference_graph.pb (TensorFlow version: tf-nightly 1.13..., Ubuntu 18.04).
But the accuracy was very poor for both ssd_mobile_v1 and ssdlite_mobile_v2.
The confidence scores were inaccurate and the bounding boxes were not where they should be.
(A few boxes were in the right place, but their sizes were off, and some incorrect boxes had higher confidence scores than the real hand boxes.)
The results were worse than those produced by the frozen_inference_graph.pb in the hand_inference_graph folder.
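To make "boxes not where they should be" quantitative, one can compare the exported model's detections against the reference model's using intersection-over-union. A minimal sketch, assuming the (ymin, xmin, ymax, xmax) box convention used by the TF Object Detection API:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (ymin, xmin, ymax, xmax)."""
    ymin = max(box_a[0], box_b[0])
    xmin = max(box_a[1], box_b[1])
    ymax = min(box_a[2], box_b[2])
    xmax = min(box_a[3], box_b[3])
    inter = max(0.0, ymax - ymin) * max(0.0, xmax - xmin)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A box matching the ground truth exactly scores 1.0; a shifted box scores much lower.
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # → 1.0
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.14
```

Running both models on the same images and comparing per-box IoU would separate "boxes drifted after export" from "scores changed after export".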

Can you please help me analyze the possible problems?
A weird phenomenon is that when running export_inference_graph.py to generate the frozen .pb file, a group of files was generated at the same time, including pipeline.config, model.ckpt.data-00000-of-00001, and its index and meta files, which differ from the original .config and .ckpt files.
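Those regenerated files are the normal output of the export step. A small sketch can confirm an export directory is complete; the file names below are assumed from the TF 1.x Object Detection API's usual export layout, and a temporary directory stands in for a real export for illustration:

```python
import os
import tempfile

# Artifacts the TF 1.x Object Detection API export step normally produces
# (names assumed from the usual export layout).
EXPECTED = [
    "frozen_inference_graph.pb",
    "pipeline.config",
    "model.ckpt.data-00000-of-00001",
    "model.ckpt.index",
    "model.ckpt.meta",
]

def missing_export_files(export_dir):
    """Return the expected export artifacts absent from export_dir."""
    return [name for name in EXPECTED
            if not os.path.exists(os.path.join(export_dir, name))]

# Simulate a freshly exported directory:
with tempfile.TemporaryDirectory() as d:
    for name in EXPECTED:
        open(os.path.join(d, name), "w").close()
    print(missing_export_files(d))  # → []
```

An incomplete export (e.g. a missing .index file) is a common cause of silently wrong checkpoints being loaded.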

@victordibia
Owner

victordibia commented Mar 7, 2019 via email

Ha! I haven't tried exporting with the new nightly! Is it possible that what you have installed is the TensorFlow 2.0 API? I know there are changes in the new API that pretty much break things for TF 1.x. I'd begin by searching for known issues with the object detection API when used with the 1.13 nightly.

@sammymx
Author

sammymx commented Mar 8, 2019

Thank you very much.
Could you confirm that the frozen .pb in the folder hand_inference_graph was generated from ./model-checkpoint/ssdmobilenetv1?

I found two facts:

  1. ssdlitemobilenetv2 performs worse than ssdmobilenetv1. The reason might be that num_steps for ssdlitemobilenetv2 was only 20000, far less than the 200000 used for ssdmobilenetv1.
  2. The formats of their .config files differ. The ssdlitemobilenetv2 config has the same format as the model downloaded from the TF model zoo. Also, I got the same results whether I used the frozen .pb shipped in ssdlitemobilenetv2 or the one I generated with the TF export script.

In any case, I just want to confirm that a difference between TF versions could cause a difference in accuracy when converting the models.
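Victor's suggestion above, that a 2.0 nightly may have slipped in, is quick to check from the version string. A minimal sketch, shown on plain version strings so it runs without TensorFlow installed; with TensorFlow available you would inspect `tf.__version__` the same way:

```python
def major_version(version_string):
    """Return the major version from a TF version string like '1.13.0-dev20190125'."""
    return int(version_string.split(".")[0])

# Example nightly version strings (dates here are illustrative):
for v in ["1.13.0-dev20190125", "2.0.0-dev20190307"]:
    print(v, "->", "TF 1.x" if major_version(v) == 1 else "TF 2.x")

# With TensorFlow installed:
#   import tensorflow as tf
#   print(tf.__version__)
```

If the major version is 2, the Object Detection API code of that era (written for TF 1.x) would be the prime suspect for the accuracy drop.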

@victordibia
Owner

Hi @sammymx

Yes, the frozen model was generated from model-checkpoint v1.
I tested v1 more carefully (a while ago, though), so I'd suggest you focus on that version. The v2 checkpoints were created more recently, using the recent version of the object detection API.

Now, as for accuracy: I used fewer training steps mainly because observing mAP/loss showed there was not much improvement after that point. While I have not exported the v2 model to a frozen graph and tested it in Python, I have converted it to the tensorflow.js web model format and can confirm it works as expected. See the demo here.

Perhaps try installing an older version of TensorFlow, e.g. 1.10, export the graph, and try again?


@victordibia
Owner

Closing; feel free to reopen if you're still having issues.
