Re-freezing the model decreased the accuracy with tf-nightly 1.13 #40
Comments
Ha!
I haven't tried exporting with the new nightly!
Is it possible that what you have installed is the TensorFlow 2.0 API? I know there are changes to the new API that pretty much break things for TF 1.x. I'd begin by searching for known side effects on the Object Detection API when used with the 1.13 nightly.
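As a quick sanity check, a minimal snippet like this prints the version that is actually being imported (a nightly 2.0 preview can sneak in even when you think you are on 1.x):

```python
import tensorflow as tf

# tf-nightly 1.x builds report versions like '1.13.0.dev20190307';
# a 2.0 preview would report something like '2.0.0-dev...'.
print(tf.__version__)
print(tf.GIT_VERSION)
```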
Thank you very much. I found two facts:
Anyway, I just want to make clear that the difference between TF versions could account for the difference in accuracy during the conversion of the models.
Hi @sammymx. Yes, the frozen model was generated from model-checkpoint v1. As for accuracy, I used fewer training steps mainly because mAP/loss showed there was not much improvement after that point. While I have not exported the v2 model to a frozen graph and tested it in Python, I have converted it to the tensorflow.js webmodel format and can confirm it works as expected. See the demo here. Perhaps try installing an older version of TensorFlow, e.g. 1.10, export the graph, and try again?
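For reference, a rough sketch of re-exporting after pinning TensorFlow (e.g. `pip install tensorflow==1.10` first). This calls the exporter programmatically, mirroring what export_inference_graph.py does internally; it assumes the object_detection package from the TF models repo is on your PYTHONPATH, the exact signature may vary between versions, and all paths below are placeholders for your own config and checkpoint:

```python
import tensorflow as tf
from google.protobuf import text_format
from object_detection import exporter
from object_detection.protos import pipeline_pb2

# Load the training pipeline config (placeholder path).
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.gfile.GFile('path/to/ssd_mobilenet_v1.config', 'r') as f:
    text_format.Merge(f.read(), pipeline_config)

# Freeze the checkpoint into exported_graph/frozen_inference_graph.pb.
exporter.export_inference_graph(
    input_type='image_tensor',
    pipeline_config=pipeline_config,
    trained_checkpoint_prefix='path/to/model.ckpt-XXXX',  # placeholder step number
    output_directory='exported_graph')
```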
Closing.
Hi Victor,
I was using export_inference_graph.py from the newest version of the TensorFlow Object Detection API to generate the frozen_inference_graph.pb (TensorFlow version: tf-nightly 1.13..., Ubuntu 18.04).
But the accuracy was very bad for both ssd_mobile_v1 and ssdlite_mobile_v2.
The confidences were inaccurate and the bounding boxes were not where they should be.
(A few boxes were in the right place, but their sizes were not accurate, and some confidences of the incorrect boxes were higher than those of the real hand boxes.)
The results were worse than those produced by the frozen_inference_graph.pb in the hand_inference_graph folder.
Can you please help me analyse the possible problems?
A weird phenomenon is that when running export_inference_graph.py to generate the frozen .pb file, a group of files was generated at the same time, including pipeline.config, model.ckpt.data-00000-of-00001, and its index and meta files, which are different from the original .config and .ckpt files.
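One way to narrow a regression like this down is to run the same image through both the original hand_inference_graph/frozen_inference_graph.pb and the re-exported graph and compare the raw outputs. A minimal TF 1.x sketch, assuming the standard Object Detection API tensor names (`image_tensor`, `detection_boxes`, `detection_scores`) and placeholder paths; the random image below is only a stand-in for a real test frame:

```python
import numpy as np
import tensorflow as tf

def run_frozen_graph(pb_path, image):
    """Load a frozen graph and run one detection pass on `image` (HxWx3 uint8)."""
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_path, 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        boxes, scores = sess.run(
            ['detection_boxes:0', 'detection_scores:0'],
            feed_dict={'image_tensor:0': image[np.newaxis, ...]})
    return boxes[0], scores[0]

# Stand-in input; replace with a real hand image for a meaningful comparison.
image = np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8)
old_boxes, old_scores = run_frozen_graph('hand_inference_graph/frozen_inference_graph.pb', image)
new_boxes, new_scores = run_frozen_graph('exported_graph/frozen_inference_graph.pb', image)
print('top-5 scores (original): ', old_scores[:5])
print('top-5 scores (re-frozen):', new_scores[:5])
```

If the two graphs diverge on identical input, that points at the export step (and the TF version used for it) rather than at the inference code.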