Different results #4
Hello,
The default learning rate in our implementation is different from what we use in our paper. To me it looks like it's probably too low; I'd try increasing it by an order of magnitude.
Thanks,
Jordan
On Wed, Jan 6, 2021 at 10:56 AM Vu Trong Hieu wrote:
Hi authors,
It's great that you've published the source code, but the default hyper-parameters don't seem to be correctly tuned, so could you please share the hyper-parameters used to produce the graph below (taken from your paper)?
I've also attached a graph showing the results I got with the hyper-parameters from your source code, using the command
python run.py --model vgg --nQuery 100 --data CIFAR10 --alg badge
Reported in the paper:
[image: image]
<https://user-images.githubusercontent.com/18468247/103787805-a8979e00-5070-11eb-8752-aa00abfa6f89.png>
Results from running the default code (the x-axis is the number of data points divided by 100, the query batch size). The gap is nearly 10% even with more data:
[image: image]
<https://user-images.githubusercontent.com/18468247/103787847-b9e0aa80-5070-11eb-88a6-d1ce00cc4f22.png>
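As a practical note, here is a minimal sketch of how one might try Jordan's suggestion of raising the learning rate by an order of magnitude. The --lr flag name and the candidate values are assumptions, not something confirmed in this thread; check python run.py --help for the repository's actual argument names and defaults.

# Minimal sketch: rerun the default BADGE command while sweeping the learning
# rate upward by an order of magnitude at a time.
# NOTE: the --lr flag and the values below are assumptions; check
# `python run.py --help` for the actual argument names and defaults.
import subprocess

base_cmd = [
    "python", "run.py",
    "--model", "vgg",
    "--nQuery", "100",
    "--data", "CIFAR10",
    "--alg", "badge",
]

for lr in ["0.0001", "0.001", "0.01"]:  # hypothetical: default, 10x, 100x
    print(f"Running with learning rate {lr}")
    subprocess.run(base_cmd + ["--lr", lr], check=True)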
Thank you for replying,
I assume you're talking about the learning rate for training the task model, but as I understand, based on the code here https://github.com/JordanAsh/badge/blob/master/query_strategies/strategy.py#L63, the model is trained until it reaches 99% accuracy on the training set, so it will fit the training data regardless of the learning rate (I also checked the logs). Should I change something else?
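For context on the stopping criterion being described, here is a rough sketch of a train-until-99%-training-accuracy loop. It is an illustration under the assumption of a standard PyTorch setup, not the actual code behind the strategy.py link above.

import torch
import torch.nn.functional as F

def train_to_threshold(model, loader, optimizer, threshold=0.99, max_epochs=200):
    # Keep training full epochs until training accuracy reaches the threshold.
    model.train()
    for epoch in range(max_epochs):
        correct, total = 0, 0
        for x, y in loader:
            optimizer.zero_grad()
            out = model(x)
            F.cross_entropy(out, y).backward()
            optimizer.step()
            correct += (out.argmax(dim=1) == y).sum().item()
            total += y.size(0)
        if correct / total >= threshold:  # stop once the training set is fit
            break
    return model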
I'm not sure what you mean by task model. Even though different learning
rates may all achieve perfect training accuracy, they often produce
different test accuracies.
By "task model" I mean the classifier (the learner). I agree that different learning rates will eventually give different accuracies on the test set, but the variance should not be that big (10%).