In a Chinese text corpus, we can generate adversarial examples by random insertion (RI), random deletion (RD), or synonym replacement (SR). I am wondering whether the EDA method makes a model such as a text classifier vulnerable to adversarial examples generated by the same RI, RD, or SR operations that EDA itself uses. A sketch of the three operations I mean is below.
Can you explain this? In my experiments, EDA caused a decrease in performance.
Thank you very much!
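For reference, here is a minimal sketch of the three operations, assuming a pre-tokenized sentence and a synonym dictionary mapping words to lists of synonyms (all names here are illustrative, not the repo's actual API):

```python
import random

def synonym_replacement(tokens, synonyms, n=1):
    """SR: replace up to n tokens that have an entry in the synonym dict."""
    out = tokens[:]
    candidates = [i for i, t in enumerate(out) if t in synonyms]
    random.shuffle(candidates)
    for i in candidates[:n]:
        out[i] = random.choice(synonyms[out[i]])
    return out

def random_insertion(tokens, synonyms, n=1):
    """RI: insert a synonym of a random token at a random position, n times."""
    out = tokens[:]
    for _ in range(n):
        candidates = [t for t in out if t in synonyms]
        if not candidates:
            break
        word = random.choice(synonyms[random.choice(candidates)])
        out.insert(random.randrange(len(out) + 1), word)
    return out

def random_deletion(tokens, p=0.1):
    """RD: drop each token independently with probability p; keep at least one."""
    out = [t for t in tokens if random.random() > p]
    return out if out else [random.choice(tokens)]
```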
I'm surprised to hear that you saw a decrease in performance. Which repository did you use?
What synonym dictionary are you using?
What's the size of the training set?
You can look at the t-SNE figure in our paper to see how augmented sentences relate to the original sentences. Generally, examples generated by EDA do not seem to change the ground-truth label.