
The face of the generated image is very similar to the face in the target image #14

Open
PapaMadeleine2022 opened this issue Nov 11, 2019 · 7 comments


@PapaMadeleine2022

Hello, I have a problem. As we know, given a source image A and a target image B, GANimation can produce a generated image C whose facial expression is similar to that of the target image. But there is a side effect: the generated face C also contains some content of the target face B, which is not what we want (that is, the face in the generated image C is very similar to the face in the target image B). What method is there to eliminate this side effect?

Can anyone give some advice? @donydchen
Thanks.

@donydchen
Owner

Hi, @IvyGongoogle , if you read the original paper and the implementation here carefully, you might notice that the target image B will NOT be directly fed into the network. Instead, only the Action Unit vector extracted from B will be sent into the network, which enables the network to capture the emotional feature from image B without being affected by the identity of B.
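To make that concrete, here is a toy sketch of the interface (placeholder code with made-up names, not the actual modules in this repo); the AU vector is simply broadcast spatially and concatenated with the source image, so B's pixels never reach the generator:

    import torch
    import torch.nn as nn

    # Toy stand-in for an AU-conditioned generator (NOT the real architecture):
    # the 17-dim AU vector from B is expanded to a spatial map and concatenated
    # with source image A, so only B's expression, never its identity, is used.
    class ToyAUGenerator(nn.Module):
        def __init__(self, au_dim=17):
            super().__init__()
            self.net = nn.Conv2d(3 + au_dim, 3, kernel_size=3, padding=1)

        def forward(self, img_a, aus_b):
            b, _, h, w = img_a.shape
            au_map = aus_b.view(b, -1, 1, 1).expand(b, aus_b.shape[1], h, w)
            return self.net(torch.cat([img_a, au_map], dim=1))

    img_a = torch.randn(1, 3, 128, 128)       # source face A
    aus_b = torch.rand(1, 17)                  # AU intensities extracted from B (e.g. by OpenFace)
    fake_c = ToyAUGenerator()(img_a, aus_b)    # image B itself is never fed in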

Besides, the paper also proposes a cycle consistency loss to force the generated face to maintain its identity. So I guess increasing the weight lambda_idt for the identity loss may also help, to some extent.
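Conceptually, the identity term just enters the total generator loss with that weight; a simplified illustration (not the exact loss code used in this repo) looks like:

    import torch.nn.functional as F

    lambda_idt = 10.0  # example value; increase it if identity from B seems to leak in

    def identity_loss(rec_a, img_a):
        # rec_a: face A reconstructed from the generated face using A's own AUs (cycle)
        return F.l1_loss(rec_a, img_a)

    # total_g_loss = adversarial_loss + lambda_au * au_regression_loss \
    #              + lambda_idt * identity_loss(rec_a, img_a)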

Given that B is not directly fed into the network and there is an identity loss function, to my knowledge, the generated face is not supposed to be similar to B. Would you mind posting some images here to illustrate the cases you have come across? That's quite interesting.

@PapaMadeleine2022
Author

PapaMadeleine2022 commented Nov 12, 2019

@donydchen Thanks for your reply. I have uploaded a bad case here. As we can see, the generated.gif has some brown color (which is the main color of B.jpg) around the mouth.
I use OpenFace to get the AUs of B:


face, confidence, AU01_r, AU02_r, AU04_r, AU05_r, AU06_r, AU07_r, AU09_r, AU10_r, AU12_r, AU14_r, AU15_r, AU17_r, AU20_r, AU23_r, AU25_r, AU26_r, AU45_r, AU01_c, AU02_c, AU04_c, AU05_c, AU06_c, AU07_c, AU09_c, AU10_c, AU12_c, AU14_c, AU15_c, AU17_c, AU20_c, AU23_c, AU25_c, AU26_c, AU28_c, AU45_c
0, 0.975, 0.21, 0.00, 1.09, 0.20, 0.00, 0.00, 0.00, 0.07, 0.00, 0.00, 0.64, 1.61, 0.00, 0.02, 0.53, 0.57, 0.00, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0
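
For reference, the 17 intensity (AU*_r) columns above can be read into a vector roughly like this (illustrative snippet; the CSV file name is just an example):

    import pandas as pd

    # Keep only the 17 AU intensity columns (AU01_r ... AU45_r) from the OpenFace output.
    df = pd.read_csv("B_openface.csv", skipinitialspace=True)  # example file name
    au_cols = [c for c in df.columns if c.endswith("_r")]
    aus_b = df[au_cols].iloc[0].to_numpy()  # 17-dim expression vector for B
    print(au_cols)
    print(aus_b)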

I use your Test script:

python main.py --mode test --data_root /xxx/ --batch_size 1 --ckpt_dir ckpts/emotionNet/ganimation/190327_160828/ --load_epoch 30 --save_test_gif

And I do not modify any other parameters or configs.

Thanks

@PapaMadeleine2022
Author

@donydchen any idea?

@donydchen
Owner

Hi @IvyGongoogle, most of the faces in the training datasets, either EmotionNet or CelebA, are Caucasian, while the input image in the case you provided looks more or less Asian. Deep learning models do suffer a lot from data distribution issues... I guess this is the main reason why your case fails.
An intuitive solution would be to re-train the model with an Asian emotional face dataset, although I'm not sure whether this kind of dataset exists or not.

@PapaMadeleine2022
Author

@donydchen Thanks for your advice.
I have added another bad case2 here. In this case the source image A is from the CelebA dataset, but the generated.gif is still bad. So it seems that GANimation can handle the Asian face, and this type of bad case is caused by the target B image?

@donydchen
Owner

Hi @IvyGongoogle, personally, I think the second example you provided could be considered a successful case to some extent. At least we can see that the input face gradually changes from Happy to Sad. As for those artefacts, they could probably be caused by limited training time or dataset size.
My personal advice is to retrain the model with the whole EmotionNet dataset and more training epochs. Kindly note that only around half of the EmotionNet dataset is used in this project due to limited time. Hope it helps.
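For example, something roughly like the command below, reusing the flags from your test command (the exact training flags may differ, so please check the option list in this repo):

    python main.py --mode train --data_root /path/to/full_emotionnet --batch_size 25 --ckpt_dir ckpts/emotionNet/ganimation/retrain/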

@PapaMadeleine2022
Author

@donydchen Thanks for your advice.
Unfortunately, I cannot get the EmotionNet dataset right now, so I cannot train a model on it with more training epochs. That is a pity, but I still appreciate your help.
