
One theoretical question. #7

Closed
Mr-Nobody-dey opened this issue Dec 10, 2024 · 2 comments

Comments

@Mr-Nobody-dey

In standard GAN training, there is one generator and one discriminator.

What would happen if I trained such a model in two stages?

Note: The generator has two parts: 1. an encoder-decoder part, 2. an ID-injection part.

Stage 1: In the first stage, I train the generator to recreate the input with a simple loss, say MAE, injecting the ID of the same person.
This is possible, since it is a supervised problem.

Stage 2: Once the first stage is complete, we freeze the encoder-decoder part and train the model as it is generally trained (such as here).

Is there anything fundamentally wrong that I am missing?
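For concreteness, the staged scheme I mean can be sketched in PyTorch as below. The `Generator` here is a toy stand-in (the module names `enc_dec` and `id_inject` are illustrative, not the repo's actual classes); the point is only how stage 1 uses a supervised L1 loss and stage 2 freezes the encoder-decoder via `requires_grad_(False)`:

```python
import torch
import torch.nn as nn

# Hypothetical minimal generator with the two parts described above:
# an encoder-decoder and an ID-injection module (names are illustrative).
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
        self.id_inject = nn.Linear(8, 8)

    def forward(self, x, id_emb):
        return self.enc_dec(x) + self.id_inject(id_emb)

gen = Generator()

# Stage 1: supervised reconstruction with MAE (L1) loss,
# using the same person's embedding as the ID input.
opt1 = torch.optim.Adam(gen.parameters(), lr=1e-3)
x = torch.randn(4, 8)
loss = nn.functional.l1_loss(gen(x, x), x)
opt1.zero_grad()
loss.backward()
opt1.step()

# Stage 2: freeze the encoder-decoder; only the remaining
# (trainable) parameters are handed to the stage-2 optimizer.
for p in gen.enc_dec.parameters():
    p.requires_grad_(False)
stage2_params = [p for p in gen.parameters() if p.requires_grad]
opt2 = torch.optim.Adam(stage2_params, lr=1e-4)
```

With this setup, stage-2 adversarial updates can only move the ID-injection weights, which is the crux of the question.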

By the way, on line 395 of train_adapter.py, I think id_emb should be the other one: "nn.utils.clip_grad_norm_(ID_emb.parameters(), 2.0)".

@bone-11
Collaborator

bone-11 commented Dec 17, 2024

I don't quite understand your point. Generally, GAN training alternates between two stages: in the first, the generator is fixed and only the discriminator is trained; in the second, the discriminator is fixed and the generator is trained. The reason for dividing it into two stages is that the two objective losses are different.
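The alternating scheme described above can be sketched as follows (toy modules and shapes, not the repo's actual code; detaching the generator's output in the discriminator step, and stepping only one optimizer at a time, is what keeps the other network fixed):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator.
G = nn.Linear(4, 4)
D = nn.Linear(4, 1)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 4)
z = torch.randn(8, 4)

# Stage A: update D only. G's output is detached, so no
# gradient flows back into the generator.
fake = G(z).detach()
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_D.zero_grad()
d_loss.backward()
opt_D.step()

# Stage B: update G only. Gradients flow through D, but only
# opt_G steps, so D's weights stay fixed during this stage.
g_loss = bce(D(G(z)), torch.ones(8, 1))
opt_G.zero_grad()
g_loss.backward()
opt_G.step()
```

Note that the two stages optimize different objectives: D is pushed to separate real from fake, while G is pushed to make D label its output as real.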

@Mr-Nobody-dey
Author

Never mind, I was just wondering.
