
Train a full checkpoint - not LoRa #110

Open
Samael-1976 opened this issue Jun 12, 2024 · 2 comments

Comments

@Samael-1976

Hi, I'm really curious about PixArt.

I just have a few questions.
Premise: I will use OneTrainer.

  1. Is there a tutorial for training?
  2. I have a 2060 with 12 GB of VRAM; will I be able to train a 1024 model? (I mention this because I found a way to train SDXL with my card.)
  3. Any other suggestions for me?

Thank you for any answer!

Samuele

PS: sorry for my horrible English

@chrish-slingshot

Hey. Your best bet for training is to use OneTrainer - they just added PixArt Sigma.

https://github.com/Nerogar/OneTrainer

I don't think there's a tutorial for training, but if you hit me up on Discord I can give you some pointers - username _.crash._. There's also the PixArt Discord:

https://discord.gg/MfZFVKfCwD

12GB should be enough. You won't be able to train the text encoder, but no reasonable consumer setup can manage that at the moment anyway. I'm running a 1024 fine-tune right now with a batch size of 16 and hitting 17GB, so with a lower batch size (at the cost of a longer training time) you should be fine.
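For what it's worth, the usual way to keep the effective batch size while lowering per-step VRAM is gradient accumulation. The sketch below is just a generic PyTorch illustration with a toy model and hypothetical micro-batch numbers, not OneTrainer's actual code or config:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the real model and dataset (hypothetical; a trainer like
# OneTrainer handles this internally, this just shows the batch-size trade-off).
model = nn.Linear(64, 64)
data = TensorDataset(torch.randn(256, 64), torch.randn(256, 64))

effective_batch = 16                 # the batch size from the 17GB run above
micro_batch = 4                      # hypothetical per-step size that fits in 12GB
accum_steps = effective_batch // micro_batch

loader = DataLoader(data, batch_size=micro_batch, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.MSELoss()

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    # Scale the loss so the accumulated gradient matches a full-batch step.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()                          # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()                     # one optimizer step per effective batch
        optimizer.zero_grad()
```

You take more forward/backward passes per optimizer step, which is where the longer training time comes from.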

@Samael-1976
Author

A thousand thanks!
I just sent you a friend request on Discord (I'm Samael1976).

Any help you can give me will be much appreciated, since I know very little about PixArt. But ever since it came out I've only read great things about it, and it's always intrigued me.

I already use OneTrainer for XL training :)
