huggingface.co/docs/diffusers/training/lora
15 Highlights
Top Highlights
cloneofsimo
consuming less memory
model weights
finetune stable-diffusion-v1-5
accelerates the training of large models
trains
newly added weights
Text-to-image
With LoRA, it is much easier and faster to finetune a diffusion model.
on the Pokémon BLIP captions
dataset you want to train on
Load the LoRA weights from your finetuned model on top of the base model weights.
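The highlights above sketch the core LoRA idea: the base model weights stay frozen, only the newly added low-rank weights are trained (consuming less memory), and the result is later loaded on top of the base weights. The snippet below is a minimal pure-Python illustration of that arithmetic, not the diffusers implementation; all names (`matmul`, `merge_lora`, the sizes `d` and `r`) are hypothetical and chosen for clarity.

```python
import random

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W, A, B, alpha, rank):
    """Return W + (alpha / rank) * (B @ A): frozen base weights plus the low-rank update."""
    scale = alpha / rank
    BA = matmul(B, A)  # d x d update built from two small matrices
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, r = 8, 2  # hypothetical layer width and LoRA rank
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]  # frozen base weight, d x d
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(r)]  # trainable, r x d
B = [[0.0 for _ in range(r)] for _ in range(d)]                 # trainable, d x r, zero-init

# Only A and B are trained: 2*d*r parameters instead of d*d.
trainable, full = d * r + r * d, d * d
print(trainable, full)  # 32 vs 64 here; the gap grows quadratically with d

# With B zero-initialized, the merged weight starts exactly equal to W,
# so finetuning begins from the unchanged base model.
W_merged = merge_lora(W, A, B, alpha=4, rank=r)
assert W_merged == W
```

Because the update is just an additive term, the trained low-rank weights can be shipped separately and applied on top of any copy of the base model, which is what loading LoRA weights onto stable-diffusion-v1-5 amounts to.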