
I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via InvokeAI or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, LoRA offset, and VAE. They are all named to match the expected filenames, and they are all in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then throws a very long error in the Python console and falls back to the last model I loaded. Oddly, it loads the refiner without issue.

Is this a case of my 8 GB of VRAM just not being enough? I have tried the --no-half and full-precision launch arguments.
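For reference, this is roughly how those launch arguments are set in webui-user.bat (a sketch of my setup; --medvram is an extra flag commonly suggested for 8 GB cards, not something I have confirmed helps here):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --no-half disables fp16 and --precision full forces fp32 (both raise VRAM use);
rem --medvram trades speed for lower VRAM use, often suggested for 8 GB cards.
set COMMANDLINE_ARGS=--no-half --precision full --medvram

call webui.bat
```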

[–] RotaryKeyboard@lemmy.ninja 4 points 1 year ago (1 children)

I had issues before I updated A1111. Do a git pull in the A1111 directory and try again.
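Something like this, run from the folder the web UI was cloned into (assuming a standard git install):

```
cd stable-diffusion-webui
git pull
```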

[–] Thanks4Nothing@lemm.ee 1 points 1 year ago (1 children)

I have mine set to auto git pull on each load. Can you confirm, since you have it working: what files do I actually need? Is it just the base, refiner, LoRA, and VAE?

[–] RotaryKeyboard@lemmy.ninja 2 points 1 year ago (1 children)

I’m not an expert, but what I read said that you use SDXL by first using txt2img to generate an image using the base checkpoint, and then you send that image to img2img and use exactly the same prompt there with the refiner checkpoint.

That makes for a longer workflow than I’m used to, so sometimes I just use one or the other in txt2img and see what I get. Sometimes I forget to change the model when I switch between img2img and txt2img, too. I always seem to get results of similar quality when I use just one of the checkpoints.
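Outside the web UI, that two-stage handoff can be sketched with the diffusers library. This is purely an illustration of the base-then-refiner idea (the model IDs are the official Stability AI repos), not what A1111 does internally:

```python
# Sketch of the SDXL base -> refiner handoff using diffusers.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "a lighthouse on a rocky coast at sunset"  # same prompt for both stages

# Stage 1: the base checkpoint generates the initial image (txt2img).
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
image = base(prompt=prompt).images[0]

# Stage 2: the refiner takes that image plus the same prompt (img2img).
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
refined = refiner(prompt=prompt, image=image).images[0]
refined.save("refined.png")
```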

It should be interesting to see what people come up with training their own checkpoints off of SDXL, though.

[–] Thanks4Nothing@lemm.ee 1 points 1 year ago (2 children)

Good point. I watched a Nerdy Rodent video about installing it, and he showed that he used the sdxl_base_vae and sdxl_refiner_vae safetensors, and that is all he copied over. No other files. I went back to the repository, pulled those two files, and put them in my checkpoint folder. I reloaded my webui-user.bat and got the new checkpoint to load. It took about a minute.

I got one image to generate at 1024x1024, but it took about 3 minutes. It looked normal, but I can't help thinking it should be a bit faster than that. Then I noticed my whole machine tanked while running it: it bogged down all 32 GB of my RAM, while the GPU was barely doing anything. Maybe there is some kind of memory leak. I may have to check my GPU drivers to see if something is going on.

Are those VAE safetensors the only files I need? The tutorial didn't talk about the LoRA offset or the VAE files, so I didn't add them this last time.

[–] RotaryKeyboard@lemmy.ninja 1 points 1 year ago

Those safetensors files are all that I have ever used.
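If it helps, the usual A1111 placement looks something like this (filenames as they appear in the official Hugging Face repos; the offset LoRA and standalone VAE live in their own folders and are optional extras):

```
stable-diffusion-webui/
└── models/
    ├── Stable-diffusion/
    │   ├── sd_xl_base_1.0.safetensors        <- base checkpoint
    │   └── sd_xl_refiner_1.0.safetensors     <- refiner checkpoint
    ├── Lora/
    │   └── sd_xl_offset_example-lora_1.0.safetensors   <- optional
    └── VAE/
        └── sdxl_vae.safetensors              <- optional standalone VAE
```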

For reference, I'm using a 2080 Ti. That's got about 11 GB of VRAM, I think. I'm not having any freezes whatsoever. I've also tried it on my wife's shiny new 4080. Definitely a speed difference, but again, no freezes or instability. Generating the 1024x1024 images does take forever, so I actually went back to 512x512 and stayed there. I can always upscale something that I like.

[–] jollyroberts@fosstodon.org 0 points 1 year ago (1 children)

@Thanks4Nothing @RotaryKeyboard
Can you link that video?

I've not managed to get SDXL to work yet and figured I just didn't know the right steps.

[–] Thanks4Nothing@lemm.ee 1 points 1 year ago

I was having a hard time finding it again... it turns out it was the Aitrepreneur channel. At first it seems like he's just going to point everyone towards his Patreon, but he does go into the manual process later in the video.

https://youtu.be/rtUpIY9Opjs

[–] Stampela@startrek.website 3 points 1 year ago

3060 here; it might be the VRAM. SDXL eats a lot of it (and if you had, say, the VAE in the wrong spot, it would output very wrong images), so it might be that 8 GB simply isn't enough, or that it isn't enough once your screen resolution and whatever else you're running, like the browser, take their share.

Or, OR: the checkpoint is corrupted. That has happened to me a couple of times in the past, and the result was exactly this: a huge error followed by it loading another model.

[–] chicken@lemmy.dbzer0.com 2 points 1 year ago (2 children)

I'm not sure why, but I have 8 GB of VRAM and my experience has been the same as others describe: SDXL will not run with Auto1111, but it will work with ComfyUI. So I think this is not purely a VRAM issue.

[–] Thanks4Nothing@lemm.ee 1 points 1 year ago

Yeah, it's very odd. I tried ComfyUI, but the interface just doesn't click with me.

I keep waiting for InvokeAI to add an auto-installer for that model, but they are still only offering SDXL 0.9, and I don't have a token for that model.

Auto1111 might be trying to load multiple models at the same time, which it does not have room for.

[–] Novman@feddit.it 2 points 1 year ago (1 children)

Nvidia has a problem with the newest drivers: Auto1111 gives out-of-memory errors, while ComfyUI works smoothly with your card.

[–] Thanks4Nothing@lemm.ee 1 points 1 year ago (1 children)

Is there a driver version that I can downgrade to? I just cannot do Comfy. I gave it a good try; I have used GRisk, Auto, and Invoke and liked them all, but I just cannot get used to Comfy.

[–] whitecapstromgard@sh.itjust.works 1 points 1 year ago (1 children)

SDXL is very memory hungry. Most base models are around 6-7 GB, which doesn't leave much room for anything else.
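Back-of-the-envelope, assuming the roughly 3.5B parameters usually quoted for SDXL base, stored at fp16 (2 bytes each):

```python
# Rough estimate of SDXL base checkpoint size at fp16.
params = 3.5e9           # approximate parameter count (assumed figure)
bytes_per_param = 2      # fp16
print(f"{params * bytes_per_param / 1024**3:.1f} GiB")  # ~6.5 GiB of weights alone
```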

[–] Thanks4Nothing@lemm.ee 2 points 1 year ago (1 children)

Thanks. Oddly enough, the most recent release of InvokeAI fixed the problem I was having. My 8 GB 3070 can run SDXL in about 30 seconds now. It does seem to take a little while to clear everything between generations, though. I want to move up to a 12 or 24 GB GPU, but I'm waiting and hoping for a price crash.