This is an automated archive made by the Lemmit Bot.
The original was posted on /r/stablediffusion by /u/HadesThrowaway on 2024-11-16 03:03:10+00:00.
For those that have not heard of KoboldCpp, it's a lightweight, single-executable standalone tool with no installation required and no dependencies, for running text-generation and image-generation models locally with low-end hardware (based on llama.cpp and stable-diffusion.cpp).
About six months ago, KoboldCpp added support for SD1.5 and SDXL local image generation.
With the latest release, Flux and SD3.5 Large/Medium models are now supported as well! Sure, ComfyUI may be more powerful and versatile, but KoboldCpp lets you run image generation from a single .exe file with no installation needed. Considering that A1111 is basically dead and Forge still hasn't added SD3.5 support to its main branch, I thought people might be interested in giving this a try.
Note that loading full fp16 Flux takes over 20 GB of VRAM, so if you have less GPU memory than that and are loading safetensors, select "Compress Weights" (at the expense of longer load time). It's compatible with most Flux/SD3.5 models out there, though pre-quantized GGUFs will load faster since runtime compression is avoided.
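Once the model is loaded, you don't have to use the bundled UI: KoboldCpp also exposes an HTTP API. Below is a minimal sketch of requesting an image from a locally running instance, assuming an Automatic1111-style /sdapi/v1/txt2img endpoint on the default port 5001 — the exact endpoint path, port, and accepted parameters are assumptions here, so check the release notes or API docs for your version.

```python
# Hypothetical sketch: request an image from a locally running KoboldCpp
# instance via an A1111-style txt2img endpoint. Endpoint path, port, and
# parameter names are assumptions; adjust to match your local setup.
import json
import urllib.request

def build_txt2img_payload(prompt, width=1024, height=1024, steps=20, cfg_scale=5.0):
    """Build the JSON body for a txt2img request (A1111-style schema)."""
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "steps": steps,
        "cfg_scale": cfg_scale,
    }

def txt2img(prompt, base_url="http://localhost:5001"):
    """POST the payload to the local server; returns a list of base64 images."""
    payload = build_txt2img_payload(prompt)
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"]
```

The returned images (if the schema matches) are base64 strings you can decode and write to disk with `base64.b64decode`.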
Details and instructions are in the release notes. Check it out here: https://github.com/LostRuins/koboldcpp/releases/latest