

TL;DR

SVDQuant: a new post-training quantization paradigm for diffusion models that quantizes both the weights and activations of FLUX.1 to 4 bits, achieving a 3.5× memory reduction and an 8.7× latency reduction on a 16GB laptop 4090 GPU.

Paper: http://arxiv.org/abs/2411.05007

Weights: https://huggingface.co/mit-han-lab/svdquant-models

Code: https://github.com/mit-han-lab/nunchaku

Blog: https://hanlab.mit.edu/blog/svdquant

Demo: https://svdquant.mit.edu/
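
For anyone wondering what "quantizing weights and activations to 4 bits" (W4A4) actually means, here's a minimal illustrative sketch of plain symmetric round-to-nearest 4-bit quantization in PyTorch. This is *not* the SVDQuant algorithm itself (which does more than naive rounding), and the function names are my own:

```python
import torch

def quantize_int4(x: torch.Tensor):
    """Symmetric round-to-nearest quantization to the int4 range [-8, 7].

    Uses a single per-tensor scale; returns (codes, scale).
    """
    scale = x.abs().max().clamp(min=1e-8) / 7.0       # largest magnitude maps to +/-7
    codes = torch.clamp(torch.round(x / scale), -8, 7).to(torch.int8)
    return codes, scale

def dequantize(codes: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from the 4-bit codes."""
    return codes.float() * scale

# Quantize a random "weight" matrix and check the reconstruction error.
w = torch.randn(64, 64)
codes, scale = quantize_int4(w)
w_hat = dequantize(codes, scale)
print(f"max abs error: {(w - w_hat).abs().max():.4f}")
```

The catch, as the blog post explains, is that naive rounding like this wrecks image quality because both weights and activations contain large outliers that blow up the scale. SVDQuant's trick is to migrate the activation outliers into the weights, then peel the now outlier-heavy weights apart with an SVD, keeping a small low-rank branch in 16 bits so the 4-bit residual is easy to quantize.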
