stable-diffusion-webui-forge/packages_3rdparty
layerdiffusion 4c9380c46a Speed up quant model loading and inference ...
... based on three pieces of evidence:
1. torch.Tensor.view on one big tensor is slightly faster than calling torch.Tensor.to on multiple small tensors.
2. However, torch.Tensor.to with a dtype change is significantly slower than torch.Tensor.view.
3. “Baking” the model on the GPU is significantly faster than computing on the CPU at model load time.

This mainly affects inference of Q8_0 and Q4_0/1/K, and the loading of all quants.
2024-08-30 00:49:05 -07:00
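The view-versus-convert distinction behind points 1 and 2 can be sketched with NumPy, used here only as a stand-in for the torch calls (the copy semantics are analogous: `view` reinterprets storage in place, while a dtype conversion such as `torch.Tensor.to(dtype)` must allocate and rewrite every element). The buffer size and layout below are hypothetical.

```python
import numpy as np

# One big uint8 buffer standing in for packed quantized weights (hypothetical size).
big = np.zeros(1 << 20, dtype=np.uint8)

# (1) Reinterpreting the whole buffer with .view is zero-copy:
# the result shares the same underlying storage, so no data moves.
as_i8 = big.view(np.int8)
assert as_i8.base is big  # same memory, no copy

# (2) A dtype *conversion* (.astype here; torch.Tensor.to(dtype) in PyTorch)
# allocates a fresh buffer and rewrites every element, which is why it is
# significantly slower than a plain view.
as_f32 = big.astype(np.float32)
assert as_f32.base is None  # a new copy was made
```

This is why the commit prefers one `view` over a large tensor to many per-tensor dtype conversions on small tensors.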
comfyui_lora_collection multiple lora implementation sources 2024-08-13 07:13:32 -07:00
gguf Speed up quant model loading and inference ... 2024-08-30 00:49:05 -07:00
webui_lora_collection multiple lora implementation sources 2024-08-13 07:13:32 -07:00
README.md multiple lora implementation sources 2024-08-13 07:13:32 -07:00

Please follow the standard of 315f85d4f4/3rdparty when submitting PRs or modifying files.