OFA-Sys / diffusion-deploy
☆54 · Updated 2 years ago
Alternatives and similar repositories for diffusion-deploy
Users that are interested in diffusion-deploy are comparing it to the libraries listed below
- Simple large-scale training of Stable Diffusion with multi-node support. ☆133 · Updated 2 years ago
- ☆100 · Updated last year
- Faster generation with text-to-image diffusion models. ☆214 · Updated 8 months ago
- [WIP] Better (FP8) attention for Hopper ☆30 · Updated 4 months ago
- ☆118 · Updated 2 years ago
- ☆49 · Updated last year
- Tiny optimized Stable Diffusion that can run on GPUs with just 1 GB of VRAM. (Beta) ☆172 · Updated last year
- Iterable data pipelines for PyTorch training. ☆83 · Updated 9 months ago
- Minimal differentiable image reward functions ☆60 · Updated 2 months ago
- A Gradio WebUI that works with the Diffusers format of Stable Diffusion ☆81 · Updated 2 years ago
- Diffusion reinforcement learning library ☆187 · Updated last year
- The official implementation of Latte: Latent Diffusion Transformer for Video Generation. ☆33 · Updated 4 months ago
- 🤗 Diffusers: state-of-the-art diffusion models for image and audio generation in PyTorch ☆50 · Updated 2 years ago
- Deploy Stable Diffusion models with ONNX/TensorRT + Triton Inference Server ☆123 · Updated last year
- SSD-1B, an open-source text-to-image model that is 50% smaller and 60% faster than SDXL. ☆177 · Updated last year
- Implements the "caption upsampling" idea from DALL-E 3 with Zephyr-7B and gathers results with SDXL. ☆153 · Updated last year
- Patch convolution to avoid the large GPU memory usage of Conv2D ☆88 · Updated 5 months ago
- ☆18 · Updated last year
- Educational repository applying the main video data curation techniques presented in the Stable Video Diffusion paper. ☆82 · Updated last year
- ☆171 · Updated last year
- Hugging Face-compatible SDXL UNet implementation that is readily hackable ☆424 · Updated last year
- Recaption large (Web)Datasets with vLLM and save the artifacts. ☆52 · Updated 7 months ago
- Flux diffusion model implementation using quantized FP8 matmul; the remaining layers use faster half-precision accumulation, which is ~2x fast… ☆269 · Updated 8 months ago
- Writing FLUX in Triton ☆34 · Updated 9 months ago
- ☆1 · Updated 4 months ago
- Let's try to finetune the OpenAI consistency decoder to work for SDXL ☆24 · Updated last year
- [ACL 2023] The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization" ☆92 · Updated 3 months ago
- Code for instruction-tuning Stable Diffusion. ☆235 · Updated last year
- ☆73 · Updated 2 years ago
- Repository for exploring k-diffusion and diffusers, within which changes to those packages may be tested. ☆53 · Updated last year