MadsToftrup / Apollo-dev
☆17 Updated last year
Alternatives and similar repositories for Apollo-dev
Users interested in Apollo-dev are comparing it to the libraries listed below.
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching. ☆47 Updated 6 months ago
- Making Flux go brrr on GPUs. ☆159 Updated last month
- [ACL 2023] The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization" ☆96 Updated 10 months ago
- ☆48 Updated 11 months ago
- TerDiT: Ternary Diffusion Models with Transformers ☆74 Updated last year
- [arXiv] On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for Mobile Devices ☆131 Updated 2 months ago
- [ICLR 2025] Official PyTorch implementation of the paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stit… ☆103 Updated last year
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆101 Updated 4 months ago
- ☆30 Updated last year
- Minimal repository to demonstrate fast LoRA inference with the Flux family of models. ☆25 Updated 6 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆95 Updated last year
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 Updated last year
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization" ☆82 Updated last year
- Just another reasonably minimal repo for class-conditional training of pixel-space diffusion transformers. ☆143 Updated 8 months ago
- Official PyTorch Implementation for the paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆55 Updated last year
- Writing FLUX in Triton ☆41 Updated last year
- ☆79 Updated last year
- Distilling Diversity and Control in Diffusion Models ☆50 Updated 9 months ago
- Code for Draft Attention ☆99 Updated 8 months ago
- FORA introduces a simple yet effective caching mechanism into the Diffusion Transformer architecture for faster inference sampling. ☆52 Updated last year
- Official repository for the paper "VQDM: Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization" ☆34 Updated last year
- ☆39 Updated last year
- [ICLR 2026] SparseD: Sparse Attention for Diffusion Language Models ☆57 Updated 4 months ago
- [ICML 2025] LoRA fine-tuning directly on quantized models. ☆39 Updated last year
- Focused on fast experimentation and simplicity ☆80 Updated last year
- [NeurIPS 2024] Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps ☆101 Updated last year
- Collection of Acceleration Methods for Generative AI ☆29 Updated last month
- Official implementation of SSDD: Single-Step Diffusion Decoder for Efficient Image Tokenization. ☆53 Updated 2 months ago
- This repository shows how to use Q8 kernels with `diffusers` to optimize LTX-Video inference on Ada GPUs. ☆25 Updated last year
- [NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising ☆212 Updated 4 months ago
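Several of the entries above (SmoothCache, FORA, Learning-to-Cache) share one core idea: transformer-block activations in a diffusion model change slowly between adjacent denoising steps, so a block's output can often be reused from the previous step instead of recomputed. A minimal PyTorch sketch of that error-guided caching pattern follows; the `CachedBlock` wrapper, the threshold value, and the relative-L1 error metric are all illustrative assumptions, not code from any of the listed repositories:

```python
import torch
import torch.nn as nn


class CachedBlock(nn.Module):
    """Hypothetical wrapper that reuses a block's cached output across
    denoising steps when its input has changed little since the cache
    was filled (illustrates the caching idea, not any repo's API)."""

    def __init__(self, block: nn.Module, threshold: float = 0.05):
        super().__init__()
        self.block = block
        self.threshold = threshold  # relative-error tolerance (assumed value)
        self._last_input = None
        self._last_output = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self._last_input is not None:
            # Relative L1 change of the input vs. the step that filled the cache.
            denom = self._last_input.abs().mean() + 1e-8
            rel_err = (x - self._last_input).abs().mean() / denom
            if rel_err < self.threshold:
                return self._last_output  # skip the block entirely
        out = self.block(x)
        self._last_input, self._last_output = x.detach(), out.detach()
        return out
```

In use, each block of a DiT would be wrapped once before sampling and the sampler loop left unchanged; the threshold trades output fidelity for the fraction of block evaluations skipped. The listed projects differ mainly in how the skip decision is made (precomputed schedules, learned routers, or measured errors, as here).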