xdit-project/xDiT
xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism
☆1,276 · Updated last week

Alternatives and similar repositories for xDiT:
Users interested in xDiT are comparing it to the libraries listed below.
- Quantized Attention that achieves speedups of 2.1-3.1x and 2.7-5.1x compared to FlashAttention2 and xformers, respectively, without lossi… ☆964 · Updated last week
- [ICLR 2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models ☆679 · Updated this week
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆654 · Updated 2 months ago
- 📖 A curated list of Awesome Diffusion Inference Papers with codes: Sampling, Caching, Multi-GPUs, etc. 🎉🎉 ☆189 · Updated last month
- VideoSys: An easy and efficient system for video generation ☆1,926 · Updated last month
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆330 · Updated this week
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆851 · Updated 7 months ago
- Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs ☆1,227 · Updated 2 months ago
- Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis ☆968 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆428 · Updated this week
- FastVideo is a lightweight framework for accelerating large video diffusion models. ☆1,095 · Updated this week
- Context-parallel attention that accelerates DiT model inference with dynamic caching ☆189 · Updated this week
- Next-Token Prediction is All You Need ☆2,004 · Updated 3 months ago
- A PyTorch Native LLM Training Framework ☆732 · Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆2,111 · Updated this week
- HART: Efficient Visual Generation with Hybrid Autoregressive Transformer ☆418 · Updated 4 months ago
- A highly optimized LLM inference acceleration engine for Llama and its variants ☆855 · Updated this week
- Memory-optimized training scripts for video models based on Diffusers ☆868 · Updated this week
- mllm-npu: training multimodal large language models on Ascend NPUs ☆90 · Updated 5 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆746 · Updated this week
- Ring attention implementation with flash attention ☆677 · Updated this week
- Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraini…" ☆543 · Updated 6 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation ☆1,220 · Updated last week
- Identity-Preserving Text-to-Video Generation by Frequency Decomposition ☆588 · Updated this week
- FlagScale is a large model toolkit based on open-sourced projects. ☆223 · Updated this week
- 📚 Collection of awesome generation acceleration resources ☆139 · Updated this week
- [CVPR 2024 Highlight] VBench - We Evaluate Video Generation ☆761 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention, which r… ☆917 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆512 · Updated this week