(WIP) Parallel inference for black-forest-labs' FLUX model.
☆19 · Updated Nov 18, 2024
Alternatives and similar repositories for piflux
Users interested in piflux are comparing it to the libraries listed below.
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆23 · Updated Nov 15, 2024
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated Feb 24, 2025
- Kernel Library Wheel for SGLang ☆16 · Updated this week
- A parallel VAE that avoids OOM during high-resolution image generation ☆85 · Updated Aug 4, 2025
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated Feb 9, 2026
- ☆32 · Updated Jul 2, 2025
- An auxiliary project analyzing the characteristics of KV in DiT attention ☆33 · Updated Nov 29, 2024
- ☆52 · Updated May 19, 2025
- https://wavespeed.ai/ Context-parallel attention that accelerates DiT model inference with dynamic caching ☆424 · Updated Jul 5, 2025
- Estimate MFU for DeepSeekV3 ☆26 · Updated Jan 5, 2025
- ☆16 · Updated Feb 24, 2026
- FlexAttention with FlashAttention3 support ☆27 · Updated Oct 5, 2024
- Faster parallel inference for the mochi-1 video generation model ☆125 · Updated Feb 25, 2025
- ☆34 · Updated Feb 3, 2025
- A from-scratch C implementation of the multi-head latent attention used in the DeepSeek-V3 technical report ☆18 · Updated Jan 15, 2025
- Image Artisan XL is the ultimate desktop application for creating amazing images with the power of artificial intelligence ☆18 · Updated Apr 25, 2024
- Benchmark tests supporting the TiledCUDA library ☆18 · Updated Nov 19, 2024
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆261 · Updated Dec 27, 2024
- Transformers components, but in Triton ☆34 · Updated May 9, 2025
- ☆104 · Updated Sep 9, 2024
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆96 · Updated Feb 20, 2026
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆525 · Updated Feb 25, 2026
- vLLM performance dashboard ☆42 · Updated Apr 26, 2024
- Tile-based language built for AI computation across all scales ☆138 · Updated Feb 27, 2026
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Updated Jan 11, 2025
- PyCes (Python Code Scanner), an enhanced security static analysis tool for Python ☆11 · Updated Apr 18, 2019
- 🚀 LLM-I: Transform LLMs into natural interleaved multimodal creators! ✨ Tool-use framework supporting image search, generation, code ex… ☆41 · Updated Oct 20, 2025
- Multiple GEMM operators built with CUTLASS to support LLM inference ☆21 · Updated Aug 3, 2025
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆23 · Updated Sep 1, 2025
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated Jun 17, 2024
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Updated Jun 11, 2025
- End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training) ☆396 · Updated Jan 8, 2026
- ☆79 · Updated Dec 27, 2024
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs ☆27 · Updated Dec 17, 2024
- A lightweight design for computation-communication overlap ☆223 · Updated Jan 20, 2026
- ☆87 · Updated Jan 23, 2025
- ☆191 · Updated Jan 14, 2025
- https://wavespeed.ai/ Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs ☆1,303 · Updated Mar 27, 2025
- APEX+ is an LLM Serving Simulator ☆42 · Updated Jun 16, 2025