chengzeyi / piflux
(WIP) Parallel inference for black-forest-labs' FLUX model.
☆18 · Nov 18, 2024 · Updated last year
Alternatives and similar repositories for piflux
Users interested in piflux are comparing it to the libraries listed below.
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆22 · Nov 15, 2024 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Feb 24, 2025 · Updated 11 months ago
- Kernel Library Wheel for SGLang ☆17 · Updated this week
- A parallel VAE that avoids OOM in high-resolution image generation ☆85 · Aug 4, 2025 · Updated 6 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆23 · Sep 23, 2025 · Updated 4 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- ☆32 · Jul 2, 2025 · Updated 7 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention ☆32 · Nov 29, 2024 · Updated last year
- ☆52 · May 19, 2025 · Updated 8 months ago
- https://wavespeed.ai/ Context-parallel attention that accelerates DiT model inference with dynamic caching ☆422 · Jul 5, 2025 · Updated 7 months ago
- A simple API for using CUPTI ☆11 · Aug 19, 2025 · Updated 5 months ago
- ☆15 · Oct 30, 2025 · Updated 3 months ago
- FlexAttention with FlashAttention3 support ☆27 · Oct 5, 2024 · Updated last year
- Faster parallel inference of the mochi-1 video generation model ☆125 · Feb 25, 2025 · Updated 11 months ago
- Implementation from scratch in C of the multi-head latent attention used in the DeepSeek-V3 technical paper ☆19 · Jan 15, 2025 · Updated last year
- ☆34 · Feb 3, 2025 · Updated last year
- Benchmark tests supporting the TiledCUDA library ☆18 · Nov 19, 2024 · Updated last year
- Image Artisan XL is the ultimate desktop application for creating amazing images with the power of artificial intelligence ☆18 · Apr 25, 2024 · Updated last year
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality ☆259 · Dec 27, 2024 · Updated last year
- Transformers components, but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- A standalone GEMM kernel for FP16 activation and quantized weight, extracted from FasterTransformer ☆96 · Sep 13, 2025 · Updated 5 months ago
- ☆105 · Sep 9, 2024 · Updated last year
- vLLM performance dashboard ☆41 · Apr 26, 2024 · Updated last year
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆518 · Jan 18, 2026 · Updated 3 weeks ago
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Jan 11, 2025 · Updated last year
- Multiple GEMM operators constructed with CUTLASS to support LLM inference ☆20 · Aug 3, 2025 · Updated 6 months ago
- 🚀 LLM-I: Transform LLMs into natural interleaved multimodal creators! ✨ Tool-use framework supporting image search, generation, code ex… ☆41 · Oct 20, 2025 · Updated 3 months ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆21 · Sep 1, 2025 · Updated 5 months ago
- PyCes (Python Code Scanner), an enhanced security static-analysis tool for Python ☆11 · Apr 18, 2019 · Updated 6 years ago
- End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training) ☆393 · Jan 8, 2026 · Updated last month
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Jun 11, 2025 · Updated 8 months ago
- A lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs ☆27 · Dec 17, 2024 · Updated last year
- ☆79 · Dec 27, 2024 · Updated last year
- A lightweight design for computation-communication overlap ☆219 · Jan 20, 2026 · Updated 3 weeks ago
- ☆190 · Jan 14, 2025 · Updated last year
- ☆85 · Jan 23, 2025 · Updated last year
- https://wavespeed.ai/ Best inference-performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs ☆1,299 · Mar 27, 2025 · Updated 10 months ago
- Deploys CoupledTPS with OpenCV, including three models: portrait correction, rectangling of images with irregular boundaries, and rotated-image correction. Includes both C++ and Python versions of the program ☆20 · Jul 4, 2024 · Updated last year