(WIP) Parallel inference for black-forest-labs' FLUX model.
☆19, updated Nov 18, 2024
Alternatives and similar repositories for piflux
Users interested in piflux are comparing it to the repositories listed below.
- A CUDA kernel for NHWC GroupNorm for PyTorch (☆23, updated Nov 15, 2024)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (☆22, updated Apr 9, 2026)
- [WIP] Better (FP8) attention for Hopper (☆32, updated Feb 24, 2025)
- A parallelized VAE that avoids OOM during high-resolution image generation (☆90, updated Mar 12, 2026)
- ☆20 (updated Mar 25, 2025)
- ☆51 (updated May 19, 2025)
- SGLang Kernel Wheel Index (☆20, updated this week)
- An auxiliary project analyzing the characteristics of KV in DiT attention (☆34, updated Nov 29, 2024)
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs (☆26, updated Apr 8, 2026)
- https://wavespeed.ai/ Context-parallel attention that accelerates DiT model inference with dynamic caching (☆426, updated Jul 5, 2025)
- Faster parallel inference of the mochi-1 video generation model (☆125, updated Feb 25, 2025)
- FlexAttention with FlashAttention3 support (☆27, updated Oct 5, 2024)
- ☆20 (updated Sep 28, 2024)
- A simple API for using CUPTI (☆10, updated Aug 19, 2025)
- ☆33 (updated Feb 3, 2025)
- ☆105 (updated Sep 9, 2024)
- From-scratch C implementation of the multi-head latent attention described in the DeepSeek-V3 technical paper (☆18, updated Jan 15, 2025)
- Transformers components, but in Triton (☆34, updated May 9, 2025)
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: sampling, cache, quantization, parallelism, etc. (☆534, updated Mar 19, 2026)
- ☆17 (updated Apr 9, 2026)
- ZenID FaceSwap (☆220, updated Jul 3, 2025)
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer (☆97, updated Feb 20, 2026)
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- Lightweight Python wrapper for OpenVINO, enabling LLM inference on NPUs (☆27, updated Dec 17, 2024)
- [ICLR 2025] FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality (☆262, updated Dec 27, 2024)
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters (☆58, updated Jul 23, 2024)
- Bud500: A Comprehensive Vietnamese ASR Dataset (☆69, updated Oct 10, 2025)
- Tile-based language built for AI computation across all scales (☆141, updated Mar 27, 2026)
- ☆192 (updated Jan 14, 2025)
- TiledCUDA, a highly efficient kernel … The authors invite users to visit and follow the successor repository at https://github.com/microsoft/TileFusion (☆193, updated Jan 28, 2025)
- https://wavespeed.ai/ Best inference-performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs (☆1,306, updated Mar 27, 2025)
- A lightweight design for computation-communication overlap (☆226, updated Jan 20, 2026)
- Transforming spoken text to written text (☆31, updated May 14, 2024)
- Fast low-bit matmul kernels in Triton (☆443, updated Apr 4, 2026)
- Odysseus: Playground of LLM Sequence Parallelism (☆78, updated Jun 17, 2024)
- DISB: a DNN inference serving benchmark with diverse workloads and models, as well as real-world traces (☆59, updated Aug 21, 2024)
- Applied AI experiments and examples for PyTorch (☆320, updated Aug 22, 2025)
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models (☆727, updated Dec 2, 2024)
- ☆26 (updated Feb 17, 2025)