Triton-based implementation of Sparse Mixture of Experts (☆268, updated Oct 3, 2025).
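For context: a sparse Mixture-of-Experts layer routes each token to its top-k experts and combines the experts' outputs weighted by the router scores; scattermoe's contribution is doing the scatter/gather and expert matmuls in fused Triton kernels. Below is a minimal, illustrative PyTorch sketch of that top-k routing only (a naive loop, not scattermoe's API or kernels; all class and parameter names here are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveSparseMoE(nn.Module):
    """Top-k routed MoE MLP, written as a plain loop over experts.
    Illustrative only: not the fused Triton implementation."""
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            tok, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if tok.numel() > 0:
                out[tok] += weights[tok, slot].unsqueeze(-1) * expert(x[tok])
        return out

y = NaiveSparseMoE(d_model=64, d_hidden=256)(torch.randn(10, 64))  # -> (10, 64)
```

The per-expert gathers and copies in this loop are the overhead that fused scatter/gather kernels aim to eliminate.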
Alternatives and similar repositories for scattermoe
Users interested in scattermoe are comparing it to the libraries listed below.
- Applied AI experiments and examples for PyTorch (☆319, updated Aug 22, 2025)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton (☆595, updated Aug 12, 2025)
- Extensible collectives library in Triton (☆96, updated Mar 31, 2025)
- PyTorch bindings for CUTLASS grouped GEMM (☆144, updated May 29, 2025)
- Ring attention implementation with flash attention (☆987, updated Sep 10, 2025)
- Fast low-bit matmul kernels in Triton (☆436, updated Feb 1, 2026)
- Tile primitives for speedy kernels (☆3,202, updated Feb 24, 2026)
- Transformers components but in Triton (☆34, updated May 9, 2025)
- sigma-MoE layer (☆21, updated Jan 5, 2024)
- Odysseus: Playground of LLM Sequence Parallelism (☆79, updated Jun 17, 2024)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (☆222, updated Aug 19, 2024)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving (☆817, updated Mar 6, 2025)
- An efficient implementation of the NSA (Native Sparse Attention) kernel (☆129, updated Jun 24, 2025)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (☆327, updated this week)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- LM engine is a library for pretraining/finetuning LLMs (☆118, updated Feb 23, 2026)
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts (☆226, updated Sep 18, 2025)
- A collection of memory-efficient attention operators implemented in the Triton language (☆288, updated Jun 5, 2024)
- Framework to reduce autotune overhead to zero for well-known deployments (☆97, updated Sep 19, 2025)
- A throughput-oriented high-performance serving framework for LLMs (☆947, updated Oct 29, 2025)
- Ring-attention experiments (☆165, updated Oct 17, 2024)
- Experiment of using Tangent to autodiff Triton (☆82, updated Jan 22, 2024)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,474, updated this week)
- Stick-breaking attention (☆62, updated Jul 1, 2025)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆39, updated Jun 11, 2025)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (☆2,145, updated Feb 23, 2026)
- FP16×INT4 LLM inference kernel that achieves near-ideal ~4× speedups up to medium batch sizes of 16-32 tokens (☆1,025, updated Sep 4, 2024)
- An easy-to-understand TensorOp Matmul tutorial (☆409, updated this week)
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆143, updated Dec 4, 2024)
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (☆75, updated Aug 2, 2024)
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆101, updated Sep 30, 2024)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (☆1,264, updated Aug 28, 2025)