Triton-based implementation of Sparse Mixture of Experts.
☆270, updated Oct 3, 2025
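scattermoe provides fused Triton kernels for this computation; as plain context for what a sparse MoE layer computes, here is a minimal NumPy reference of top-k gated expert routing. All names and shapes below are illustrative assumptions, not scattermoe's actual API.

```python
import numpy as np

def moe_forward(x, gate_w, w1, w2, top_k=2):
    """Reference top-k gated sparse MoE forward pass (illustrative sketch).

    x:      (tokens, d_model)           input activations
    gate_w: (d_model, n_experts)        router weights
    w1:     (n_experts, d_model, d_ff)  expert MLP up-projections
    w2:     (n_experts, d_ff, d_model)  expert MLP down-projections
    """
    logits = x @ gate_w                               # (tokens, n_experts)
    topk = np.argsort(-logits, axis=1)[:, :top_k]     # chosen experts per token
    # Softmax over only the selected experts' logits.
    sel = np.take_along_axis(logits, topk, axis=1)
    weights = np.exp(sel - sel.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)

    out = np.zeros_like(x)
    for e in range(w1.shape[0]):
        tok, slot = np.nonzero(topk == e)             # tokens routed to expert e
        if tok.size == 0:
            continue
        h = np.maximum(x[tok] @ w1[e], 0.0)           # expert MLP with ReLU
        out[tok] += weights[tok, slot, None] * (h @ w2[e])
    return out
```

The per-expert loop here gathers each expert's tokens into a dense batch and scatters the weighted outputs back; scattermoe's contribution is performing this grouping and the grouped matmuls efficiently on GPU without the materialized gather/scatter.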
Alternatives and similar repositories for scattermoe
Users interested in scattermoe are comparing it to the libraries listed below.
- ☆115, updated Aug 26, 2024
- Applied AI experiments and examples for PyTorch (☆319, updated Aug 22, 2025)
- Extensible collectives library in Triton (☆97, updated Mar 31, 2025)
- sigma-MoE layer (☆21, updated Jan 5, 2024)
- LM engine is a library for pretraining/finetuning LLMs (☆136, updated Mar 18, 2026)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton (☆598, updated Aug 12, 2025)
- PyTorch bindings for CUTLASS grouped GEMM (☆147, updated May 29, 2025)
- Transformers components, but in Triton (☆34, updated May 9, 2025)
- Ring attention implementation with flash attention (☆998, updated Sep 10, 2025)
- Fast low-bit matmul kernels in Triton (☆438, updated Feb 1, 2026)
- Tile primitives for speedy kernels (☆3,244, updated Mar 17, 2026)
- ☆91, updated Aug 18, 2024
- ☆208, updated Jan 14, 2026
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆821, updated Mar 6, 2025)
- Tritonbench: a collection of PyTorch custom operators with example inputs to measure their performance (☆332, updated this week)
- ☆261, updated Jul 11, 2024
- Odysseus: Playground of LLM Sequence Parallelism (☆79, updated Jun 17, 2024)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- An efficient implementation of the NSA (Native Sparse Attention) kernel (☆132, updated Jun 24, 2025)
- ModuleFormer: a MoE-based architecture that includes two different types of experts, stick-breaking attention heads and feedforward exp… (☆226, updated Sep 18, 2025)
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (☆75, updated Aug 2, 2024)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (☆222, updated Aug 19, 2024)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (☆1,273, updated Aug 28, 2025)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆39, updated Jun 11, 2025)
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆101, updated Sep 30, 2024)
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆145, updated Dec 4, 2024)
- ☆20, updated May 30, 2024
- Patch convolution to avoid large GPU memory usage of Conv2D (☆95, updated Jan 23, 2025)
- Experiment of using Tangent to autodiff Triton (☆82, updated Jan 22, 2024)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,692, updated this week)
- A throughput-oriented high-performance serving framework for LLMs (☆950, updated Oct 29, 2025)
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 (☆978, updated Mar 6, 2026)
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel (☆2,159, updated Mar 19, 2026)
- ☆124, updated May 28, 2024
- An easy-to-understand TensorOp matmul tutorial (☆422, updated Mar 5, 2026)
- A collection of memory-efficient attention operators implemented in the Triton language (☆288, updated Jun 5, 2024)
- ☆310, updated this week
- Ring-attention experiments (☆168, updated Oct 17, 2024)
- ☆19, updated Dec 4, 2025