Triton-based implementation of Sparse Mixture of Experts.
☆273 · Oct 3, 2025 · updated 7 months ago
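For orientation, the computation scattermoe accelerates looks roughly like the plain-PyTorch sketch below. This is a minimal illustration, not scattermoe's API: the function and parameter names here are made up, and scattermoe's Triton kernels fuse the token scatter/gather with the per-expert matmuls instead of looping over experts.

```python
# Minimal sparse-MoE forward pass in plain PyTorch (illustrative only;
# all names are hypothetical, not scattermoe's API).
import torch
import torch.nn.functional as F

def moe_forward(x, gate_w, expert_w1, expert_w2, top_k=2):
    """x: (tokens, d_model); gate_w: (d_model, n_experts);
    expert_w1: (n_experts, d_model, d_ff); expert_w2: (n_experts, d_ff, d_model)."""
    # Route each token to its top-k experts with softmax gate weights.
    weights, idx = torch.topk((x @ gate_w).softmax(-1), top_k)
    out = torch.zeros_like(x)
    for e in range(gate_w.shape[1]):          # the loop a fused kernel avoids
        rows, slots = (idx == e).nonzero(as_tuple=True)
        if rows.numel() == 0:
            continue
        # Run only this expert's tokens through its two-layer FFN.
        h = F.gelu(x[rows] @ expert_w1[e]) @ expert_w2[e]
        out.index_add_(0, rows, h * weights[rows, slots].unsqueeze(1))
    return out
```

A toy call such as `moe_forward(torch.randn(32, 16), torch.randn(16, 4), torch.randn(4, 16, 64), torch.randn(4, 64, 16))` returns a (32, 16) tensor.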
Alternatives and similar repositories for scattermoe
Users interested in scattermoe are comparing it to the libraries listed below.
- ☆114 · Aug 26, 2024 · updated last year
- LM engine is a library for pretraining/finetuning LLMs · ☆165 · Apr 29, 2026 · updated last week
- Applied AI experiments and examples for PyTorch · ☆320 · Aug 22, 2025 · updated 8 months ago
- Extensible collectives library in Triton · ☆98 · Mar 31, 2025 · updated last year
- sigma-MoE layer · ☆21 · Jan 5, 2024 · updated 2 years ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆600 · Aug 12, 2025 · updated 8 months ago
- PyTorch bindings for CUTLASS grouped GEMM (reference semantics sketched after this list) · ☆151 · May 29, 2025 · updated 11 months ago
- Transformers components but in Triton · ☆34 · May 9, 2025 · updated 11 months ago
- Ring attention implementation with flash attention · ☆1,015 · Sep 10, 2025 · updated 7 months ago
- Fast low-bit matmul kernels in Triton · ☆446 · Apr 27, 2026 · updated last week
- Tile primitives for speedy kernels · ☆3,336 · Apr 29, 2026 · updated last week
- ☆92 · Aug 18, 2024 · updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆351 · updated this week
- ☆210 · Jan 14, 2026 · updated 3 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆835 · Mar 6, 2025 · updated last year
- ☆265 · Jul 11, 2024 · updated last year
- Odysseus: Playground of LLM Sequence Parallelism · ☆78 · Jun 17, 2024 · updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆338 · Jul 2, 2024 · updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆133 · Jun 24, 2025 · updated 10 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… · ☆226 · Sep 18, 2025 · updated 7 months ago
- A fused linear layer and cross-entropy loss, written for PyTorch in Triton (the underlying idea is sketched after this list) · ☆75 · Aug 2, 2024 · updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆221 · Aug 19, 2024 · updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆39 · Jun 11, 2025 · updated 10 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. · ☆1,297 · Aug 28, 2025 · updated 8 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆101 · Sep 30, 2024 · updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆146 · Dec 4, 2024 · updated last year
- ☆20 · May 30, 2024 · updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D · ☆97 · Jan 23, 2025 · updated last year
- 🚀 Efficient implementations for emerging model architectures · ☆5,032 · updated this week
- Experiment of using Tangent to autodiff Triton · ☆82 · Jan 22, 2024 · updated 2 years ago
- A throughput-oriented high-performance serving framework for LLMs · ☆956 · Mar 29, 2026 · updated last month
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 · ☆988 · Apr 11, 2026 · updated 3 weeks ago
- ☆124 · May 28, 2024 · updated last year
- An easy-to-understand TensorOp Matmul Tutorial · ☆428 · Mar 5, 2026 · updated 2 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel · ☆2,234 · updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. · ☆290 · Jun 5, 2024 · updated last year
- ☆325 · updated this week
- ring-attention experiments · ☆166 · Oct 17, 2024 · updated last year
- ☆19 · Dec 4, 2025 · updated 5 months ago
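As promised next to the CUTLASS grouped-GEMM entry above, here is its reference semantics in plain PyTorch: a grouped GEMM computes many independent matmuls with differing row counts, which a fused kernel performs in a single launch. The helper name and shapes below are illustrative assumptions, not the repository's actual bindings.

```python
import torch

def grouped_gemm_reference(a_list, b_list):
    """Reference semantics only: one (M_i, K) @ (K, N) matmul per group."""
    return [a @ b for a, b in zip(a_list, b_list)]

# Four experts with uneven token counts (including an empty one), shared K and N.
K, N = 64, 128
a_list = [torch.randn(m, K) for m in (3, 17, 0, 9)]  # per-expert token batches
b_list = [torch.randn(K, N) for _ in range(4)]       # per-expert weight matrices
outs = grouped_gemm_reference(a_list, b_list)        # list of (M_i, N) tensors
```

The fused version matters for MoE workloads precisely because the per-expert M_i vary and are often small, so launching one GEMM kernel per expert leaves the GPU underutilized.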
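Likewise for the fused linear + cross-entropy entry above: the point of the fusion is to avoid materializing the full (tokens × vocab) logits tensor. Below is a hedged, chunked sketch of that idea in plain PyTorch; the names are assumptions, and the actual Triton kernel fuses the matmul and the loss on-chip rather than chunking in Python.

```python
import torch
import torch.nn.functional as F

def chunked_linear_ce(hidden, weight, targets, chunk=1024):
    """hidden: (tokens, d); weight: (vocab, d); targets: (tokens,).
    Mean cross-entropy without ever holding a full (tokens, vocab) tensor."""
    total, n = hidden.new_zeros(()), hidden.shape[0]
    for s in range(0, n, chunk):
        logits = hidden[s:s + chunk] @ weight.T  # at most (chunk, vocab) live at once
        total = total + F.cross_entropy(logits, targets[s:s + chunk], reduction="sum")
    return total / n
```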