scattermoe: Triton-based implementation of Sparse Mixture of Experts.
☆274 · Updated Oct 3, 2025
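For orientation, here is a minimal plain-PyTorch sketch of the sparse MoE forward pass that scattermoe accelerates. This is an illustration of the general technique only, not scattermoe's API; the per-expert gather/scatter loop below is exactly the kind of work such libraries replace with fused Triton kernels.

```python
# Minimal sparse-MoE forward pass for illustration only (plain PyTorch).
# Not scattermoe's API: names and shapes here are assumptions for the sketch.
import torch
import torch.nn.functional as F


def moe_forward(x, gate_w, expert_w1, expert_w2, top_k=2):
    """x: (tokens, d_model); gate_w: (d_model, n_experts);
    expert_w1: (n_experts, d_model, d_ff); expert_w2: (n_experts, d_ff, d_model)."""
    logits = x @ gate_w                                   # (tokens, n_experts)
    weights, idx = torch.topk(logits.softmax(-1), top_k)  # route each token to its top_k experts
    out = torch.zeros_like(x)
    for e in range(gate_w.shape[1]):
        token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
        if token_ids.numel() == 0:
            continue
        h = F.gelu(x[token_ids] @ expert_w1[e]) @ expert_w2[e]
        out.index_add_(0, token_ids, h * weights[token_ids, slot].unsqueeze(1))
    return out


# toy usage: 16 tokens, 4 experts
x = torch.randn(16, 32)
y = moe_forward(x, torch.randn(32, 4), torch.randn(4, 32, 64), torch.randn(4, 64, 32))
```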
Alternatives and similar repositories for scattermoe
Users interested in scattermoe are comparing it to the libraries listed below.
- ☆114 · Updated Aug 26, 2024
- LM engine is a library for pretraining/finetuning LLMs · ☆163 · Updated Apr 8, 2026
- Applied AI experiments and examples for PyTorch · ☆320 · Updated Aug 22, 2025
- Extensible collectives library in Triton · ☆98 · Updated Mar 31, 2025
- sigma-MoE layer · ☆21 · Updated Jan 5, 2024
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆599 · Updated Aug 12, 2025
- PyTorch bindings for CUTLASS grouped GEMM (see the grouped-GEMM sketch after this list). · ☆150 · Updated May 29, 2025
- Transformer components, but in Triton · ☆34 · Updated May 9, 2025
- Ring attention implementation with flash attention · ☆1,006 · Updated Sep 10, 2025
- Fast low-bit matmul kernels in Triton · ☆443 · Updated Apr 4, 2026
- Tile primitives for speedy kernels · ☆3,312 · Updated Apr 8, 2026
- ☆91 · Updated Aug 18, 2024
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆343 · Updated this week
- ☆210 · Updated Jan 14, 2026
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving · ☆826 · Updated Mar 6, 2025
- ☆261 · Updated Jul 11, 2024
- Odysseus: Playground of LLM Sequence Parallelism · ☆78 · Updated Jun 17, 2024
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Updated Jul 2, 2024
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆133 · Updated Jun 24, 2025
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts · ☆226 · Updated Sep 18, 2025
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. · ☆75 · Updated Aug 2, 2024
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆222 · Updated Aug 19, 2024
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. · ☆1,286 · Updated Aug 28, 2025
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"☆39Jun 11, 2025Updated 10 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention"☆101Sep 30, 2024Updated last year
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding☆145Dec 4, 2024Updated last year
- ☆20May 30, 2024Updated last year
- 🚀 Efficient implementations for emerging model architectures☆4,878Updated this week
- Patch convolution to avoid large GPU memory usage of Conv2D☆96Jan 23, 2025Updated last year
- Experiment of using Tangent to autodiff triton☆82Jan 22, 2024Updated 2 years ago
- A throughput-oriented high-performance serving framework for LLMs☆952Mar 29, 2026Updated 2 weeks ago
- Tutel MoE: Optimized Mixture-of-Experts Library, Support GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4☆980Updated this week
- ☆124 · Updated May 28, 2024
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel · ☆2,184 · Updated this week
- An easy-to-understand TensorOp Matmul Tutorial · ☆423 · Updated Mar 5, 2026
- A collection of memory-efficient attention operators implemented in the Triton language. · ☆289 · Updated Jun 5, 2024
- ☆315 · Updated Mar 31, 2026
- ring-attention experiments · ☆165 · Updated Oct 17, 2024
- ☆19 · Updated Dec 4, 2025
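Several entries above (the CUTLASS grouped-GEMM bindings, Tutel) batch the per-expert matmuls rather than looping with gathers and scatters as in the first sketch. Here is a hedged sketch of that grouping idea, assuming nothing about any listed library's actual API: sort tokens by their assigned expert so each expert's inputs are contiguous, then run one dense matmul per segment.

```python
# Illustration of the token-sorting trick behind grouped-GEMM-style MoE kernels.
# Hypothetical helper for this sketch, not any listed library's API.
import torch


def grouped_expert_matmul(x, expert_ids, expert_w):
    """x: (tokens, d_in); expert_ids: (tokens,); expert_w: (n_experts, d_in, d_out)."""
    order = torch.argsort(expert_ids)        # group tokens by assigned expert
    x_sorted = x[order]
    counts = torch.bincount(expert_ids, minlength=expert_w.shape[0])
    out_sorted = torch.empty(x.shape[0], expert_w.shape[2])
    start = 0
    for e, n in enumerate(counts.tolist()):  # one dense matmul per contiguous segment
        if n:
            out_sorted[start:start + n] = x_sorted[start:start + n] @ expert_w[e]
        start += n
    out = torch.empty_like(out_sorted)
    out[order] = out_sorted                  # scatter results back to token order
    return out


# toy usage: 10 tokens, 3 experts
x = torch.randn(10, 8)
ids = torch.randint(0, 3, (10,))
y = grouped_expert_matmul(x, ids, torch.randn(3, 8, 16))
```

A grouped-GEMM kernel fuses the segment loop into a single launch; the sorting step is what makes each expert's work a plain dense matmul.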