shawntan / scattermoe
Triton-based implementation of Sparse Mixture of Experts.
☆238 · Updated 2 weeks ago
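scattermoe provides Triton kernels for the sparse Mixture-of-Experts computation. As a rough orientation only, the sketch below shows what a sparse MoE layer computes, written as naive top-k routing plus per-expert MLPs in plain PyTorch; all names and shapes here are illustrative and are not scattermoe's API, which implements this computation with fused Triton kernels rather than a Python-level loop over experts.

```python
# Illustrative sketch (not scattermoe's actual API): a naive top-k sparse MoE
# forward pass in plain PyTorch, looping over experts for clarity.
import torch
import torch.nn.functional as F

def sparse_moe_forward(x, gate_w, expert_w1, expert_w2, k=2):
    # x: (tokens, d_model); gate_w: (d_model, n_experts)
    # expert_w1: (n_experts, d_model, d_ff); expert_w2: (n_experts, d_ff, d_model)
    logits = x @ gate_w                               # (tokens, n_experts)
    weights, idx = torch.topk(logits.softmax(-1), k)  # route each token to its top-k experts
    out = torch.zeros_like(x)
    for e in range(gate_w.shape[1]):
        token_ids, slot = (idx == e).nonzero(as_tuple=True)
        if token_ids.numel() == 0:
            continue
        # Per-expert two-layer MLP applied only to the tokens routed to expert e
        h = F.gelu(x[token_ids] @ expert_w1[e]) @ expert_w2[e]
        # Scatter the weighted expert outputs back to the token positions
        out.index_add_(0, token_ids, h * weights[token_ids, slot, None])
    return out
```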
Alternatives and similar repositories for scattermoe
Users interested in scattermoe are comparing it to the libraries listed below.
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆216 · Updated last year
- ☆110 · Updated last year
- ☆141 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆82 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆165 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆113 · Updated 3 months ago
- ring-attention experiments ☆150 · Updated 10 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆86 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆265 · Updated last month
- ☆149 · Updated 2 years ago
- Applied AI experiments and examples for PyTorch ☆295 · Updated 3 weeks ago
- Collection of kernels written in Triton language ☆154 · Updated 5 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆181 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆209 · Updated last week
- Explorations into some recent techniques surrounding speculative decoding ☆285 · Updated 8 months ago
- ☆124 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton ☆357 · Updated this week
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆138 · Updated 6 months ago
- extensible collectives library in triton ☆88 · Updated 5 months ago
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆134 · Updated last year
- Cataloging released Triton kernels. ☆252 · Updated this week
- ☆118 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆145 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆124 · Updated 9 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆377 · Updated last year
- A Quirky Assortment of CuTe Kernels ☆450 · Updated last week
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆207 · Updated 9 months ago
- ☆47 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆231 · Updated last week