samsja / muon_fsdp_2
Muon FSDP 2
☆44 · Updated 3 months ago
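For orientation, here is a minimal single-GPU sketch of the Muon-style update this repo shards under FSDP: a momentum step followed by approximate orthogonalization via a quintic Newton-Schulz iteration. The coefficients and function names below follow the commonly circulated Muon reference formulation and are illustrative assumptions, not this repository's API.

```python
import torch

def newton_schulz5(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Quintic Newton-Schulz iteration that approximately orthogonalizes G,
    # pushing its singular values toward 1. Coefficients follow the widely
    # circulated Muon reference implementation (assumption).
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.float()
    X = X / (X.norm() + 1e-7)      # normalize so the iteration converges
    transposed = G.size(0) > G.size(1)
    if transposed:                  # iterate on the smaller Gram matrix
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=0.02, beta=0.95):
    # One Muon-style step for a 2D weight: momentum, then an
    # orthogonalized update direction instead of the raw gradient.
    momentum_buf.mul_(beta).add_(grad)
    param.add_(newton_schulz5(momentum_buf), alpha=-lr)
```

Under FSDP the added complication is that each 2D parameter and its momentum are sharded, so the full matrix must be gathered (or the Newton-Schulz matmuls distributed) before the orthogonalization step; that coordination is the part implementations like this one have to handle.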
Alternatives and similar repositories for muon_fsdp_2
Users interested in muon_fsdp_2 are comparing it to the libraries listed below.
- The evaluation framework for training-free sparse attention in LLMs ☆102 · Updated last month
- A fusion of a linear layer and a cross entropy loss, written for PyTorch in Triton (see the chunked sketch after this list) ☆70 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆206 · Updated 4 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 3 months ago
- ☆121 · Updated last year
- ☆53 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 11 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ☆83 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated last year
- A toolkit for scaling law research ⚖ ☆53 · Updated 9 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆40 · Updated 3 weeks ago
- ☆87 · Updated last year
- ☆130 · Updated 5 months ago
- Here we will test various linear attention designs. ☆61 · Updated last year
- Using FlexAttention to compute attention with different masking patterns (see the FlexAttention sketch after this list) ☆47 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆124 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Fast and memory-efficient exact attention ☆74 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆171 · Updated 4 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Code for studying the super weight in LLMs ☆119 · Updated 11 months ago
- ☆41 · Updated last week
- Stick-breaking attention ☆61 · Updated 4 months ago
- ☆91 · Updated last year
- ☆14 · Updated 4 months ago
- ☆56 · Updated last year
- Transformers components but in Triton ☆34 · Updated 6 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated 4 months ago
- ☆147 · Updated 8 months ago
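The fused linear + cross-entropy entry above works by never materializing the full [tokens, vocab] logits tensor at once. A plain-PyTorch sketch of that chunking idea follows; the listed repo does this inside a single Triton kernel, and all names and argument choices here are illustrative, not the repo's API.

```python
import torch
import torch.nn.functional as F

def chunked_linear_cross_entropy(hidden, weight, targets, chunk_size=4096):
    # hidden: [N, d], weight: [vocab, d], targets: [N] int64 class indices.
    # Mean cross-entropy over all N tokens while only a [chunk_size, vocab]
    # slice of logits exists at any one time.
    n = hidden.size(0)
    total = hidden.new_zeros(())
    for start in range(0, n, chunk_size):
        h = hidden[start:start + chunk_size]
        t = targets[start:start + chunk_size]
        logits = h @ weight.T            # [chunk, vocab], freed each iteration
        total = total + F.cross_entropy(logits, t, reduction="sum")
    return total / n
```

Under autograd each chunk's logits are still saved for backward; fused Triton versions typically go further and compute the gradient in the same pass, which is where most of the memory savings come from.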
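For the FlexAttention entry above, the pattern is to express the mask as a small predicate and let PyTorch compile it into the attention kernel. A minimal causal example, assuming PyTorch ≥ 2.5 and a CUDA device (shapes are illustrative):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

# Any masking pattern is just a predicate over (batch, head, q_idx, kv_idx);
# causal, sliding-window, prefix-LM, and document masks only change this fn.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

B, H, S, D = 2, 8, 1024, 64
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# The block mask lets the kernel skip fully-masked tiles entirely.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S)
out = flex_attention(q, k, v, block_mask=block_mask)  # [B, H, S, D]
```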