Haiyang-W / TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
⭐555 · Updated 2 months ago
Alternatives and similar repositories for TokenFormer:
Users interested in TokenFormer are comparing it to the repositories listed below.
- Muon optimizer: +>30% sample efficiency with <3% wallclock overhead · ⭐597 · Updated last month
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper · ⭐607 · Updated last month
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) · ⭐409 · Updated 3 weeks ago
- Helpful tools and examples for working with flex-attention · ⭐757 · Updated this week
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ⭐409 · Updated 8 months ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models · ⭐569 · Updated 3 weeks ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation · ⭐775 · Updated 7 months ago
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) · ⭐565 · Updated last year
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) · ⭐367 · Updated 3 weeks ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch · ⭐328 · Updated 10 months ago
- Simple and Effective Masked Diffusion Language Model · ⭐376 · Updated 3 weeks ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models · ⭐215 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch · ⭐511 · Updated 6 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling · ⭐867 · Updated last week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI · ⭐281 · Updated last month
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ⭐1,182 · Updated 9 months ago
- Annotated version of the Mamba paper · ⭐483 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch · ⭐287 · Updated last month
- When it comes to optimizers, it's always better to be safe than sorry · ⭐222 · Updated last month
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" · ⭐320 · Updated 4 months ago
- When do we not need larger vision models? · ⭐391 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ⭐647 · Updated last month
- Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI · ⭐1,087 · Updated last month
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" · ⭐289 · Updated last year
- Pretraining code for a large-scale depth-recurrent language model · ⭐755 · Updated 3 weeks ago
- Some preliminary explorations of Mamba's context scaling. · ⭐213 · Updated last year
- Implementation of Autoregressive Diffusion in Pytorch · ⭐376 · Updated 6 months ago