Haiyang-W / TokenFormer
[ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
☆562 · Updated 4 months ago
Alternatives and similar repositories for TokenFormer
Users interested in TokenFormer are comparing it to the libraries listed below.
- Muon: An optimizer for hidden layers in neural networks ☆897 · Updated last week
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆411 · Updated 10 months ago
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ☆698 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆831 · Updated last week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆421 · Updated last month
- ☆286 · Updated last month
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) ☆586 · Updated last year
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ☆653 · Updated last week
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆298 · Updated 2 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆284 · Updated 2 weeks ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆519 · Updated last month
- [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model ☆427 · Updated 2 weeks ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆799 · Updated 8 months ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,212 · Updated 11 months ago
- ☆567 · Updated 2 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆241 · Updated 2 months ago
- Annotated version of the Mamba paper ☆485 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆341 · Updated last year
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆335 · Updated 6 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆221 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆881 · Updated last month
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" ☆892 · Updated 2 months ago
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ☆375 · Updated this week
- Dream 7B, a large diffusion language model ☆764 · Updated last week
- SEED-Voken: A Series of Powerful Visual Tokenizers ☆893 · Updated 4 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆694 · Updated 6 months ago
- Some preliminary explorations of Mamba's context scaling ☆214 · Updated last year
- ☆178 · Updated 6 months ago
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,126 · Updated 3 months ago
- Normalized Transformer (nGPT) ☆183 · Updated 7 months ago