[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
☆589 · Feb 11, 2025 · Updated last year
Alternatives and similar repositories for TokenFormer
Users interested in TokenFormer are comparing it to the libraries listed below.
- Minimal implementation of TokenFormer for inference and learning ☆13 · Nov 6, 2024 · Updated last year
- [ECCV2024 Oral🔥] Official Implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface" ☆362 · Jan 14, 2025 · Updated last year
- code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" ☆1,203 · Nov 9, 2025 · Updated 5 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆375 · Dec 12, 2024 · Updated last year
- Code for BLT research paper ☆2,032 · Nov 3, 2025 · Updated 5 months ago
- Helpful tools and examples for working with flex-attention ☆1,174 · Updated this week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆293 · Jun 3, 2025 · Updated 10 months ago
- Next-Token Prediction is All You Need ☆2,393 · Jan 12, 2026 · Updated 3 months ago
- 🚀 Efficient implementations for emerging model architectures ☆4,878 · Updated this week
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆150 · Feb 25, 2026 · Updated last month
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆56 · Mar 31, 2026 · Updated 2 weeks ago
- A suite of image and video neural tokenizers ☆1,716 · Feb 11, 2025 · Updated last year
- ☆91 · Aug 18, 2024 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆455 · May 13, 2025 · Updated 11 months ago
- ☆52 · Jun 24, 2025 · Updated 9 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆343 · Feb 23, 2025 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆92 · Oct 30, 2024 · Updated last year
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,601 · Mar 16, 2025 · Updated last year
- Muon is an optimizer for hidden layers in neural networks ☆2,479 · Jan 19, 2026 · Updated 2 months ago
- Mamba SSM architecture ☆17,902 · Apr 7, 2026 · Updated last week
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ☆1,889 · Feb 20, 2026 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆164 · Apr 13, 2025 · Updated last year
- Pretraining and inference code for a large-scale depth-recurrent language model ☆870 · Dec 29, 2025 · Updated 3 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,683 · Oct 28, 2024 · Updated last year
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,479 · May 31, 2024 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆132 · Dec 3, 2024 · Updated last year
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,094 · Jul 29, 2024 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated 2 years ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆251 · Jun 6, 2025 · Updated 10 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" ☆3,713 · Nov 12, 2025 · Updated 5 months ago
- Annotated version of the Mamba paper ☆500 · Feb 27, 2024 · Updated 2 years ago
- H-Net Dynamic Hierarchical Architecture ☆81 · Sep 11, 2025 · Updated 7 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆41 · Feb 12, 2025 · Updated last year
- The official repo of continuous speculative decoding ☆32 · Mar 28, 2025 · Updated last year
- Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ☆1,149 · Dec 22, 2025 · Updated 3 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆561 · Dec 28, 2024 · Updated last year
- ☆19 · Dec 4, 2025 · Updated 4 months ago