Leiay / looped_transformer
☆25 · Updated last year
Alternatives and similar repositories for looped_transformer:
Users interested in looped_transformer are comparing it to the repositories listed below.
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆72 · Updated 6 months ago
- ☆52 · Updated 11 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆66 · Updated 3 months ago
- ☆78 · Updated 8 months ago
- ☆31 · Updated last year
- ☆31 · Updated 6 months ago
- ☆54 · Updated last month
- Stick-breaking attention ☆52 · Updated last month
- ☆19 · Updated 9 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆47 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- Universal Neurons in GPT2 Language Models ☆28 · Updated 11 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆26 · Updated 6 months ago
- ☆37 · Updated last year
- Codebase for "Instruction Following without Instruction Tuning" ☆34 · Updated 7 months ago
- ☆47 · Updated last year
- Sparse Autoencoder Training Library ☆49 · Updated this week
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆58 · Updated last month
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆51 · Updated 3 months ago
- Official implementation of the transformer (TF) architecture proposed in the paper "Looped Transformers as Programmable Computers… ☆25 · Updated 2 years ago
- ☆93 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆54 · Updated 7 months ago
- ☆31 · Updated 4 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- ☆84 · Updated last year