Leiay / looped_transformer
☆24 · Updated last year
Alternatives and similar repositories for looped_transformer:
Users interested in looped_transformer are comparing it to the repositories listed below.
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆71 · Updated 4 months ago
- ☆73 · Updated 6 months ago
- ☆51 · Updated 9 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 10 months ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 9 months ago
- ☆29 · Updated 2 months ago
- Official implementation of the transformer (TF) architecture suggested in the paper "Looped Transformers as Programmable Computers… ☆24 · Updated last year
- Code for Adaptive Data Optimization ☆20 · Updated 3 months ago
- Stick-breaking attention ☆48 · Updated this week
- ☆28 · Updated 4 months ago
- ☆17 · Updated 8 months ago
- ☆52 · Updated 5 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆60 · Updated last month
- ☆30 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 5 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Open-source replication of Anthropic's Crosscoders for Model Diffing ☆44 · Updated 4 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆43 · Updated 3 weeks ago
- ☆21 · Updated last month
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆47 · Updated this week
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Code for reproducing the paper "Not All Language Model Features Are Linear" ☆70 · Updated 3 months ago
- ☆44 · Updated last year
- ☆95 · Updated 8 months ago
- Official repository for the paper "Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆15 · Updated 3 months ago
- ☆71 · Updated 6 months ago
- Sparse Autoencoder Training Library ☆42 · Updated 4 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆24 · Updated 4 months ago