jysohn1108 / Looped-Transformer
Official implementation of the transformer architecture proposed in the paper "Looped Transformers as Programmable Computers"
☆24 · Updated 2 years ago
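For context, the paper's core idea is weight tying across depth: a single transformer block is applied repeatedly to its own output, so that iteration, rather than a stack of distinct layers, provides computational depth (the paper shows such loops can emulate the steps of a program). Below is a minimal PyTorch sketch of that looping pattern; it is not the repo's actual API, and the class and parameter names are illustrative only.

```python
# Minimal sketch of a weight-tied "looped" transformer: one shared block is
# applied for a fixed number of iterations, so effective depth comes from
# looping rather than from stacking distinct layers. Names here are
# illustrative assumptions, not taken from the Looped-Transformer repo.
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_loops: int = 8):
        super().__init__()
        # A single shared block; n_loops controls the effective depth.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.n_loops):
            x = self.block(x)  # feed the block's output back as its input
        return x

x = torch.randn(2, 16, 64)   # (batch, seq_len, d_model)
y = LoopedTransformer()(x)   # output keeps the input's shape
```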
Alternatives and similar repositories for Looped-Transformer:
Users interested in Looped-Transformer are comparing it to the repositories listed below.
- ☆25 · Updated last year
- ☆45 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated last year
- ☆51 · Updated 10 months ago
- Mamba support for TransformerLens ☆16 · Updated 7 months ago
- ☆77 · Updated 8 months ago
- ☆23 · Updated 7 months ago
- ☆60 · Updated 11 months ago
- Stick-breaking attention ☆50 · Updated last month
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 10 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- A framework for few-shot evaluation of autoregressive language models ☆24 · Updated last year
- ☆31 · Updated 5 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- ☆53 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- ☆37 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆41 · Updated 4 months ago
- Repo for the paper "When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment" ☆38 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆29 · Updated 3 weeks ago
- ☆19 · Updated 10 months ago
- ☆46 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆46 · Updated last month
- ☆82 · Updated last year
- The official project for our paper "Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers" ☆30 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆50 · Updated last month
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated 7 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆22 · Updated 7 months ago
- Efficient scaling laws and collaborative pretraining ☆16 · Updated 2 months ago
- A testbed for agents and environments that can automatically improve models through data generation ☆23 · Updated last month