jysohn1108 / Looped-Transformer
Official implementation of the looped-transformer architecture proposed in the paper "Looped Transformers as Programmable Computers".
☆24 · Updated last year
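The paper's core idea is that a single transformer block with fixed weights, applied repeatedly in a loop, can emulate much deeper computation (and, with suitable weights, act as a programmable computer). Below is a minimal sketch of that looping pattern; the class name, dimensions, and loop count are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of the looped-transformer idea: one shared transformer
# block reused across n_loops iterations, trading depth for iteration
# count. Illustrative only; not the repository's implementation.
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_loops=8):
        super().__init__()
        # A single block whose weights are shared across all loop steps.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops

    def forward(self, x):
        # x: (batch, seq_len, d_model). The same weights process the
        # sequence n_loops times, emulating a deeper network.
        for _ in range(self.n_loops):
            x = self.block(x)
        return x

model = LoopedTransformer()
out = model(torch.randn(2, 16, 64))  # -> shape (2, 16, 64)
```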
Alternatives and similar repositories for Looped-Transformer:
Users interested in Looped-Transformer are comparing it to the repositories listed below.
- Stick-breaking attention ☆43 · Updated last month
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 10 months ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 8 months ago
- Code for Adaptive Data Optimization ☆20 · Updated 2 months ago
- Tests of various linear attention designs ☆58 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 10 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆35 · Updated last year
- Code for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity…" ☆23 · Updated 11 months ago
- Source code for "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- Reference implementation of "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆42 · Updated last year