jysohn1108 / Looped-Transformer
Official implementation of the transformer (TF) architecture proposed in the paper "Looped Transformers as Programmable Computers".
☆30 · Updated 2 years ago
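For context, the core idea in "Looped Transformers as Programmable Computers" is to iterate a single weight-tied transformer block over its own output, so that running the loop longer plays the role of stacking more layers. Below is a minimal PyTorch sketch of that looping pattern; it is an illustrative assumption, not this repo's actual API, and the class name and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    """Hypothetical sketch: one weight-tied transformer block applied in a loop."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_loops: int = 8):
        super().__init__()
        # A single block whose weights are reused on every iteration.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective depth comes from looping, not from stacking layers:
        # each pass feeds the block's output back in as its next input.
        for _ in range(self.n_loops):
            x = self.block(x)
        return x

# Usage: a (batch, seq_len, d_model) input run through 8 loop iterations.
out = LoopedTransformer()(torch.randn(2, 16, 64))
```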
Alternatives and similar repositories for Looped-Transformer
Users interested in Looped-Transformer are comparing it to the libraries listed below.
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆89 · Updated last year
- Parallelizing non-linear sequential models over the sequence length ☆56 · Updated 7 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆64 · Updated 5 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆47 · Updated last year
- Efficient PScan implementation in PyTorch ☆17 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆32 · Updated last year
- Repository for Sparse Universal Transformers ☆20 · Updated 2 years ago
- Stick-breaking attention ☆62 · Updated 7 months ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆79 · Updated last year