jysohn1108 / Looped-Transformer
Official implementation of the transformer architecture proposed in the paper "Looped Transformers as Programmable Computers"
☆27 · Updated 2 years ago
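For readers new to the idea: a looped transformer reuses the same weight-tied transformer block over many iterations, feeding its output back in as input so a fixed set of weights can carry out multi-step computation. The sketch below is not taken from this repository; it is a minimal, hypothetical PyTorch illustration of that looping pattern, and all names and dimensions are made up for the example.

```python
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    """Minimal sketch of the looped-transformer pattern (illustrative only,
    not the repository's actual code): a single weight-tied transformer
    block is applied `n_loops` times, with its output fed back as input."""

    def __init__(self, d_model=64, n_heads=4, n_loops=8):
        super().__init__()
        # One shared block; looping reuses these same weights each step.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops

    def forward(self, x):
        # Feed the block's output back as its input: the "loop".
        for _ in range(self.n_loops):
            x = self.block(x)
        return x

# Usage: a batch of 2 sequences, length 10, embedding size 64.
model = LoopedTransformer()
out = model(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```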
Alternatives and similar repositories for Looped-Transformer
Users interested in Looped-Transformer are comparing it to the libraries listed below:
- ☆53 · Updated last year
- ☆33 · Updated last year
- ☆88 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆104 · Updated last year
- ☆49 · Updated last year
- ☆45 · Updated 2 years ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆85 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- ☆33 · Updated last year
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- ☆23 · Updated 9 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated 2 years ago
- ☆13 · Updated last year
- Stick-breaking attention ☆61 · Updated 4 months ago
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆64 · Updated 6 months ago
- ☆75 · Updated last year
- Official code repository for the paper "Key-value memory in the brain" ☆29 · Updated 8 months ago
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- ☆84 · Updated 2 years ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Efficient PScan implementation in PyTorch ☆17 · Updated last year
- ☆120 · Updated last year
- Parallelizing non-linear sequential models over the sequence length ☆55 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆84 · Updated 11 months ago
- Mamba support for TransformerLens ☆18 · Updated last year
- ☆185 · Updated last year