jysohn1108 / Looped-Transformer
Official implementation of the transformer architecture proposed in the paper "Looped Transformers as Programmable Computers".
☆29 · Updated 2 years ago
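For readers skimming the list, the core idea of the paper is that a single weight-tied transformer block, applied repeatedly to its own output, can act as a programmable computer, with the loop count playing the role of depth. The sketch below is illustrative PyTorch only, not the repository's actual code; all module names and hyperparameters are assumptions.

```python
# Minimal sketch of the looped-transformer idea: one weight-tied
# transformer block applied repeatedly to its own output, so depth
# comes from iteration rather than from stacked distinct layers.
# Names and hyperparameters are illustrative, not this repo's API.
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_loops=10):
        super().__init__()
        # A single shared block; looping reuses the same weights each step.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.num_loops = num_loops

    def forward(self, x):
        # Feed the block's output back in as its next input.
        for _ in range(self.num_loops):
            x = self.block(x)
        return x

x = torch.randn(2, 16, 64)   # (batch, sequence, d_model)
y = LoopedTransformer()(x)
print(y.shape)               # torch.Size([2, 16, 64])
```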
Alternatives and similar repositories for Looped-Transformer
Users interested in Looped-Transformer are comparing it to the repositories listed below.
- ☆33 · Updated 2 years ago
- ☆91 · Updated last year
- ☆53 · Updated last year
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆23 · Updated 11 months ago
- Parallelizing non-linear sequential models over the sequence length ☆56 · Updated 6 months ago
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- ☆45 · Updated 2 years ago
- ☆51 · Updated last year
- ☆107 · Updated last year
- ☆33 · Updated last year
- Code for Adaptive Data Optimization ☆30 · Updated last year
- ☆20 · Updated last month
- Stick-breaking attention ☆62 · Updated 5 months ago
- Hrrformer: A Neuro-symbolic Self-attention Model (ICML 2023) ☆61 · Updated 2 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 8 months ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated last year
- Efficient PScan implementation in PyTorch ☆17 · Updated last year
- ☆33 · Updated 11 months ago
- Test-time training on nearest neighbors for large language models ☆49 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆63 · Updated 4 months ago
- ☆13 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆46 · Updated 8 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- ☆75 · Updated last year
- ☆57 · Updated last year