abhishekpanigrahi1996 / transformer_in_transformer
☆45 · Updated last year
Alternatives and similar repositories for transformer_in_transformer
Users interested in transformer_in_transformer are comparing it to the repositories listed below.
- ☆52 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- ☆46 · Updated last year
- ☆85 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆18 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- ☆54 · Updated last year
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆59 · Updated 3 months ago
- Universal Neurons in GPT2 Language Models ☆29 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- ☆20 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆134 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- ☆27 · Updated 9 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated 10 months ago
- Official implementation of the transformer (TF) architecture suggested in the paper "Looped Transformers as Programmable Computers" ☆27 · Updated 2 years ago
- ☆31 · Updated last year
- Exploration of automated dataset selection approaches at large scales ☆41 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆73 · Updated 7 months ago
- Repository for Sparse Universal Transformers ☆18 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 9 months ago