deep-spin / infinite-former
☆67 · Updated last year
Alternatives and similar repositories for infinite-former
Users interested in infinite-former are comparing it to the repositories listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- Recurrent Memory Transformer ☆150 · Updated 2 years ago
- ☆45 · Updated last year
- ☆56 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- Efficient Transformers with Dynamic Token Pooling ☆64 · Updated 2 years ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆33 · Updated 3 months ago
- ☆11 · Updated last year
- Evaluation pipeline for the BabyLM Challenge 2023 ☆77 · Updated last year
- ☆49 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- ☆31 · Updated 2 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Sequence Modeling ☆66 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- [ICLR 2023] Official implementation of Transnormer in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆80 · Updated last year
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE benchmark ☆115 · Updated last year
- Sparse Backpropagation for Mixture-of-Expert Training ☆30 · Updated last year
- Retrieval as Attention ☆83 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated last week
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 3 months ago