deep-spin / infinite-former
☆67 · Updated last year
Alternatives and similar repositories for infinite-former
Users interested in infinite-former are comparing it to the libraries listed below.
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- Recurrent Memory Transformer ☆154 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆66 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆83 · Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- Transformers at any scale ☆42 · Updated last year
- ☆98 · Updated 2 years ago
- ☆57 · Updated last year
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- A repository for transformer critique learning and generation ☆89 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- ☆107 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- Source code and data for The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code (Findings of ACL 2023… ☆30 · Updated 2 years ago
- The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆158 · Updated 2 years ago
- Implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆136 · Updated 2 years ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated last year
- ☆52 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 6 months ago
- SILO Language Models code repository ☆83 · Updated last year
- ☆51 · Updated last year
- Retrieval as Attention ☆82 · Updated 3 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- ☆76 · Updated last year
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago