microsoft / EfficientLongSequenceModeling
☆51 · Updated 2 years ago
Alternatives and similar repositories for EfficientLongSequenceModeling:
Users interested in EfficientLongSequenceModeling are comparing it to the libraries listed below.
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Sequence Modeling ☆62 · Updated 9 months ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) ☆54 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆32 · Updated 3 years ago
- ☆49 · Updated 7 months ago
- ☆22 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated last year
- ☆18 · Updated 8 months ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- ☆30 · Updated last year
- ☆44 · Updated last year
- ☆33 · Updated last year
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆24 · Updated last year
- ☆47 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆74 · Updated last year
- ☆20 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆137 · Updated last year
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆18 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆58 · Updated last year
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆65 · Updated last year
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆23 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆97 · Updated last year
- ☆13 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆47 · Updated 4 years ago
- PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022) ☆71 · Updated 2 years ago
- ☆28 · Updated 3 months ago
- ☆43 · Updated 4 years ago