libeineu / ODE-Transformer
This is the code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generation", which redesigns the Transformer architecture from an ODE perspective, using high-order ODE solvers to enhance the residual connections.
☆35 · Updated 3 years ago
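The core idea the description names (treating a residual connection as one step of an ODE solver and upgrading it to a high-order solver) can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `heun_residual_step` and `toy_sublayer` are hypothetical names, and a real Transformer would use attention/FFN sublayers on tensors rather than a linear map on a Python list.

```python
def heun_residual_step(x, f):
    """One second-order (Heun / RK2-style) residual update: treat the
    sublayer f as the derivative of a hidden-state ODE and average two
    function evaluations, instead of the single evaluation used by a
    vanilla residual connection (which corresponds to forward Euler)."""
    f1 = f(x)                                        # first evaluation: f(x)
    euler = [xi + ai for xi, ai in zip(x, f1)]       # Euler predictor: x + f(x)
    f2 = f(euler)                                    # second evaluation at the predictor
    return [xi + 0.5 * (a + b) for xi, a, b in zip(x, f1, f2)]

# Toy stand-in "sublayer": a fixed scaling of a 2-d hidden state.
def toy_sublayer(x):
    return [0.1 * x[0], 0.1 * x[1]]

x_next = heun_residual_step([1.0, 2.0], toy_sublayer)  # → [1.105, 2.21]
```

A plain residual block would return `x + f(x)` (here `[1.1, 2.2]`); the second evaluation is what makes the update second-order accurate in the ODE view.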
Alternatives and similar repositories for ODE-Transformer
Users interested in ODE-Transformer are comparing it to the libraries listed below.
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 5 years ago
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 · Updated last year
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 3 years ago
- ☆31 · Updated 2 years ago
- ☆52 · Updated 3 years ago
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated 2 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021 ☆43 · Updated 4 years ago
- Code for the PAPA paper ☆27 · Updated 3 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- [ICML 2023] Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning ☆44 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- ☆29 · Updated 3 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 3 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated 2 years ago
- Official Repository for "Modeling Hierarchical Structures with Continuous Recursive Neural Networks" (ICML 2021) ☆11 · Updated 4 years ago
- Implementation of QKVAE ☆11 · Updated 2 years ago
- [EMNLP 2022] Code for our paper “ZeroGen: Efficient Zero-shot Learning via Dataset Generation”. ☆48 · Updated 3 years ago
- ☆36 · Updated last year
- ☆99 · Updated 2 years ago
- Code for Residual Energy-Based Models for Text Generation in PyTorch. ☆26 · Updated 4 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆115 · Updated 3 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated 2 years ago
- [NeurIPS 2022] MorphTE: Injecting Morphology in Tensorized Embeddings ☆17 · Updated 3 years ago
- [ACL 2023] Code for our paper Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Pr… ☆24 · Updated last year
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 4 years ago
- [ACL 2023 Findings] What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated 2 years ago
- ☆46 · Updated 4 years ago
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year