libeineu / ODE-Transformer
This is the code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generation", which redesigns the Transformer architecture from an ODE perspective, using high-order ODE solvers to strengthen the residual connections.
☆35 · Updated 3 years ago
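The paper's core idea can be sketched numerically: a vanilla residual connection y = x + F(x) is one first-order (Euler) step of the ODE dx/dt = F(x), and a higher-order solver such as second-order Runge–Kutta reuses F to build a more accurate update. A minimal sketch in plain Python, with a toy scalar F standing in for a Transformer sublayer (all names here are illustrative, not the repository's actual API):

```python
import math

# A residual block y = x + F(x) viewed as one Euler step of dx/dt = F(x),
# versus a second-order (RK2 / Heun) step. The ODE Transformer applies this
# idea to full Transformer layers; the scalar F below is just a toy.

def F(x):
    # Toy "layer": stands in for attention/FFN. With dx/dt = F(x) = x,
    # the exact solution after one unit step is x * e.
    return x

def euler_block(x, h=1.0):
    # Vanilla residual connection: first-order accurate.
    return x + h * F(x)

def rk2_block(x, h=1.0):
    # Heun's method: average the slope at the start and at the
    # Euler-predicted endpoint; second-order accurate.
    k1 = F(x)
    k2 = F(x + h * k1)
    return x + h * (k1 + k2) / 2.0

x0 = 1.0
exact = x0 * math.e                      # true solution of dx/dt = x at t = 1
print(abs(euler_block(x0) - exact))      # Euler error  ~ 0.718
print(abs(rk2_block(x0) - exact))        # RK2 error    ~ 0.218
```

For the same number of evaluations of F per step times two, the RK2 update lands noticeably closer to the exact trajectory, which is the intuition behind trading extra layer evaluations for a more accurate "residual" update.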
Alternatives and similar repositories for ODE-Transformer
Users interested in ODE-Transformer are comparing it to the repositories listed below.
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 5 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- Implementation of QKVAE ☆11 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago
- Implementation of "Variational Information Bottleneck for Effective Low-resource Fine-tuning" (ICLR 2021) ☆41 · Updated 4 years ago
- Official repository for "Modeling Hierarchical Structures with Continuous Recursive Neural Networks" (ICML 2021) ☆11 · Updated 4 years ago
- ☆51 · Updated 2 years ago
- PyTorch implementation of "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆74 · Updated 3 years ago
- [EMNLP 2022] Code for the paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation" ☆48 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Efficient Transformers with Dynamic Token Pooling ☆64 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- Implementation of the ICLR 2021 paper "Probing BERT in Hyperbolic Spaces" ☆58 · Updated 4 years ago
- Code for the PAPA paper ☆27 · Updated 3 years ago
- ☆84 · Updated 6 years ago
- Code for "Residual Energy-Based Models for Text Generation" in PyTorch ☆25 · Updated 4 years ago
- PyTorch implementation of "Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement" ☆62 · Updated 4 years ago
- No Parameters Left Behind: Sensitivity-Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 3 years ago
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) ☆54 · Updated 3 months ago
- fairseq repo with the Apollo optimizer ☆114 · Updated last year
- ☆67 · Updated last year
- Repository for the ACL 2022 paper "Mix and Match: Learning-free Controllable Text Generation using Energy Language Models" ☆45 · Updated 3 years ago
- [EMNLP 2022] BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation ☆41 · Updated 3 years ago
- Code for the paper "Improving Sequence-to-Sequence Learning via Optimal Transport" ☆68 · Updated 6 years ago
- Official code for the paper "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le…" ☆75 · Updated last year
- Dispersed Exponential Family Mixture VAE ☆28 · Updated 5 years ago