instance-wise-ordered-transformer / IOT
☆20 · Updated 4 years ago
Alternatives and similar repositories for IOT
Users interested in IOT are comparing it to the libraries listed below.
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- Code for Residual Energy-Based Models for Text Generation in PyTorch. ☆25 · Updated 4 years ago
- ☆10 · Updated 5 years ago
- ☆71 · Updated 2 years ago
- ☆50 · Updated 3 years ago
- Implementation of ICML 22 Paper: Scaling Structured Inference with Randomization ☆14 · Updated 3 years ago
- lanmt ebm ☆12 · Updated 5 years ago
- ☆28 · Updated 4 years ago
- Unofficial PyTorch implementation of "Step-unrolled Denoising Autoencoders for Text Generation" ☆24 · Updated 2 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- [ICML 2022] Latent Diffusion Energy-Based Model for Interpretable Text Modeling ☆66 · Updated 3 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- PRESTO: Progressive Pretraining Enhances Synthetic Chemistry Outcomes [EMNLP 2024] ☆28 · Updated 10 months ago
- Official code for the ICLR 2020 paper 'ARE PRE-TRAINED LANGUAGE MODELS AWARE OF PHRASES? SIMPLE BUT STRONG BASELINES FOR GRAMMAR INDUCTIO… ☆30 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- Distributional Generalization in NLP. A roadmap. ☆88 · Updated 2 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆105 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Pre-trained Language Model for Scientific Text ☆46 · Updated last year
- Code for the paper "Query-Key Normalization for Transformers" ☆48 · Updated 4 years ago
- ☆14 · Updated last year
- PyTorch codes for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated last year
- A dual learning toolkit developed by Microsoft Research ☆71 · Updated 2 years ago
- Text Diffusion Model with Encoder-Decoder Transformers for Sequence-to-Sequence Generation [NAACL 2024] ☆98 · Updated 2 years ago