google-research / adapter-bert
☆501 · Updated 2 years ago
Alternatives and similar repositories for adapter-bert
Users who are interested in adapter-bert are comparing it to the repositories listed below.
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models (https://arxiv.org/abs/2012.15723) ☆729 · Updated 3 years ago
- ☆399 · Updated 4 years ago
- Code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned" ☆317 · Updated 4 years ago
- ☆290 · Updated 3 years ago
- Code associated with the ACL 2020 paper "Don't Stop Pretraining" ☆538 · Updated 4 years ago
- MPNet: Masked and Permuted Pre-training for Language Understanding (https://arxiv.org/pdf/2004.09297.pdf) ☆296 · Updated 4 years ago
- A research project for natural language generation, containing the official implementations by the MSRA NLC team. ☆739 · Updated last year
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings" ☆297 · Updated 3 years ago
- [NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation ☆476 · Updated last year
- Optimus: the first large-scale pre-trained VAE language model ☆391 · Updated 2 years ago
- Interpretable Evaluation for AI Systems ☆365 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- ICML 2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework ☆255 · Updated last year
- A novel embedding training algorithm that leverages ANN search and achieves state-of-the-art retrieval on TREC DL 2019 and OpenQA benchmarks ☆380 · Updated 2 years ago
- Prefix-Tuning: Optimizing Continuous Prompts for Generation ☆959 · Updated last year
- ☆351 · Updated 4 years ago
- ☆163 · Updated 5 months ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis…