princeton-nlp / DinkyTrain
Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃
☆114 Updated 2 years ago
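Because DinkyTrain builds directly on fairseq, pre-training is launched through fairseq's usual entry points. The sketch below is a minimal example of a RoBERTa-style masked-LM launch driven from Python; it uses only stock fairseq-train options, so the dataset path is hypothetical and any DinkyTrain-specific switches (e.g. for the fused DeepSpeed kernels) are intentionally left out rather than guessed.

```python
# Minimal sketch: drive a RoBERTa-style masked-LM pre-training run through
# fairseq's standard CLI from Python. All flags below are stock fairseq-train
# options; DinkyTrain/DeepSpeed-specific switches are intentionally omitted.
import subprocess

DATA_BIN = "data-bin/wikitext-103"  # hypothetical binarized dataset path

cmd = [
    "fairseq-train", DATA_BIN,
    "--task", "masked_lm",
    "--criterion", "masked_lm",
    "--arch", "roberta_base",
    "--sample-break-mode", "complete",
    "--tokens-per-sample", "512",
    "--optimizer", "adam",
    "--adam-betas", "(0.9, 0.98)",
    "--adam-eps", "1e-6",
    "--clip-norm", "0.0",
    "--lr-scheduler", "polynomial_decay",
    "--lr", "0.0005",
    "--warmup-updates", "10000",
    "--total-num-update", "125000",
    "--max-update", "125000",
    "--dropout", "0.1",
    "--attention-dropout", "0.1",
    "--weight-decay", "0.01",
    "--batch-size", "16",
    "--update-freq", "16",
    "--log-format", "simple",
    "--log-interval", "100",
]

# Fails loudly if fairseq is not installed or the run cannot start.
subprocess.run(cmd, check=True)
```

Note that --update-freq accumulates gradients, so the effective batch size is batch-size × update-freq × number of GPUs; adjust those two flags to match your hardware.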
Alternatives and similar repositories for DinkyTrain
Users interested in DinkyTrain are comparing it to the libraries listed below.
- [EMNLP 2022] Training Language Models with Memory Augmentation https://arxiv.org/abs/2205.12674 ☆196 Updated 2 years ago
- reStructured Pre-training ☆98 Updated 2 years ago
- ☆117 Updated 3 years ago
- Code for Editing Factual Knowledge in Language Models ☆141 Updated 3 years ago
- Code for the ACL-2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆50 Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆54 Updated 4 years ago
- Code for paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 Updated 3 years ago
- Retrieval as Attention ☆83 Updated 2 years ago
- ☆54 Updated last year
- An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification" ☆131 Updated 3 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆105 Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 Updated 2 years ago
- The Multitask Long Document Benchmark ☆41 Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 Updated 2 years ago
- ☆86 Updated 2 years ago
- This project maintains a reading list for general text generation tasks ☆66 Updated 3 years ago
- TBC ☆27 Updated 2 years ago
- Distributional Generalization in NLP. A roadmap. ☆88 Updated 2 years ago
- ☆20 Updated 4 years ago
- Code for paper "Data-Efficient FineTuning" ☆28 Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 Updated 2 years ago
- Code and Models for the paper "End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering" (NeurIPS 20…) ☆109 Updated 3 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆69 Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 Updated 2 years ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆37 Updated last year
- ☆36 Updated last year
- ☆82 Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 Updated 3 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆167 Updated 3 years ago
- EMNLP'2021: Simple Entity-centric Questions Challenge Dense Retrievers https://arxiv.org/abs/2109.08535 ☆146 Updated 3 years ago