princeton-nlp / DinkyTrain
Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃
☆114 · Updated 2 years ago
Alternatives and similar repositories for DinkyTrain
Users interested in DinkyTrain are comparing it to the libraries listed below
- ☆117 · Updated 3 years ago
- reStructured Pre-training ☆98 · Updated 2 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 · Updated 3 years ago
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) ☆197 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 4 years ago
- An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification" ☆131 · Updated 3 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆139 · Updated 3 years ago
- ☆79 · Updated 3 years ago
- Distributional Generalization in NLP. A roadmap. ☆88 · Updated 2 years ago
- ☆53 · Updated last year
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆270 · Updated 2 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Retrieval as Attention ☆84 · Updated 2 years ago
- TBC ☆27 · Updated 2 years ago
- The source code for the Cutoff data augmentation approach proposed in this paper: "A Simple but Tough-to-Beat Data Augmentation Approach … ☆63 · Updated 4 years ago
- This project maintains a reading list for general text generation tasks ☆66 · Updated 3 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- [ACL 2022] Ditch the Gold Standard: Re-evaluating Conversational Question Answering ☆45 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- Code for the paper "Data-Efficient FineTuning" ☆28 · Updated 2 years ago
- ☆99 · Updated 3 years ago
- ☆57 · Updated 3 years ago
- [NeurIPS'22 Spotlight] Data and code for our paper CoNT: Contrastive Neural Text Generation ☆154 · Updated 2 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆68 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- ☆20 · Updated 4 years ago