allenai / data-efficient-finetuning
Code for the paper "Data-Efficient FineTuning"
☆29 · Updated 2 years ago
Alternatives and similar repositories for data-efficient-finetuning
Users who are interested in data-efficient-finetuning are comparing it to the libraries listed below.
- Retrieval as Attention · ☆84 · Updated 2 years ago
- Code for ACL 2023 paper: Pre-Training to Learn in Context · ☆107 · Updated last year
- On Transferability of Prompt Tuning for Natural Language Processing · ☆99 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" · ☆77 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection · ☆96 · Updated 2 years ago
- TBC · ☆27 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling · ☆53 · Updated 4 years ago
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" · ☆59 · Updated 6 months ago
- ☆38 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting · ☆33 · Updated 2 years ago
- reStructured Pre-training · ☆98 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" · ☆41 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning · ☆99 · Updated 2 years ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning · ☆54 · Updated 11 months ago
- Official implementation of the paper "MVP: Multi-task Supervised Pre-training for Natural Language Generation" · ☆73 · Updated 2 years ago
- ☆53 · Updated last year
- ☆117 · Updated 3 years ago
- The Multitask Long Document Benchmark · ☆41 · Updated 2 years ago
- The LM Contamination Index, a manually curated database of contamination evidence for LMs · ☆78 · Updated last year
- Official code of the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" · ☆77 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 · ☆114 · Updated 2 years ago
- Repo for "On Learning to Summarize with Large Language Models as References" · ☆43 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… · ☆21 · Updated last year
- Code for Editing Factual Knowledge in Language Models · ☆139 · Updated 3 years ago
- ☆35 · Updated 3 years ago
- [EMNLP 2022] Training Language Models with Memory Augmentation https://arxiv.org/abs/2205.12674 · ☆197 · Updated 2 years ago
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… · ☆21 · Updated 3 years ago
- [NAACL 2022] Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning · ☆57 · Updated last year
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings · ☆74 · Updated last year
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022, Best Paper Honorable Mention) · ☆62 · Updated 3 years ago