allenai / data-efficient-finetuning
Code for the paper "Data-Efficient FineTuning"
☆28 · Updated 2 years ago
Alternatives and similar repositories for data-efficient-finetuning
Users that are interested in data-efficient-finetuning are comparing it to the libraries listed below
- Retrieval as Attention ☆82 · Updated 3 years ago
- ☆116 · Updated 3 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- TBC ☆28 · Updated 3 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" ☆83 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆116 · Updated 3 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆100 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆98 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆142 · Updated 4 years ago
- Code for the ACL 2023 paper "Pre-Training to Learn in Context" ☆106 · Updated last year
- Repo for "On Learning to Summarize with Large Language Models as References"☆43Updated 2 years ago
- Code and data for paper "Context-faithful Prompting for Large Language Models".☆42Updated 2 years ago
- Interpretable unified language safety checking with large language models☆31Updated 2 years ago
- The Multitask Long Document Benchmark☆42Updated 3 years ago
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention)☆62Updated 3 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding☆70Updated 3 years ago
- reStructured Pre-training☆99Updated 3 years ago
- [NAACL 2022] Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning.☆57Updated last year
- The LM Contamination Index is a manually created database of contamination evidence for LMs ☆82 · Updated last year
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) ☆195 · Updated 2 years ago
- 🦮 Code and pretrained models for the Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie… ☆49 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- Detect hallucinated tokens for conditional sequence generation ☆64 · Updated 3 years ago
- ☆55 · Updated last year
- This project maintains a reading list for general text generation tasks ☆66 · Updated 4 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- Code for the paper "InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning" ☆100 · Updated 2 years ago