allenai / data-efficient-finetuning
Code for the paper 'Data-Efficient FineTuning'
☆28 · Updated 2 years ago
Alternatives and similar repositories for data-efficient-finetuning
Users interested in data-efficient-finetuning are comparing it to the libraries listed below.
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- Retrieval as Attention ☆82 · Updated 3 years ago
- TBC ☆28 · Updated 3 years ago
- reStructured Pre-training ☆98 · Updated 3 years ago
- ☆117 · Updated 3 years ago
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆100 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 3 years ago
- Repo for "On Learning to Summarize with Large Language Models as References" ☆43 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆97 · Updated 2 years ago
- This project maintains a reading list for general text generation tasks ☆66 · Updated 4 years ago
- ☆39 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆81 · Updated 3 years ago
- ☆55 · Updated last year
- ☆46 · Updated last year
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆106 · Updated last year
- This is the official implementation of the paper: "Contrastive Learning of Sentence Embeddings from Scratch" ☆40 · Updated 2 years ago
- Code for "Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model", EMNLP Findings 20… ☆28 · Updated 2 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆61 · Updated 11 months ago
- The instructions and demonstrations for building a formal logical reasoning capable GLM ☆55 · Updated last year
- Code for Editing Factual Knowledge in Language Models ☆142 · Updated 3 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆70 · Updated 3 years ago
- The Multitask Long Document Benchmark ☆42 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Code associated with the paper: "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- [EMNLP 2022] Code and data for "Controllable Dialogue Simulation with In-Context Learning" ☆35 · Updated 2 years ago
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation". ☆48 · Updated 3 years ago