EleutherAI / pile_dedupe
Pile Deduplication Code
☆19 · Updated 2 years ago
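The deduplication method used by pile_dedupe is not described on this page; as a rough illustration of what corpus deduplication involves, here is a minimal, hypothetical exact-match sketch in Python. The function names and the whitespace/lowercase normalization are assumptions for the example, not the repository's actual API.

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies hash identically.
    # (Assumed normalization; real pipelines may normalize differently.)
    return " ".join(text.lower().split())

def dedupe(documents):
    """Yield each document the first time its normalized content is seen."""
    seen = set()
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc

if __name__ == "__main__":
    corpus = ["Hello world.", "hello   world.", "A different document."]
    print(list(dedupe(corpus)))  # -> ['Hello world.', 'A different document.']
```

Exact hashing like this only removes identical copies; near-duplicate detection (e.g., MinHash/LSH over shingles) is a common extension but is not shown here.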
Alternatives and similar repositories for pile_dedupe
Users interested in pile_dedupe are comparing it to the repositories listed below.
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- Code for "Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model", EMNLP Findings 20… ☆28 · Updated last year
- Retrieval as Attention ☆83 · Updated 2 years ago
- ☆53 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- Code for paper 'Data-Efficient FineTuning' ☆29 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- The instructions and demonstrations for building a formal logical reasoning capable GLM ☆53 · Updated 11 months ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 10 months ago
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆107 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- ☆39 · Updated last year
- ☆13 · Updated 3 months ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆33 · Updated last year
- SILO Language Models code repository ☆81 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆26 · Updated 11 months ago
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs (ACL 2023) ☆64 · Updated 8 months ago
- ☆21 · Updated 3 months ago
- Self-Alignment with Principle-Following Reward Models ☆163 · Updated 3 months ago
- ☆75 · Updated last year
- ☆65 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 11 months ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- ☆44 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆57 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago