EleutherAI / pile_dedupe
Pile Deduplication Code
☆16 Updated last year
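pile_dedupe targets duplicate removal in the Pile training corpus. As a point of reference, a common approach to corpus-level deduplication is near-duplicate detection with MinHash LSH; the sketch below uses the `datasketch` library and is illustrative only, not a description of pile_dedupe's actual pipeline (the function names and thresholds here are assumptions).

```python
# Illustrative near-duplicate detection with MinHash LSH.
# NOT pile_dedupe's actual pipeline; names and thresholds are assumptions.
# Requires: pip install datasketch
from datasketch import MinHash, MinHashLSH

def shingles(text, n=5):
    """Yield overlapping word n-grams ("shingles") from a document."""
    words = text.split()
    for i in range(max(len(words) - n + 1, 1)):
        yield " ".join(words[i : i + n])

def dedupe(docs, threshold=0.5, num_perm=128):
    """Return indices of documents to keep, dropping near-duplicates."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    keep = []
    for idx, doc in enumerate(docs):
        mh = MinHash(num_perm=num_perm)
        for sh in shingles(doc):
            mh.update(sh.encode("utf-8"))
        if not lsh.query(mh):          # no similar document indexed yet
            lsh.insert(str(idx), mh)   # index this document and keep it
            keep.append(idx)
    return keep

if __name__ == "__main__":
    corpus = [
        "the quick brown fox jumps over the lazy dog near the river bank",
        "the quick brown fox jumps over the lazy dog near the river bank again",
        "an entirely different document about deduplicating web text corpora",
    ]
    print(dedupe(corpus))  # likely [0, 2]: the second document is a near-duplicate
```

Exact hashing of normalized text is a simpler alternative when only verbatim duplicates need to be removed; MinHash trades some precision for robustness to small edits.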
Alternatives and similar repositories for pile_dedupe:
Users interested in pile_dedupe are comparing it to the libraries listed below.
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆30 Updated last year
- ☆23 Updated last year
- ☆47 Updated 9 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆18 Updated 5 months ago
- Provides a minimal implementation to extract FLAN datasets for further processing ☆11 Updated last year
- ☆16 Updated 10 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆74 Updated last year
- Code for the paper "Data-Efficient FineTuning" ☆29 Updated last year
- DEMix Layers for Modular Language Modeling ☆53 Updated 3 years ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 Updated 5 months ago
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study ☆43 Updated last year
- ☆33 Updated 3 years ago
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation ☆26 Updated 7 months ago
- TBC ☆26 Updated 2 years ago
- ☆55 Updated 2 years ago
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 Updated 9 months ago
- Retrieval as Attention ☆83 Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆39 Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆56 Updated 2 years ago
- Towards Systematic Measurement for Long Text Quality ☆31 Updated 4 months ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆27 Updated 2 years ago
- Momentum Decoding: Open-ended Text Generation as Graph Exploration ☆19 Updated last year
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022 Findings) (stay tuned & more will be updated) ☆22 Updated 2 years ago
- ☆27 Updated 10 months ago
- ☆43 Updated 3 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆74 Updated last year
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆13 Updated last year
- A unified benchmark for math reasoning ☆87 Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning" ☆63 Updated last year
- ☆36 Updated 5 months ago