facebookresearch / SemDeDup
Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs that are semantically similar but not exactly identical).
★134 · Updated last year
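The core recipe is: embed every example, cluster the embeddings with k-means, and, within each cluster, remove all but one member of any pair whose cosine similarity exceeds a threshold. Below is a minimal sketch of that recipe, not the repo's implementation: the embeddings are assumed precomputed, the function name `semdedup`, the cluster count `k`, and the threshold `eps` are illustrative placeholders, and a greedy keep-first rule stands in for the paper's centroid-distance criterion for choosing which duplicate to keep.

```python
# Minimal SemDeDup-style sketch; illustrative only, not the repo's code.
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embeddings: np.ndarray, k: int = 10, eps: float = 0.95) -> np.ndarray:
    """Return indices of examples kept after removing semantic duplicates."""
    # Normalize rows so dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Cluster first so similarity is only computed within clusters, not globally.
    labels = KMeans(n_clusters=k, n_init="auto", random_state=0).fit_predict(emb)

    keep = np.ones(len(emb), dtype=bool)
    for c in range(k):
        idx = np.where(labels == c)[0]
        sims = emb[idx] @ emb[idx].T      # pairwise cosine similarities
        np.fill_diagonal(sims, -1.0)      # ignore self-similarity
        for i in range(len(idx)):
            if not keep[idx[i]]:
                continue
            # Greedily keep the first item and drop later near-duplicates;
            # the paper instead keeps the example farthest from the centroid.
            dups = np.where(sims[i] > eps)[0]
            keep[idx[dups[dups > i]]] = False
    return np.where(keep)[0]
```

For example, `semdedup(embs, k=100, eps=0.95)` on an (N, d) embedding matrix returns indices of a deduplicated subset; lowering `eps` removes looser, less-similar duplicates.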
Alternatives and similar repositories for SemDeDup:
Users interested in SemDeDup are comparing it to the libraries listed below.
- [ICML 2024] Selecting High-Quality Data for Training Language Models ★168 · Updated 10 months ago
- DSIR: large-scale data selection framework for language model training ★246 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ★129 · Updated 2 months ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language Models ★140 · Updated 2 years ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ★154 · Updated 10 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ★238 · Updated last year
- Self-Alignment with Principle-Following Reward Models ★160 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ★76 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ★105 · Updated 2 months ago
- Contrastive decoding ★199 · Updated 2 years ago
- PyTorch implementation of DoReMi, a method for optimizing data mixture weights in language modeling datasets ★321 · Updated last year
- Code for the ACL 2023 paper "Pre-Training to Learn in Context" ★108 · Updated 8 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ★130 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ★132 · Updated 7 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ★175 · Updated last month
- [ACL 2024] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ★147 · Updated 7 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ★139 · Updated 5 months ago
- The HELMET Benchmark ★135 · Updated last week
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ★303 · Updated 7 months ago
- Unofficial implementation of AlpaGasus ★90 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ★248 · Updated last year
- Code used for the creation of OBELICS, an open, massive, and curated collection of interleaved image-text web documents, containing 141M d… ★200 · Updated 7 months ago
- Princeton NLP's pre-training library, based on fairseq with DeepSpeed kernel integration ★114 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ★459 · Updated last year