facebookresearch / SemDeDup
Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically similar, but not exactly identical).
☆130 · Updated last year
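For orientation, here is a minimal sketch of the recipe the paper describes: embed every example, cluster the embeddings with k-means, then within each cluster keep only one member of any pair whose cosine similarity exceeds a threshold. This is an illustrative sketch, not the repository's actual code; the function name `semdedup` and the parameters `n_clusters` and `eps` are assumptions, and the embeddings are presumed to be precomputed.

```python
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embeddings: np.ndarray, n_clusters: int = 10, eps: float = 0.95) -> np.ndarray:
    """Return indices of examples to keep after semantic deduplication.

    Illustrative sketch of the SemDeDup recipe, not the repo's API:
    cluster normalized embeddings, then within each cluster greedily
    drop later members whose cosine similarity to a kept member > eps.
    """
    # Normalize rows so dot products are cosine similarities.
    embs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embs)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = embs[idx] @ embs[idx].T  # pairwise cosine similarities within the cluster
        removed = np.zeros(len(idx), dtype=bool)
        for i in range(len(idx)):
            if removed[i]:
                continue
            keep.append(idx[i])
            # Mark later cluster members that are near-duplicates of example i.
            removed |= (np.arange(len(idx)) > i) & (sims[i] > eps)
    return np.array(sorted(keep))

# Example on synthetic data: dedupe 1,000 random 64-d embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64)).astype(np.float32)
kept = semdedup(X, n_clusters=20, eps=0.95)
print(f"kept {len(kept)} of {len(X)} examples")
```

Clustering first means pairwise similarity is only ever computed within a cluster, which is what keeps the approach tractable at web scale: the within-cluster pass above is quadratic in cluster size, but each cluster is small relative to the full dataset.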
Alternatives and similar repositories for SemDeDup:
Users interested in SemDeDup are comparing it to the libraries listed below.
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆158 · Updated 8 months ago
- DSIR large-scale data selection framework for language model training ☆242 · Updated 11 months ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆139 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆156 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆111 · Updated 3 weeks ago
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents ☆197 · Updated 6 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 9 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆102 · Updated 3 weeks ago
- ☆253 · Updated last year
- ☆96 · Updated 5 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 5 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆144 · Updated 6 months ago
- ☆104 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆313 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆163 · Updated last week
- Code for the ACL 2023 paper "Pre-Training to Learn in Context" ☆108 · Updated 7 months ago
- ☆73 · Updated 10 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- MEND: Fast Model Editing at Scale ☆242 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆247 · Updated last year
- M4 experiment logbook ☆57 · Updated last year
- ☆47 · Updated 11 months ago
- Contrastive decoding ☆195 · Updated 2 years ago
- ☆142 · Updated 10 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆129 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆64 · Updated 4 months ago
- Reproduction of "RLCD Reinforcement Learning from Contrast Distillation for Language Model Alignment☆66Updated last year