facebookresearch / SemDeDup
Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (pairs of data points that are semantically similar but not exactly identical).
☆112 · Updated last year
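The description above can be sketched in code. This is a minimal illustration of the core idea — embed each item, then drop any item whose embedding is too close (by cosine similarity) to one already kept. It assumes embeddings were produced beforehand (e.g. by a pretrained encoder); the 0.95 threshold and the brute-force O(n²) scan are illustrative assumptions, not SemDeDup's actual settings (the paper clusters embeddings with k-means and deduplicates within clusters to scale to large datasets).

```python
import numpy as np

def semantic_dedup(embeddings: np.ndarray, threshold: float = 0.95) -> list[int]:
    """Return indices of items to keep, dropping semantic near-duplicates.

    embeddings: (n, d) array, one row per item (assumed precomputed).
    threshold: cosine-similarity cutoff above which two items count
               as semantic duplicates (illustrative value).
    """
    # L2-normalize rows so a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    keep: list[int] = []
    for i, vec in enumerate(normed):
        # Keep item i only if it is not a near-duplicate of anything kept so far.
        if all(float(vec @ normed[j]) < threshold for j in keep):
            keep.append(i)
    return keep
```

For example, with three 2-d embeddings where the first two nearly coincide, `semantic_dedup` keeps the first and third and drops the second as a duplicate of the first.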
Related projects
Alternatives and complementary repositories for SemDeDup:
- [ICML 2024] Selecting High-Quality Data for Training Language Models (☆141, updated 4 months ago)
- DSIR: large-scale data selection framework for language model training (☆227, updated 7 months ago)
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" (☆91, updated 4 months ago)
- 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (☆87, updated last month)
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models (☆72, updated 8 months ago)
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" (☆64, updated last year)
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings (☆143, updated 4 months ago)
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets (☆304, updated 10 months ago)
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language Models (☆133, updated 2 years ago)
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" (☆125, updated last year)
- All available datasets for instruction tuning of large language models (☆236, updated 11 months ago)
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" (☆156, updated 3 months ago)
- Contrastive decoding (☆178, updated last year)
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning (☆123, updated 2 months ago)
- A Survey on Data Selection for Language Models (☆178, updated 3 weeks ago)
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) (☆134, updated last month)
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning (☆212, updated last year)
- Self-Alignment with Principle-Following Reward Models (☆148, updated 8 months ago)
- AI logging for interpretability and explainability 🔬 (☆88, updated 5 months ago)
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] (☆127, updated last month)
- Code used for the creation of OBELICS, an open, massive, and curated collection of interleaved image-text web documents, containing 141M d… (☆188, updated 2 months ago)
- MEND: Fast Model Editing at Scale (☆234, updated last year)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" (☆113, updated last week)
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" (☆216, updated 2 months ago)
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) (☆126, updated last year)