xiatingyu / SFT-DataSelection-at-scale
⭐30 · Updated 5 months ago
Alternatives and similar repositories for SFT-DataSelection-at-scale
Users interested in SFT-DataSelection-at-scale are comparing it to the repositories listed below.
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" · ⭐124 · Updated 8 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) · ⭐151 · Updated 4 months ago
- Model merging is a highly efficient approach for long-to-short reasoning · ⭐71 · Updated last month
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… · ⭐56 · Updated 8 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning · ⭐162 · Updated 3 weeks ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs · ⭐63 · Updated 8 months ago
- ⭐102 · Updated 9 months ago
- Code implementation of synthetic continued pretraining · ⭐117 · Updated 6 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" · ⭐80 · Updated 6 months ago
- ⭐15 · Updated 7 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings · ⭐155 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models · ⭐178 · Updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … · ⭐35 · Updated 6 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" · ⭐27 · Updated last month
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models · ⭐62 · Updated 7 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? · ⭐81 · Updated last year
- ⭐205 · Updated 4 months ago
- ⭐122 · Updated last month
- ⭐65 · Updated 3 months ago
- ⭐143 · Updated 11 months ago
- Code for merging Large Language Models · ⭐32 · Updated 11 months ago
- [NeurIPS 2024] Official code of β-DPO: Direct Preference Optimization with Dynamic β · ⭐45 · Updated 8 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs · ⭐164 · Updated 2 weeks ago
- ⭐84 · Updated last year
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process · ⭐28 · Updated 11 months ago
- A Sober Look at Language Model Reasoning · ⭐75 · Updated 3 weeks ago
- An official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… · ⭐33 · Updated 9 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" · ⭐61 · Updated last year
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" · ⭐74 · Updated this week
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style · ⭐55 · Updated 2 weeks ago