feiyang-k / AutoScale
Official code repository for "AutoScale: Scale-Aware Data Mixing for Pre-Training LLMs", published as a conference paper at **COLM 2025**
★12 · Updated 2 months ago
Alternatives and similar repositories for AutoScale
Users interested in AutoScale are comparing it to the libraries listed below.
- ★13 · Updated last year
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ★25 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ★38 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ★58 · Updated last year
- The implementation of LeCo ★31 · Updated 8 months ago
- ★30 · Updated 9 months ago
- Code for the ACL 2025 publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ★33 · Updated 3 months ago
- Towards Systematic Measurement for Long Text Quality ★37 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ★61 · Updated 11 months ago
- ★15 · Updated last year
- ★58 · Updated last year
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ★43 · Updated 3 weeks ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ★29 · Updated 10 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ★16 · Updated 9 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ★62 · Updated 9 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ★51 · Updated 4 months ago
- Methods and evaluation for aligning language models temporally ★30 · Updated last year
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M…" ★29 · Updated last year
- Codebase for "Instruction Following without Instruction Tuning" ★35 · Updated last year
- ★18 · Updated last year
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ★56 · Updated 3 months ago
- Official code repository for "PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation" (EMNLP 2023) ★13 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ★16 · Updated 9 months ago
- [EMNLP 2025] Verification Engineering for RL in Instruction Following ★40 · Updated 2 weeks ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ★30 · Updated 4 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ★63 · Updated last year
- ★50 · Updated 11 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ★68 · Updated 11 months ago
- The official repo of "WebExplorer: Explore and Evolve for Training Long-Horizon Web Agents" ★74 · Updated last week
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ★30 · Updated last year