☆327 · Updated Jul 25, 2024
Alternatives and similar repositories for AutoIF
Users interested in AutoIF are comparing it to the repositories listed below.
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… (☆53, updated Jun 24, 2024)
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) (☆102, updated Feb 20, 2025)
- The code and data of DPA-RAG, accepted to the WWW 2025 main conference (☆64, updated Oct 23, 2025)
- Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models (☆89, updated Apr 4, 2024)
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (☆119, updated Jun 12, 2025)
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… (☆416, updated Jun 25, 2025)
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … (☆833, updated Mar 17, 2025)
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning (☆188, updated Jun 25, 2025)
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] (☆591, updated Dec 9, 2024)
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning (☆285, updated Aug 20, 2023)
- (no description) (☆30, updated Dec 27, 2024)
- The demo, code, and data of FollowRAG (☆76, updated Jun 30, 2025)
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP '24) (☆27, updated Oct 3, 2025)
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" (☆82, updated Jun 19, 2024)
- Recipes to train reward models for RLHF (☆1,521, updated Apr 24, 2025)
- (no description) (☆46, updated Jun 11, 2025)
- (no description) (☆35, updated Sep 14, 2024)
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning (☆516, updated Oct 20, 2024)
- (no description) (☆25, updated Dec 13, 2024)
- The code and data of We-Math, accepted to the ACL 2025 main conference (☆134, updated Dec 11, 2025)
- (no description) (☆14, updated Dec 18, 2024)
- (no description) (☆16, updated Jul 23, 2024)
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] (☆149, updated Oct 27, 2024)
- A series of technical reports on Slow Thinking with LLMs (☆761, updated Aug 13, 2025)
- (no description) (☆36, updated Jul 7, 2025)
- [COLM 2025] Official code for "When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoni… (☆15, updated Oct 31, 2025)
- Official repo for Open-Reasoner-Zero (☆2,086, updated Jun 2, 2025)
- (no description) (☆97, updated Nov 6, 2024)
- Reformatted Alignment (☆111, updated Sep 23, 2024)
- Source code for Self-Evaluation Guided MCTS for online DPO (☆329, updated Jan 29, 2026)
- (no description) (☆968, updated Jan 23, 2025)
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆946, updated Feb 16, 2025)
- The code and data for the paper JiuZhang3.0 (☆49, updated May 26, 2024)
- A collection of papers on scalable automated alignment (☆93, updated Oct 22, 2024)
- Implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee… (☆38, updated Jul 25, 2024)
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) (☆186, updated Feb 17, 2025)
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" (☆392, updated Jan 19, 2025)
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" (☆1,497, updated Feb 19, 2025)
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, Ray, async RL) (☆9,191, updated this week)