TsinghuaC3I / Intuitive-Fine-Tuning
[ACL 2025, Main Conference] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process
☆28 · Updated 10 months ago
Alternatives and similar repositories for Intuitive-Fine-Tuning
Users who are interested in Intuitive-Fine-Tuning are comparing it to the repositories listed below.
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- ☆100 · Updated 8 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆55 · Updated 10 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 7 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated 2 weeks ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆80 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 11 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆78 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated 5 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆138 · Updated 3 months ago
- The code and data for the paper JiuZhang3.0 ☆45 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆41 · Updated 2 weeks ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models" ☆48 · Updated 11 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆175 · Updated 11 months ago
- ☆83 · Updated last month
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆54 · Updated 6 months ago
- ☆18 · Updated 6 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆186 · Updated 2 months ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆38 · Updated 8 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆97 · Updated this week
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆61 · Updated 5 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆157 · Updated 8 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆34 · Updated 9 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆106 · Updated last month
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆38 · Updated last year
- Feeling confused about super alignment? Here is a reading list ☆42 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆75 · Updated 6 months ago
- Towards Systematic Measurement for Long Text Quality ☆35 · Updated 9 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆53 · Updated 6 months ago