Lauorie / DFT
Reproduced the DFT method without using Verl. https://arxiv.org/abs/2508.05629
☆21 · Updated 3 months ago
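The DFT method referenced above (arXiv:2508.05629, "Dynamic Fine-Tuning") rectifies the standard SFT cross-entropy by reweighting each token's loss with the model's own detached probability of that token. A minimal PyTorch sketch of that loss under this reading; the function and argument names are illustrative and not taken from the repository:

```python
import torch
import torch.nn.functional as F

def dft_loss(logits: torch.Tensor, targets: torch.Tensor,
             ignore_index: int = -100) -> torch.Tensor:
    """Token-level cross-entropy reweighted by the model's own detached
    probability of each target token (a sketch of DFT, arXiv:2508.05629)."""
    log_probs = F.log_softmax(logits, dim=-1)            # (B, T, V)
    mask = (targets != ignore_index)
    safe_targets = targets.clamp(min=0)                  # avoid gather on ignore_index
    tok_logp = log_probs.gather(-1, safe_targets.unsqueeze(-1)).squeeze(-1)  # (B, T)
    weight = tok_logp.detach().exp()                     # sg(p_theta(y_t | y_<t, x))
    return -(weight * tok_logp * mask).sum() / mask.sum().clamp(min=1)
```

Because the weight is the (detached) token probability, each term is bounded by the plain cross-entropy term, so the DFT loss never exceeds the ordinary SFT loss on the same batch.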
Alternatives and similar repositories for DFT
Users interested in DFT are comparing it to the repositories listed below.
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆121 · Updated 8 months ago
- ☆29 · Updated last year
- ☆93 · Updated 8 months ago
- Agentic Learning Powered by AWorld ☆86 · Updated last week
- Copies the MLP of Llama 3 eight times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8B Mo… ☆27 · Updated last year
- ☆254 · Updated last week
- ☆36 · Updated last year
- ☆54 · Updated last year
- ☆53 · Updated last year
- A toolkit on knowledge distillation for large language models ☆266 · Updated this week
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆32 · Updated last year
- [ACL 2025] An official PyTorch implementation of the paper: Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement ☆40 · Updated 8 months ago
- ☆117 · Updated 8 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆139 · Updated 7 months ago
- [ICML 2025] The official implementation of "C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Gene… ☆41 · Updated 9 months ago
- Our 2nd-gen LMM ☆34 · Updated last year
- ☆59 · Updated 6 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning ☆164 · Updated 4 months ago
- FuseAI Project ☆87 · Updated last year
- [EMNLP 2025] Code for the paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning" ☆66 · Updated 9 months ago
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆63 · Updated last year
- ☆51 · Updated last year
- A simple MLLM that surpassed QwenVL-Max with open-source data only, built on a 14B LLM ☆38 · Updated last year
- The code and data of We-Math, accepted to the ACL 2025 main conference ☆134 · Updated last month
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT samples, 200,000 English multi-turn SFT samples, and … ☆18 · Updated last year
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆172 · Updated 3 months ago
- ☆75 · Updated last year
- ☆96 · Updated last year
- ☆185 · Updated last year
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆53 · Updated last year
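One entry in the list above describes upcycling a dense model into a Mixture-of-Experts: clone the MLP eight times as experts, attach a randomly initialized router, and add a load-balancing loss. A minimal sketch of that recipe, assuming a token-major input and a Switch-style auxiliary loss; all class and argument names are illustrative and not taken from the listed repository:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpcycledMoE(nn.Module):
    """Clone a dense MLP as n_experts identical experts, route tokens with a
    randomly initialized top-k router, and return a load-balancing loss."""

    def __init__(self, mlp: nn.Module, hidden_size: int,
                 n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(copy.deepcopy(mlp) for _ in range(n_experts))
        self.router = nn.Linear(hidden_size, n_experts)  # random init, per the recipe
        self.n_experts, self.top_k = n_experts, top_k

    def forward(self, x: torch.Tensor):
        # x: (tokens, hidden)
        probs = self.router(x).softmax(dim=-1)           # (tokens, E)
        topv, topi = probs.topk(self.top_k, dim=-1)      # (tokens, k)
        topv = topv / topv.sum(dim=-1, keepdim=True)     # renormalize gate weights
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = (topi == e)                            # tokens routed to expert e
            hit = sel.any(dim=-1)
            if hit.any():
                gate = (topv * sel).sum(dim=-1)[hit].unsqueeze(-1)
                out[hit] += gate * expert(x[hit])
        # Switch-style balance loss: n_experts * sum_e (assignment frac * mean prob)
        frac = F.one_hot(topi, self.n_experts).sum(dim=1).float().mean(dim=0)
        aux = self.n_experts * (frac * probs.mean(dim=0)).sum()
        return out, aux
```

Since the experts start as exact copies, only the random router breaks symmetry; the auxiliary loss is what pushes the router toward spreading tokens across experts during training.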