Chen-GX / C-3PO
[ICML2025] The official implementation of "C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Generation"
☆41 · Updated 7 months ago
Alternatives and similar repositories for C-3PO
Users interested in C-3PO are comparing it to the repositories listed below.
- ☆111 · Updated 6 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆80 · Updated last month
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆163 · Updated 3 months ago
- 🔧 Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆297 · Updated 2 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆169 · Updated last month
- ☆175 · Updated 3 weeks ago
- ☆57 · Updated 5 months ago
- [ACL 2025] An official PyTorch implementation of the paper: Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement ☆38 · Updated 7 months ago
- Repository for "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" ☆82 · Updated this week
- ☆39 · Updated 5 months ago
- Official code implementation for the ACL 2025 paper: 'CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis' ☆32 · Updated 7 months ago
- ☆41 · Updated 4 months ago
- Scaling Agentic Reinforcement Learning with a Multi-Turn, Multi-Task Framework ☆159 · Updated last week
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 5 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆133 · Updated 5 months ago
- Adapt an LLM to a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs in the FFN. ☆73 · Updated 2 months ago
- ☆45 · Updated 4 months ago
- ☆64 · Updated 4 months ago
- ☆168 · Updated 2 months ago
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆92 · Updated last month
- Test-time preference optimization (ICML 2025). ☆173 · Updated 7 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆156 · Updated 6 months ago
- ☆34 · Updated 2 months ago
- ☆95 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆143 · Updated last month
- ☆54 · Updated last year
- The demo, code and data of FollowRAG ☆75 · Updated 5 months ago
- ☆87 · Updated 4 months ago
- ☆25 · Updated last year
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆154 · Updated last year