InternLM / Condor
[ACL 2025] The official PyTorch implementation of the paper: Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement
☆39 · Updated 8 months ago
Alternatives and similar repositories for Condor
Users who are interested in Condor are comparing it to the repositories listed below
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆32 · Updated last year
- ☆54 · Updated last year
- ☆96 · Updated last year
- Unleashing the Power of Cognitive Dynamics on Large Language Models ☆63 · Updated last year
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆86 · Updated 2 years ago
- [ICML 2025] The official implementation of "C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Gene… ☆41 · Updated 9 months ago
- ☆87 · Updated 5 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆139 · Updated 7 months ago
- The official implementation of "LevelRAG: Enhancing Retrieval-Augmented Generation with Multi-hop Logic Planning over Rewriting Augmented… ☆49 · Updated 9 months ago
- ☆36 · Updated last year
- ☆93 · Updated 8 months ago
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆50 · Updated last year
- The code for the paper: Decoupled Planning and Execution: A Hierarchical Reasoning Framework for Deep Search ☆63 · Updated 7 months ago
- ☆58 · Updated last year
- Adapt an LLM to a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN. ☆84 · Updated 3 months ago
- ☆104 · Updated last year
- Official code implementation for the ACL 2025 paper: 'CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis' ☆32 · Updated 8 months ago
- Code and Data for Our NeurIPS 2024 paper "AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback" ☆34 · Updated last year
- ☆34 · Updated 3 months ago
- ☆117 · Updated 8 months ago
- A toolkit for knowledge distillation of large language models ☆266 · Updated this week
- ☆51 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- The demo, code and data of FollowRAG ☆75 · Updated 7 months ago
- ☆111 · Updated 7 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training improves … ☆37 · Updated 8 months ago
- ☆39 · Updated 6 months ago
- ☆62 · Updated last year
- An interactive thinking and deep reasoning model. It provides a cognitive reasoning paradigm for complex multi-hop problems. ☆78 · Updated 2 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆40 · Updated 2 years ago