IAAR-Shanghai / SEAP
☆22 · Updated 7 months ago
Alternatives and similar repositories for SEAP
Users interested in SEAP are comparing it to the libraries listed below.
- JudgeLRM: Large Reasoning Models as a Judge ☆40 · Updated last month
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆82 · Updated 2 months ago
- ☆67 · Updated 4 months ago
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆53 · Updated last year
- ☆176 · Updated last month
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning". ☆19 · Updated 10 months ago
- ☆33 · Updated last month
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆33 · Updated 7 months ago
- R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning ☆69 · Updated 7 months ago
- ☆15 · Updated 7 months ago
- [NeurIPS'25 Spotlight] ARM: Adaptive Reasoning Model ☆63 · Updated 2 months ago
- ☆21 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 · Updated 10 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP25] ☆35 · Updated 4 months ago
- The demo, code and data of FollowRAG ☆75 · Updated 6 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆23 · Updated 3 months ago
- ☆55 · Updated 3 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆127 · Updated 8 months ago
- ☆35 · Updated 3 months ago
- ☆46 · Updated 3 months ago
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆25 · Updated 5 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 10 months ago
- ☆23 · Updated 7 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆142 · Updated last month
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆86 · Updated last year
- ☆21 · Updated 8 months ago
- Code for Heima ☆58 · Updated 8 months ago
- Official Code for "Learning to Reason via Mixture-of-Thought for Logical Reasoning" ☆25 · Updated last month