SALT-NLP / collaborative-gym
Framework and toolkits for building and evaluating collaborative agents that can work together with humans.
☆103 · Updated last week
Alternatives and similar repositories for collaborative-gym
Users interested in collaborative-gym are comparing it to the libraries listed below.
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆108 · Updated 3 months ago
- AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… · ☆296 · Updated this week
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks · ☆248 · Updated 5 months ago
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory · ☆159 · Updated 5 months ago
- ☆128 · Updated last year
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] · ☆146 · Updated 11 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems · ☆108 · Updated 4 months ago
- ☆116 · Updated 9 months ago
- Augmented LLM with self-reflection · ☆132 · Updated last year
- Resources for our paper: "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" · ☆132 · Updated last year
- Code for the paper Tree Search for Language Model Agents · ☆217 · Updated last year
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… · ☆330 · Updated this week
- AWM: Agent Workflow Memory · ☆335 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] · ☆178 · Updated 3 months ago
- [ICML 2025] Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search · ☆109 · Updated 4 months ago
- ☆219 · Updated 8 months ago
- ☆77 · Updated 2 months ago
- [ICLR 2025] DSBench: How Far Are Data Science Agents from Becoming Data Science Experts? · ☆78 · Updated 2 months ago
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? · ☆132 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆64 · Updated 8 months ago
- ☆122 · Updated 8 months ago
- Complex Function Calling Benchmark · ☆143 · Updated 9 months ago
- Official implementation of the paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) · ☆187 · Updated 2 months ago
- A benchmark list for the evaluation of large language models · ☆145 · Updated last month
- ☆239 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks" · ☆205 · Updated 4 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆244 · Updated 11 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents (https://www.arxiv.org/pdf/2503.019…) · ☆180 · Updated this week
- MPO: Boosting LLM Agents with Meta Plan Optimization (EMNLP 2025 Findings) · ☆73 · Updated 2 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models · ☆93 · Updated 5 months ago