microsoft / lost_in_conversation
Code that accompanies the public release of the paper Lost in Conversation (https://arxiv.org/abs/2505.06120)
☆172 · Updated 3 months ago
Alternatives and similar repositories for lost_in_conversation
Users interested in lost_in_conversation are comparing it to the libraries listed below.
- Complex Function Calling Benchmark. ☆135 · Updated 8 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆203 · Updated 3 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆93 · Updated 4 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆131 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆106 · Updated 3 months ago
- ☆99 · Updated 11 months ago
- ☆78 · Updated 2 weeks ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆99 · Updated last week
- Official Code Repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆156 · Updated 2 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆143 · Updated 11 months ago
- [EMNLP 2025] The official implementation of the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆100 · Updated last month
- ☆90 · Updated 4 months ago
- ☆80 · Updated this week
- Verifiers for LLM Reinforcement Learning ☆74 · Updated 5 months ago
- Data Synthesis for Deep Research Based on Semi-Structured Data ☆165 · Updated 2 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆173 · Updated 3 months ago
- Evaluating LLMs with fewer examples ☆161 · Updated last year
- ☆143 · Updated 6 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 11 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆282 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆249 · Updated 11 months ago
- A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning ☆254 · Updated 2 weeks ago
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 7 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆245 · Updated 5 months ago
- ☆98 · Updated last month
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆169 · Updated this week
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆84 · Updated last week
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆174 · Updated 4 months ago
- Efficient Agent Training for Computer Use ☆132 · Updated last month
- SSRL: Self-Search Reinforcement Learning ☆145 · Updated last month