chroma-core / context-rot
This repository contains the toolkit for replicating results from our technical report.
☆200 · Updated 5 months ago
Alternatives and similar repositories for context-rot
Users interested in context-rot are comparing it to the libraries listed below.
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆261 · Updated last week
- Data recipes and robust infrastructure for training AI agents ☆94 · Updated this week
- ☆238 · Updated 2 months ago
- A clean, modular SDK for building AI agents with OpenHands V1. ☆491 · Updated this week
- SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? ☆259 · Updated last month
- Real-Time Detection of Hallucinated Entities in Long-Form Generation ☆278 · Updated 2 months ago
- Official Repo for CRMArena and CRMArena-Pro ☆132 · Updated this week
- Public repository containing METR's DVC pipeline for eval data analysis ☆199 · Updated last week
- Repo for "Adaptation of Agentic AI" ☆592 · Updated 2 weeks ago
- Code that accompanies the public release of the paper "Lost in Conversation" (https://arxiv.org/abs/2505.06120) ☆206 · Updated 7 months ago
- Build and benchmark deep research ☆229 · Updated last week
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆174 · Updated this week
- An alignment auditing agent capable of quickly exploring alignment hypotheses ☆874 · Updated last week
- Collection of scripts and notebooks for OpenAI's latest GPT OSS models ☆496 · Updated 5 months ago
- MCP-based Agent Deep Evaluation System ☆144 · Updated 4 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆112 · Updated 9 months ago
- Frozen-in-time version of our Paper Finder agent for reproducing evaluation results ☆225 · Updated 5 months ago
- Ranking LLMs on agentic tasks ☆211 · Updated 2 months ago
- Analysis code for the NeurIPS 2025 paper "SciArena: An Open Evaluation Platform for Foundation Models in Scientific Literature Tasks" ☆56 · Updated 6 months ago
- Training teachers with reinforcement learning to make LLMs learn how to reason for test-time scaling. ☆358 · Updated 7 months ago
- ☆80 · Updated 4 months ago
- Harbor is a framework for running agent evaluations and for creating and using RL environments. ☆542 · Updated this week
- ☆562 · Updated 7 months ago
- MCP-Universe is a comprehensive framework designed for developing, testing, and benchmarking AI agents ☆550 · Updated 3 weeks ago
- ☆112 · Updated this week
- ☆106 · Updated last year
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆427 · Updated 2 weeks ago
- Evolve your language agent with Agentic Context Engineering (ACE) ☆576 · Updated 2 weeks ago
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆625 · Updated last week
- A Text-Based Environment for Interactive Debugging ☆294 · Updated this week