microsoft / eureka-ml-insights
A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings.
☆173 · Updated last week
Alternatives and similar repositories for eureka-ml-insights
Users interested in eureka-ml-insights are comparing it to the libraries listed below.
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆225 · Updated last week
- Source code for the collaborative reasoner research project at Meta FAIR. ☆110 · Updated 7 months ago
- A method for steering LLMs to better follow instructions ☆64 · Updated 4 months ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 9 months ago
- Source code of "How to Correctly do Semantic Backpropagation on Language-based Agentic Systems" 🤖 ☆76 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆220 · Updated last month
- ☆146 · Updated last year
- Code for ExploreToM ☆88 · Updated 5 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆117 · Updated last month
- Official repo for CRMArena and CRMArena-Pro ☆126 · Updated last month
- ☆79 · Updated 2 months ago
- ☆92 · Updated last week
- ☆87 · Updated this week
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Code that accompanies the public release of the paper Lost in Conversation (https://arxiv.org/abs/2505.06120) ☆189 · Updated 5 months ago
- ☆266 · Updated 5 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆278 · Updated last month
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆26 · Updated 3 weeks ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆151 · Updated last year
- Analysis code for the NeurIPS 2025 paper "SciArena: An Open Evaluation Platform for Foundation Models in Scientific Literature Tasks" ☆55 · Updated 4 months ago
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- Code for training & evaluating Contextual Document Embedding models ☆201 · Updated 6 months ago
- ☆43 · Updated last year
- ☆36 · Updated 3 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆265 · Updated this week
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 4 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
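One technique named in the list above is test-time alignment via Markov chain Monte Carlo (as described for QAlign). As a rough illustration of the general idea only — not QAlign's actual implementation — here is a minimal Metropolis-style sketch: random-walk over candidate responses, accepting a proposal with probability min(1, exp((r' − r)/β)) under a reward function. The candidate pool and reward here are toy placeholders.

```python
import math
import random

random.seed(0)

# Toy stand-ins for a model's samples and a reward model's scores.
# Both are hypothetical placeholders for illustration.
CANDIDATES = [
    "short answer",
    "a longer, more detailed answer",
    "detailed answer with cited sources",
]

def sample_response():
    """Stand-in for drawing a candidate response from a language model."""
    return random.choice(CANDIDATES)

def reward(response):
    """Hypothetical reward model: here, simply prefer longer responses."""
    return len(response)

def mcmc_align(steps=200, beta=5.0):
    """Metropolis-style test-time search over responses.

    Accept a proposed response with probability min(1, exp((r' - r) / beta)),
    so higher-reward proposals are always accepted and lower-reward ones
    only occasionally, controlled by the temperature beta.
    """
    current = sample_response()
    r_cur = reward(current)
    for _ in range(steps):
        proposal = sample_response()
        r_prop = reward(proposal)
        if random.random() < min(1.0, math.exp((r_prop - r_cur) / beta)):
            current, r_cur = proposal, r_prop
    return current

print(mcmc_align())
```

Lower values of `beta` make the walk greedier (worse proposals are almost never accepted); higher values keep more exploration. In a real system the proposal and reward would come from a language model and a trained reward model rather than these toys.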