microsoft / CoML
Interactive coding assistant for data scientists and machine learning developers, empowered by large language models.
☆92 · Updated 6 months ago
Alternatives and similar repositories for CoML:
Users interested in CoML are comparing it to the libraries listed below.
- 🤝 The code for "Can Large Language Model Agents Simulate Human Trust Behaviors?" ☆77 · Updated 2 weeks ago
- DSBench: How Far Are Data Science Agents from Becoming Data Science Experts? ☆50 · Updated 2 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆77 · Updated 7 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆96 · Updated last year
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆182 · Updated last week
- Code for "MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents" https://www.arxiv.org/pdf/2503.01935 ☆96 · Updated last month
- Resources for our paper: "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" ☆93 · Updated 6 months ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆124 · Updated this week
- ☆39 · Updated 2 months ago
- ☆62 · Updated 3 weeks ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆117 · Updated 10 months ago
- Augmented LLM with self-reflection ☆119 · Updated last year
- Lean implementation of various multi-agent LLM methods, including Iteration of Thought (IoT) ☆108 · Updated 2 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆170 · Updated last month
- Source code for our paper: "SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals". ☆66 · Updated 9 months ago
- A benchmark list for evaluating large language models. ☆99 · Updated last month
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆84 · Updated last month
- ☆35 · Updated 9 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆86 · Updated last month
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆69 · Updated 5 months ago
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆166 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆104 · Updated 6 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆72 · Updated 2 weeks ago
- Dedicated to building industrial foundation models for universal data intelligence across industries. ☆52 · Updated 8 months ago
- ☆47 · Updated 4 months ago
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆95 · Updated 6 months ago
- ☆85 · Updated 2 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆86 · Updated 2 weeks ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆308 · Updated 11 months ago
- ☆40 · Updated 9 months ago