Skytliang / Multi-Agents-Debate
MAD: The first work to explore Multi-Agent Debate with Large Language Models :D
☆345 · Updated 2 months ago
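For orientation, here is a minimal sketch of the debate pattern this line of work explores: debater agents argue over a fixed number of rounds and a judge agent produces the final answer. The `generate` callable, prompts, and role setup below are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of a multi-agent debate loop (illustrative only; not the
# actual MAD implementation). `generate` stands in for any chat-completion
# call, e.g. a thin wrapper around an LLM API.
from typing import Callable

def debate(question: str, generate: Callable[[str], str], rounds: int = 3) -> str:
    """Run a two-debater, one-judge debate and return the judge's verdict."""
    history: list[str] = []
    for r in range(rounds):
        # Affirmative side argues first, seeing the transcript so far.
        aff = generate(
            f"Question: {question}\nTranscript:\n" + "\n".join(history)
            + "\nYou are the affirmative debater. Present your argument."
        )
        history.append(f"[Round {r + 1}] Affirmative: {aff}")
        # Negative side rebuts the affirmative's latest argument.
        neg = generate(
            f"Question: {question}\nTranscript:\n" + "\n".join(history)
            + "\nYou are the negative debater. Rebut the argument above."
        )
        history.append(f"[Round {r + 1}] Negative: {neg}")
    # A judge reads the full transcript and produces the final answer.
    return generate(
        f"Question: {question}\nTranscript:\n" + "\n".join(history)
        + "\nYou are the judge. Decide which side is more convincing and "
          "state the final answer."
    )

if __name__ == "__main__":
    # Stub generator so the sketch runs without an API key.
    stub = lambda prompt: f"(model output for a {len(prompt)}-char prompt)"
    print(debate("Is 0.999... equal to 1?", stub))
```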
Alternatives and similar repositories for Multi-Agents-Debate:
Users interested in Multi-Agents-Debate are comparing it to the repositories listed below.
- ICML 2024: Improving Factuality and Reasoning in Language Models through Multiagent Debate ☆415 · Updated last year
- Codes for our paper "ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate" ☆264 · Updated 5 months ago
- This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback. ☆511 · Updated 4 months ago
- [ACL 2024] Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View ☆114 · Updated 10 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆475 · Updated 2 months ago
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning ☆215 · Updated 2 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆671 · Updated 5 months ago
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆135 · Updated 2 weeks ago
- LLM hallucination paper list ☆310 · Updated last year
- LLM Agora: debate among open-source LLMs to refine answers ☆62 · Updated last year
- This is the repository for the Tool Learning survey. ☆330 · Updated 3 weeks ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆261 · Updated 11 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆299 · Updated 7 months ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆136 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆526 · Updated 3 weeks ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆254 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆293 · Updated 10 months ago
- Generative Judge for Evaluating Alignment ☆230 · Updated last year
- Papers on LLM agents published at top conferences ☆312 · Updated last year
- Augmented LLM with self-reflection ☆117 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆271 · Updated last year
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆451 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆247 · Updated last year
- Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL23 ☆195 · Updated 9 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆296 · Updated 6 months ago
- ☆197 · Updated 11 months ago
- Official implementation of paper "Cumulative Reasoning With Large Language Models" (https://arxiv.org/abs/2308.04371) ☆291 · Updated 6 months ago
- Data and Code for Program of Thoughts (TMLR 2023) ☆263 · Updated 10 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆335 · Updated last year
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆165 · Updated this week