DataCTE / Camel-Coder
Camel-Coder: collaborative task completion with multiple agents, featuring role-based prompts, an intervention mechanism, and thoughtful suggestions. ☆33 · Updated 2 years ago
Alternatives and similar repositories for Camel-Coder
Users interested in Camel-Coder are comparing it to the libraries listed below.
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 9 months ago
- A set of utilities for running few-shot prompting experiments on large language models ☆123 · Updated last year
- A codebase for "Language Models can Solve Computer Tasks" ☆236 · Updated last year
- EcoAssistant: using LLM assistants more affordably and accurately ☆133 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆73 · Updated 2 years ago
- Problem solving by engaging multiple AI agents in conversation with each other and the user. ☆226 · Updated last year
- ☆86 · Updated last year
- Build Hierarchical Autonomous Agents through Config. Collaborative Growth of Specialized Agents. ☆322 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- ☆20 · Updated 2 years ago
- A library for benchmarking the long-term memory and continual-learning capabilities of LLM-based agents. With all the tests and code you… ☆79 · Updated 9 months ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆116 · Updated 2 years ago
- Track the progress of LLM context utilisation ☆54 · Updated 5 months ago
- ☆128 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆99 · Updated last year
- ☆134 · Updated last year
- Gentopia Agent Zoo and Agent Benchmark ☆30 · Updated 2 years ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆246 · Updated 7 months ago
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆206 · Updated 2 years ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆224 · Updated 3 weeks ago
- [NeurIPS 2023] PyTorch code for "Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind" ☆66 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆115 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆201 · Updated 2 years ago
- Collection of Tree of Thoughts prompting techniques I've found useful to start with, then stylize, then iterate ☆92 · Updated last year
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆99 · Updated last year
- ☆186 · Updated 8 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Official implementation of InstructZero, the first framework to optimize bad prompts of ChatGPT (API LLMs) and finally obtain good prompts… ☆197 · Updated last year