csinva / tree-prompt
Tree prompting: easy-to-use scikit-learn interface for improved prompting.
☆41 · Updated 2 years ago
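The pitch of tree-prompt is that prompting can be driven through the familiar scikit-learn fit/predict pattern. As a rough illustration only, the sketch below shows what a scikit-learn-style prompting wrapper can look like: the `PromptEnsembleClassifier` class, the user-supplied `llm_label_fn` callable, and the accuracy-based prompt selection are assumptions made for this sketch, not tree-prompt's actual API; see the repository for its real interface.

```python
# Hypothetical sketch of a scikit-learn-style prompting wrapper.
# The names below are illustrative assumptions, not tree-prompt's API.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin


class PromptEnsembleClassifier(BaseEstimator, ClassifierMixin):
    """Scores each candidate prompt on (X, y) during fit, keeps the best
    few, and predicts by majority vote over the kept prompts."""

    def __init__(self, prompts, llm_label_fn, n_prompts=3):
        self.prompts = prompts            # candidate prompt strings
        self.llm_label_fn = llm_label_fn  # callable: (prompt, text) -> predicted label
        self.n_prompts = n_prompts        # number of prompts to keep

    def fit(self, X, y):
        # Rank prompts by training accuracy and keep the top n_prompts.
        accs = [
            np.mean([self.llm_label_fn(p, x) == label for x, label in zip(X, y)])
            for p in self.prompts
        ]
        top = np.argsort(accs)[::-1][: self.n_prompts]
        self.selected_prompts_ = [self.prompts[i] for i in top]
        return self

    def predict(self, X):
        # Majority vote of the selected prompts on each input.
        preds = []
        for x in X:
            votes = [self.llm_label_fn(p, x) for p in self.selected_prompts_]
            preds.append(max(set(votes), key=votes.count))
        return np.array(preds)
```

Because the wrapper follows the estimator protocol, it can be dropped into standard scikit-learn tooling (e.g. `cross_val_score` or a `Pipeline`) once `llm_label_fn` is wired to an actual model call.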
Alternatives and similar repositories for tree-prompt
Users interested in tree-prompt are comparing it to the libraries listed below.
- Code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆40 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales. ☆52 · Updated 11 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆47 · Updated last year
- Unofficial implementation of Chain-of-Thought Reasoning Without Prompting ☆35 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- ☆52 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated last year
- ☆33 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆120 · Updated last week
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆126 · Updated last year
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆85 · Updated last year
- Public code repository for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated last year
- Code implementation, evaluations, documentation, links, and resources for the Min P paper ☆46 · Updated 5 months ago
- Code for Adaptive Data Optimization ☆32 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- ☆23 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆99 · Updated last year
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆42 · Updated 7 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- Replicating O1 inference-time scaling laws ☆93 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 6 months ago
- ☆64 · Updated last year
- ☆16 · Updated last year
- ☆140 · Updated last year