dinobby / MAGDi
The code implementation of "MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models". Paper: https://arxiv.org/abs/2402.01620
☆37 · Updated last year
Alternatives and similar repositories for MAGDi
Users interested in MAGDi are comparing it to the repositories listed below:
- ☆67 · Updated 6 months ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 9 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆107 · Updated 2 months ago
- [NeurIPS 2023] PyTorch code for Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind ☆66 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 8 months ago
- Tree prompting: easy-to-use scikit-learn interface for improved prompting ☆40 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 9 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆99 · Updated 2 years ago
- ☆23 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆121 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆108 · Updated 4 months ago
- Verifiers for LLM Reinforcement Learning ☆76 · Updated 6 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Code implementation, evaluations, documentation, links, and resources for the Min P paper ☆42 · Updated 2 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- ☆27 · Updated 9 months ago
- ☆86 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Code and data for "MIRAI: Evaluating LLM Agents for Event Forecasting" ☆78 · Updated last year
- This is the implementation for the paper "Large Language Model Cascades with Mixture of Thought Representations for Cost-Efficient Rea… ☆27 · Updated last year
- [ACL 2024] "Large Language Models for Automated Open-domain Scientific Hypotheses Discovery". It has also received the best poster award … ☆42 · Updated 11 months ago
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated 2 years ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆82 · Updated last year
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆27 · Updated 10 months ago
- Unofficial implementation of Chain-of-Thought Reasoning Without Prompting ☆33 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models ☆93 · Updated 5 months ago