dinobby / MAGDi
The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models. Paper: https://arxiv.org/abs/2402.01620
☆35 · Updated last year
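Per the paper abstract, MAGDi represents multi-agent debate as an interaction graph whose nodes are individual agents' reasoning chains across rounds (labeled correct or incorrect), with edges pointing from each response to the earlier responses it builds on, and distills this structure into a smaller student model. The following is a rough conceptual sketch of such an interaction graph only; the class and field names are illustrative assumptions and do not mirror the MAGDi codebase or its training objectives.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InteractionNode:
    agent: str        # which agent produced this reasoning chain
    round_idx: int    # debate round in which it was produced
    reasoning: str    # the chain-of-thought text
    is_correct: bool  # whether the final answer matched the gold label

@dataclass
class InteractionGraph:
    question: str
    nodes: List[InteractionNode] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)  # (src_idx, dst_idx)

    def add_round(self, responses: List[InteractionNode]) -> None:
        """Append one debate round; connect each new node to every node from
        the previous round, as in round-robin multi-agent debate."""
        prev_round = [i for i, n in enumerate(self.nodes)
                      if n.round_idx == responses[0].round_idx - 1]
        start = len(self.nodes)
        self.nodes.extend(responses)
        for j in range(start, len(self.nodes)):
            for i in prev_round:
                self.edges.append((i, j))

# Toy example: two agents, two rounds, one question.
g = InteractionGraph(question="Is 97 prime?")
g.add_round([InteractionNode("A", 0, "97 is not divisible by 2..7, so prime.", True),
             InteractionNode("B", 0, "97 = 9 * 11 - 2, so not prime.", False)])
g.add_round([InteractionNode("A", 1, "Rechecking: 97 has no divisors <= 9.", True),
             InteractionNode("B", 1, "Agree, 97 is prime.", True)])
print(len(g.nodes), "nodes,", len(g.edges), "edges")  # 4 nodes, 4 edges
```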
Alternatives and similar repositories for MAGDi
Users interested in MAGDi are comparing it to the repositories listed below
- ☆66 · Updated 3 months ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP'2024) ☆37 · Updated 6 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 5 months ago
- A testbed for agents and environments that can automatically improve models through data generation. ☆24 · Updated 4 months ago
- ☆20 · Updated 4 months ago
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 5 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- Code and Data for "MIRAI: Evaluating LLM Agents for Event Forecasting" ☆66 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆64 · Updated 2 months ago
- Resources for our paper: "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" ☆113 · Updated 8 months ago
- ☆24 · Updated 9 months ago
- Learning to Retrieve by Trying - Source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆39 · Updated 8 months ago
- The open-source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers" ☆20 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆41 · Updated 4 months ago
- Minimal implementation of the "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" paper (arXiv:2401.01335) ☆28 · Updated last year
- ☆54 · Updated 2 weeks ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- Tree prompting: easy-to-use scikit-learn interface for improved prompting. ☆37 · Updated last year
- The official repository for our paper "Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers" ☆30 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆100 · Updated last month
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 9 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆95 · Updated last month
- ☆19 · Updated 4 months ago
- ☆48 · Updated last month
- ☆47 · Updated this week