lechmazur / pgg_bench
Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies among Large Language Models (LLMs) in a resource-sharing economic scenario. Our experiment extends the classic PGG with a punishment phase, allowing players to penalize free-riders or retaliate against others.
☆39 · Updated 9 months ago
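The contribute-and-punish mechanics described above can be sketched in a few lines of Python. The endowment, pool multiplier, and 1:3 punishment cost-to-impact ratio below are illustrative assumptions, not the benchmark's actual parameters:

```python
def pgg_round(contributions, punishments, endowment=10.0, multiplier=2.0,
              punish_cost=1.0, punish_impact=3.0):
    """One Public Goods Game round with a punishment phase.

    contributions[i]  -- tokens player i puts into the shared pool
    punishments[i][j] -- punishment units player i assigns to player j
    (all parameter values here are illustrative assumptions)
    """
    n = len(contributions)
    # Contribution phase: the pool is multiplied and split equally.
    pool = sum(contributions) * multiplier
    share = pool / n
    payoffs = [endowment - c + share for c in contributions]
    # Punishment phase: punishing is costly, and the target loses more.
    for i in range(n):
        for j, units in enumerate(punishments[i]):
            if i == j or units <= 0:
                continue
            payoffs[i] -= units * punish_cost
            payoffs[j] -= units * punish_impact
    return payoffs

# Example: two cooperators and one free-rider; player 0 punishes player 2.
payoffs = pgg_round([10, 10, 0], [[0, 0, 2], [0, 0, 0], [0, 0, 0]])
# payoffs ≈ [11.33, 13.33, 17.33]: punishment narrows the free-rider's
# advantage but, at these assumed rates, does not close it.
```

Under these assumed parameters, a single punisher pays to narrow the gap while non-punishing cooperators do not, which is exactly the second-order free-riding tension the punishment phase is designed to probe.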
Alternatives and similar repositories for pgg_bench
Users who are interested in pgg_bench are comparing it to the libraries listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated last week
- Very minimal (and stateless) agent framework ☆44 · Updated last year
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆107 · Updated 5 months ago
- entropix-style sampling + GUI ☆27 · Updated last year
- Easy-to-Use, High-Performance Knowledge Distillation for LLMs ☆96 · Updated 8 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆56 · Updated 11 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 11 months ago
- ☆119 · Updated last year
- A lightweight script for processing HTML pages to Markdown format, with support for code blocks ☆82 · Updated last year
- Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a sm… ☆63 · Updated 3 months ago
- GPT-4 Level Conversational QA Trained in a Few Hours ☆66 · Updated last year
- A simple tool that lets you explore different possible paths that an LLM might sample. ☆199 · Updated 8 months ago
- Distributed inference for MLX LLMs ☆100 · Updated last year
- [EMNLP 2025] The official implementation of the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆102 · Updated 4 months ago
- One Line to Build Zero-Data Classifiers in Minutes ☆61 · Updated last year
- ☆165 · Updated 5 months ago
- Try out HallOumi, a state-of-the-art claim-verification model, in a simple UI! ☆41 · Updated 9 months ago
- ☆48 · Updated 11 months ago
- Simple examples using Argilla tools to build AI ☆57 · Updated last year
- Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player "step-race" that challenges LLM… ☆81 · Updated last month
- ☆24 · Updated 11 months ago
- ☆51 · Updated last year
- Hallucinations (Confabulations) Document-Based Benchmark for RAG. Includes human-verified questions and answers. ☆241 · Updated 5 months ago
- ☆39 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- ☆15 · Updated last month
- AnyModal is a Flexible Multimodal Language Model Framework for PyTorch ☆103 · Updated last year
- An MCP server that uses the Osmosis-Apply-1.7B model to apply code merges ☆53 · Updated 6 months ago
- ☆57 · Updated 11 months ago
- OpenPipe Reinforcement Learning Experiments ☆32 · Updated 10 months ago