lechmazur / pgg_bench
Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies among Large Language Models (LLMs) in a resource-sharing economic scenario. Our experiment extends the classic PGG with a punishment phase, allowing players to penalize free-riders or retaliate against others.
☆36 · Updated last month
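For context, the payoffs described above follow the standard public-goods structure: each player keeps its unspent endowment plus an equal share of the multiplied pool, and a punishment phase then lets players pay to reduce others' payoffs. The sketch below illustrates that round structure in Python; the endowment, multiplier, and punishment cost/impact values are illustrative assumptions, not the benchmark's actual parameters.

```python
# Minimal sketch of one PGG round with a punishment phase.
# All numeric defaults below are illustrative assumptions, not pgg_bench's parameters.
from typing import List

def pgg_round(
    contributions: List[float],   # each player's contribution to the shared pool
    endowment: float = 10.0,      # assumed per-round endowment
    multiplier: float = 2.0,      # assumed public-good multiplier
) -> List[float]:
    """Contribution phase: the pool is multiplied and split equally among players."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

def apply_punishments(
    payoffs: List[float],
    punishments: List[List[float]],  # punishments[i][j]: points i spends punishing j
    cost: float = 1.0,               # assumed cost per punishment point spent
    impact: float = 3.0,             # assumed payoff reduction per point received
) -> List[float]:
    """Punishment phase: punishing is costly to the punisher and costlier to the target."""
    out = list(payoffs)
    n = len(payoffs)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            out[i] -= cost * punishments[i][j]    # punisher pays the cost
            out[j] -= impact * punishments[i][j]  # target loses the larger impact
    return out

# Example: player 2 free-rides; player 0 spends 2 points punishing them.
payoffs = pgg_round([10.0, 10.0, 0.0])
punish = [[0, 0, 2], [0, 0, 0], [0, 0, 0]]
print(apply_punishments(payoffs, punish))
```

Under these assumed values, the free-rider still nets the highest payoff after punishment, which is exactly the tension between self-interested and cooperative strategies that the benchmark probes.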
Alternatives and similar repositories for pgg_bench
Users interested in pgg_bench are comparing it to the repositories listed below:
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆48 · Updated 3 months ago
- entropix-style sampling + GUI ☆26 · Updated 7 months ago
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆24 · Updated 2 months ago
- Very minimal (and stateless) agent framework ☆44 · Updated 4 months ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 3 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 4 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- ☆12 · Updated last month
- ☆114 · Updated 5 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆60 · Updated last month
- ☆48 · Updated 3 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 3 months ago
- Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a sm… ☆57 · Updated last week
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆99 · Updated this week
- OpenPipe Reinforcement Learning Experiments ☆24 · Updated 2 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 3 months ago
- Benchmark that evaluates LLMs using 651 NYT Connections puzzles extended with extra trick words ☆93 · Updated last week
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆69 · Updated 2 weeks ago
- All the world is a play, we are but actors in it. ☆50 · Updated this week
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆68 · Updated 3 months ago
- ☆53 · Updated last year
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆61 · Updated 9 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- A preprint of our recent research on the capability of frontier AI systems to self-replicate ☆59 · Updated 5 months ago
- Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLM… ☆51 · Updated this week
- ☆22 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆38 · Updated 3 weeks ago