lechmazur / step_game
Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLMs to engage in public conversation before secretly picking a move (1, 3, or 5 steps). Whenever two or more players choose the same number, all colliding players fail to advance.
☆85 · Updated 2 months ago
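The collision rule described above can be illustrated with a short sketch. The function below is hypothetical (it is not taken from the step_game code) and only restates the move-resolution logic: each player secretly picks 1, 3, or 5 steps, and any players who pick the same number do not advance that round.

```python
from collections import Counter

def resolve_round(moves: dict[str, int]) -> dict[str, int]:
    """Apply the step-race collision rule to one round of secret moves.

    `moves` maps each player to the step size they secretly chose (1, 3, or 5).
    Players whose choice is unique advance by that many steps; players who
    collide (two or more picked the same number) advance by 0.
    """
    counts = Counter(moves.values())
    return {player: step if counts[step] == 1 else 0
            for player, step in moves.items()}

# Example: P1 and P2 collide on 5, so only P3 advances this round.
print(resolve_round({"P1": 5, "P2": 5, "P3": 3}))  # {'P1': 0, 'P2': 0, 'P3': 3}
```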
Alternatives and similar repositories for step_game
Users who are interested in step_game are comparing it to the repositories listed below.
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆86 · Updated last year
- Benchmark evaluating LLMs on their ability to create and resist disinformation. Includes comprehensive testing across major models (Claud… ☆31 · Updated 10 months ago
- ☆135 · Updated 9 months ago
- AI management tool ☆119 · Updated last year
- Easily view and modify JSON datasets for large language models ☆87 · Updated 8 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆56 · Updated last year
- CaSIL is an advanced natural language processing system that implements a sophisticated four-layer semantic analysis architecture. It pro… ☆67 · Updated last year
- ☆119 · Updated last year
- Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a sm… ☆63 · Updated 4 months ago
- ☆209 · Updated last month
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆72 · Updated last year
- Distributed Inference for MLX LLMs ☆100 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated last month
- After my server ui improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆53 · Updated last year
- ☆134 · Updated 2 months ago
- ☆107 · Updated 3 months ago
- Conduct in-depth research with AI-driven insights: DeepDive is a command-line tool that leverages web searches and AI models to generate… ☆44 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆49 · Updated 3 months ago
- Experimental LLM Inference UX to aid in creative writing ☆128 · Updated last year
- ☆337 · Updated 6 months ago
- The DPAB-α Benchmark ☆32 · Updated last year
- Hallucinations (Confabulations) Document-Based Benchmark for RAG. Includes human-verified questions and answers. ☆243 · Updated 6 months ago
- ☆109 · Updated 5 months ago
- Deploy Apollo HF space locally ☆40 · Updated last year
- "a towel is about the most massively useful thing an interstellar AI hitchhiker can have" ☆48 · Updated last year
- A simple experiment on letting two local LLMs have a conversation about anything! ☆112 · Updated last year
- ☆166 · Updated 6 months ago
- LLM Divergent Thinking Creativity Benchmark. LLMs generate 25 unique words that start with a given letter with no connections to each oth… ☆35 · Updated 10 months ago
- An extension that lets the AI take the wheel, allowing it to use the mouse and keyboard, recognize UI elements, and prompt itself :3...no… ☆127 · Updated last year
- Adding a multi-text multi-speaker script (diffe) that is based on a script from asiff00 on issue 61 for Sesame: A Conversational Speech G… ☆26 · Updated 10 months ago