lechmazur / step_game
Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLMs to engage in public conversation before secretly picking a move (1, 3, or 5 steps). Whenever two or more players choose the same number, all colliding players fail to advance.
☆49 · Updated 2 weeks ago
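The collision rule described above can be sketched in a few lines (a minimal illustration, not the repository's actual implementation; the `advance` function and its signature are assumptions):

```python
from collections import Counter

def advance(positions, moves):
    """Apply one round of the step race.

    positions: dict mapping player -> current step count
    moves: dict mapping player -> secretly chosen move (1, 3, or 5)
    Any player whose chosen number was also picked by another
    player fails to advance this round.
    """
    counts = Counter(moves.values())
    return {
        player: pos + (moves[player] if counts[moves[player]] == 1 else 0)
        for player, pos in positions.items()
    }

# Example round: P1 and P2 collide on 5, so only P3 advances.
print(advance({"P1": 0, "P2": 0, "P3": 0}, {"P1": 5, "P2": 5, "P3": 3}))
# → {'P1': 0, 'P2': 0, 'P3': 3}
```

Because moves are chosen secretly after public conversation, the interesting dynamics come from players coordinating (or bluffing about) which numbers they will pick to avoid collisions.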
Alternatives and similar repositories for step_game:
Users interested in step_game are comparing it to the repositories listed below:
- Benchmark evaluating LLMs on their ability to create and resist disinformation. Includes comprehensive testing across major models (Claud… ☆26 · Updated last month
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆71 · Updated 7 months ago
- LLM Divergent Thinking Creativity Benchmark. LLMs generate 25 unique words that start with a given letter with no connections to each oth… ☆32 · Updated last month
- CaSIL is an advanced natural language processing system that implements a sophisticated four-layer semantic analysis architecture. It pro… ☆65 · Updated 5 months ago
- Experimental LLM inference UX to aid in creative writing ☆116 · Updated 4 months ago
- Distributed inference for MLX LLMs ☆87 · Updated 8 months ago
- ☆112 · Updated 4 months ago
- ☆60 · Updated this week
- A frontend for creative writing with LLMs ☆123 · Updated 9 months ago
- Benchmark that evaluates LLMs using 651 NYT Connections puzzles extended with extra trick words ☆80 · Updated last week
- Dictionary-based SLOP detector and analyzer for ShareGPT JSON and text ☆67 · Updated 5 months ago
- Conduct in-depth research with AI-driven insights: DeepDive is a command-line tool that leverages web searches and AI models to generate… ☆42 · Updated 7 months ago
- Guaranteed structured output from any language model via hierarchical state machines ☆124 · Updated last week
- Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a sm… ☆44 · Updated last week
- AI management tool ☆114 · Updated 5 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆56 · Updated 2 months ago
- ☆284 · Updated 3 weeks ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 7 months ago
- Easily view and modify JSON datasets for large language models ☆74 · Updated last month
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full … ☆90 · Updated 3 weeks ago
- idea: https://github.com/nyxkrage/ebook-groupchat/ ☆86 · Updated 8 months ago
- Orpheus Chat WebUI ☆52 · Updated 3 weeks ago
- Hallucinations (Confabulations) Document-Based Benchmark for RAG. Includes human-verified questions and answers. ☆124 · Updated last week
- Adding a multi-text multi-speaker script (diffe) based on a script from asiff00 on issue 61 for Sesame: A Conversational Speech G… ☆23 · Updated 3 weeks ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX ☆77 · Updated 4 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆49 · Updated 2 months ago
- ☆69 · Updated last month
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated 10 months ago
- This project is a reverse-engineered version of Figma's tone changer. It uses Groq's Llama-3-8b for high-speed inference and to adjust th… ☆89 · Updated 9 months ago
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… ☆35 · Updated 2 weeks ago