lechmazur / writing
This benchmark tests how well LLMs incorporate a set of 10 mandatory story elements (characters, objects, core concepts, attributes, motivations, etc.) into a short creative story.
☆316 · Updated 2 months ago
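The core task is easy to picture: the model receives ten required elements and must weave all of them into one short story. Below is a minimal, hypothetical Python sketch of how such a prompt might be assembled and naively checked; the element categories and sample values are invented for illustration, and the benchmark itself grades how well each element is integrated rather than checking for literal inclusion.

```python
# Hypothetical sketch only -- not the benchmark's actual code or element set.
required_elements = {
    "character": "a retired lighthouse keeper",
    "object": "a cracked pocket watch",
    "core concept": "borrowed time",
    "attribute": "stubbornly optimistic",
    "action": "deciphering a coded letter",
    "method": "by candlelight",
    "setting": "a fog-bound harbor town",
    "timeframe": "the last night of winter",
    "motivation": "to repay an old debt",
    "tone": "quiet melancholy",
}

prompt = (
    "Write a short story that naturally incorporates ALL of the "
    "following elements:\n"
    + "\n".join(f"- {kind}: {value}" for kind, value in required_elements.items())
)

def missing_elements(story: str) -> list[str]:
    """Crude literal-substring check, for illustration only; real grading
    would need to judge whether each element is meaningfully woven in."""
    lowered = story.lower()
    return [v for v in required_elements.values() if v.lower() not in lowered]
```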
Alternatives and similar repositories for writing
Users who are interested in writing are comparing it to the repositories listed below.
- A benchmark for emotional intelligence in large language models ☆382 · Updated last year
- ☆135 · Updated 6 months ago
- Hallucinations (Confabulations) Document-Based Benchmark for RAG. Includes human-verified questions and answers. ☆236 · Updated 3 months ago
- ☆326 · Updated 3 months ago
- Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLM… ☆76 · Updated 2 months ago
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… ☆39 · Updated 7 months ago
- Benchmark that evaluates LLMs using 759 NYT Connections puzzles extended with extra trick words ☆157 · Updated last week
- ☆288 · Updated 3 weeks ago
- Prompt-to-Leaderboard ☆260 · Updated 6 months ago
- ☆69 · Updated 2 months ago
- ☆163 · Updated 3 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 9 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines ☆145 · Updated last month
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆84 · Updated last year
- Autonomously train research-agent LLMs on custom data using reinforcement learning and self-verification. ☆671 · Updated 8 months ago
- A collection of prompts to challenge the reasoning abilities of large language models in the presence of misguiding information ☆452 · Updated 3 months ago
- ☆434 · Updated last year
- ☆158 · Updated 7 months ago
- ☆174 · Updated 11 months ago
- Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a sm… ☆63 · Updated 2 months ago
- AI management tool ☆121 · Updated last year
- Easily view and modify JSON datasets for large language models ☆84 · Updated 6 months ago
- A benchmark for role-playing language models ☆109 · Updated 5 months ago
- Efficient computer use agent powered by Meta Llama 4 Maverick ☆45 · Updated 7 months ago
- [NeurIPS 2025] Atom of Thoughts for Markov LLM Test-Time Scaling ☆596 · Updated 5 months ago
- Coding problems used in aider's polyglot benchmark ☆190 · Updated 11 months ago
- ☆45 · Updated last year
- Verify the precision of all Kimi K2 API vendors ☆433 · Updated this week
- Force DeepSeek r1 models to think for as long as you wish ☆371 · Updated 9 months ago