lechmazur / confabulations
Hallucinations (Confabulations) Document-Based Benchmark for RAG. Includes human-verified questions and answers.
☆230 · Updated 2 months ago
Alternatives and similar repositories for confabulations
Users interested in confabulations are comparing it to the libraries listed below.
- ☆135 · Updated 5 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- Guaranteed Structured Output from any Language Model via Hierarchical State Machines ☆145 · Updated 3 weeks ago
- AI management tool ☆121 · Updated 11 months ago
- ☆277 · Updated 4 months ago
- A benchmark for emotional intelligence in large language models ☆370 · Updated last year
- ☆319 · Updated 3 months ago
- Benchmark that evaluates LLMs using 759 NYT Connections puzzles extended with extra trick words ☆155 · Updated 2 weeks ago
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆80 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLM… ☆76 · Updated 2 months ago
- Official repository for "DynaSaur: Large Language Agents Beyond Predefined Actions" ☆349 · Updated 10 months ago
- ☆206 · Updated last month
- Official repository for "NoLiMa: Long-Context Evaluation Beyond Literal Matching" ☆161 · Updated 3 months ago
- ☆158 · Updated 6 months ago
- Enhancing LLMs with LoRA ☆172 · Updated last week
- This benchmark tests how well LLMs incorporate a set of 10 mandatory story elements (characters, objects, core concepts, attributes, moti… ☆313 · Updated last month
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆43 · Updated this week
- A simple tool that lets you explore different possible paths that an LLM might sample. ☆190 · Updated 5 months ago
- ☆162 · Updated 2 months ago
- Easily view and modify JSON datasets for large language models ☆83 · Updated 5 months ago
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 9 months ago
- A multi-player tournament benchmark that tests LLMs in social reasoning, strategy, and deception. Players engage in public and private co… ☆290 · Updated 2 months ago
- Conduct in-depth research with AI-driven insights: DeepDive is a command-line tool that leverages web searches and AI models to generate… ☆42 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 · Updated 5 months ago
- Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a sm… ☆63 · Updated last month
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… ☆39 · Updated 6 months ago
- ☆105 · Updated this week
- ☆170 · Updated 10 months ago