simple-bench / SimpleBench
☆76 · Updated last month
Alternatives and similar repositories for SimpleBench:
Users interested in SimpleBench are comparing it to the libraries listed below:
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆129 · Updated last week
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆164 · Updated this week
- Sandboxed code execution for AI agents, locally or on the cloud. ☆74 · Updated this week
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated 9 months ago
- ☆96 · Updated 4 months ago
- Aidan Bench attempts to measure `<big_model_smell>` in LLMs. ☆273 · Updated this week
- smol models are fun too ☆88 · Updated 3 months ago
- look how they massacred my boy ☆63 · Updated 4 months ago
- An extension that lets the AI take the wheel, allowing it to use the mouse and keyboard, recognize UI elements, and prompt itself :3...no… ☆111 · Updated 3 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆407 · Updated 4 months ago
- ☆111 · Updated last month
- Routing on Random Forest (RoRF) ☆112 · Updated 4 months ago
- Fast parallel LLM inference for MLX ☆162 · Updated 7 months ago
- ☆263 · Updated 3 weeks ago
- MLX port for xjdr's entropix sampler (mimics the JAX implementation) ☆63 · Updated 3 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆45 · Updated last month
- ☆112 · Updated 6 months ago
- Turn a GitHub repo's contents into a big prompt for long-context models like Claude 3 Opus. ☆171 · Updated 10 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆89 · Updated 3 weeks ago
- A comprehensive set of LLM benchmark scores and provider prices. ☆104 · Updated last week
- LLMs as Method Actors: A Model for Prompt Engineering and Architecture ☆44 · Updated 3 months ago
- ☆152 · Updated 7 months ago
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆30 · Updated 8 months ago
- This repository explains and provides examples for "concept anchoring" in GPT-4. ☆72 · Updated last year
- A benchmark for emotional intelligence in large language models ☆223 · Updated 6 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 6 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆100 · Updated 10 months ago
- Function Calling Benchmark & Testing ☆81 · Updated 7 months ago
- Draw more samples ☆186 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆167 · Updated last month