felipemaiapolo / tinyBenchmarks
Evaluating LLMs with fewer examples
☆169 · Updated last year
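
The idea behind tinyBenchmarks is that a model's score on a full benchmark can be estimated from its answers on a small curated subset of roughly 100 "anchor" examples. Below is a minimal sketch of the simplest such estimator, a weighted mean over anchor correctness; the function name and weighting scheme are illustrative assumptions, not the package's actual API (the repository's own estimators are IRT-based):

```python
import numpy as np

def estimate_benchmark_score(correctness, weights=None):
    """Estimate full-benchmark accuracy from a small anchor subset.

    correctness: 0/1 array with one entry per anchor example (~100 items).
    weights: optional per-anchor weights (e.g. how many full-benchmark
             examples each anchor stands in for); None gives a plain mean.
    """
    correctness = np.asarray(correctness, dtype=float)
    return float(np.average(correctness, weights=weights))

# Example: a model that answers 73 of 100 anchor questions correctly.
y = np.array([1] * 73 + [0] * 27)
print(f"Estimated benchmark accuracy: {estimate_benchmark_score(y):.2f}")  # 0.73
```

If the anchors are chosen to be representative of the full benchmark, this kind of estimate tracks the full score at a small fraction of the evaluation cost.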
Alternatives and similar repositories for tinyBenchmarks
Users interested in tinyBenchmarks are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- ☆123 · Updated 11 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆115 · Updated 8 months ago
- ☆99 · Updated last year
- ☆130 · Updated last year
- ☆91 · Updated last month
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 11 months ago
- Replicating O1 inference-time scaling laws ☆92 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆224 · Updated last month
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- ☆203 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆261 · Updated 9 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated 3 months ago
- ☆74 · Updated last year
- The official repo for "LLoCO: Learning Long Contexts Offline" ☆118 · Updated last year
- ☆107 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆153 · Updated last year
- Code for the paper "Fishing for Magikarp" ☆179 · Updated 8 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆255 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆238 · Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆111 · Updated 11 months ago
- Codebase accompanying the "Summary of a Haystack" paper. ☆80 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆94 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆119 · Updated 2 years ago