felipemaiapolo / tinyBenchmarks
Evaluating LLMs with fewer examples
☆161 · Updated last year
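For context, a minimal usage sketch of the repository's idea (estimating full-benchmark scores from a small subset of examples). This assumes the PyPI package `tinyBenchmarks` exposes an `evaluate(correctness_vector, benchmark)` helper as outlined in the repo README; the exact function name, arguments, and supported benchmark keys may differ, so treat this as a hypothetical illustration rather than the definitive API:

```python
# Hypothetical sketch, not the confirmed API -- check the tinyBenchmarks README.
# Assumption: tb.evaluate() takes a 0/1 correctness array over the sampled
# "tiny" examples and returns estimated scores for the full benchmark.
import numpy as np
import tinyBenchmarks as tb  # pip install tinyBenchmarks (assumed package name)

benchmark = "mmlu"                     # assumed benchmark identifier
y = np.random.binomial(1, 0.5, 600)    # dummy per-example correctness (0/1) for illustration

results = tb.evaluate(y, benchmark)    # estimated full-benchmark performance
print(results)
```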
Alternatives and similar repositories for tinyBenchmarks
Users interested in tinyBenchmarks are comparing it to the libraries listed below.
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 11 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆110 · Updated 3 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 10 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆214 · Updated last month
- The official evaluation suite and dynamic data release for MixEval. ☆245 · Updated 10 months ago
- ☆122 · Updated 6 months ago
- ☆100 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆130 · Updated last year
- ☆98 · Updated 10 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆143 · Updated 10 months ago
- Replicating O1 inference-time scaling laws ☆89 · Updated 9 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆182 · Updated 6 months ago
- ☆127 · Updated 11 months ago
- ☆188 · Updated 4 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 7 months ago
- A simple unified framework for evaluating LLMs ☆243 · Updated 4 months ago
- ☆72 · Updated last year
- Code for the paper "Fishing for Magikarp" ☆165 · Updated 3 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆115 · Updated 2 years ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆110 · Updated 11 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆128 · Updated 2 months ago
- Verifiers for LLM Reinforcement Learning ☆72 · Updated 4 months ago
- ☆80 · Updated last week
- Complex Function Calling Benchmark. ☆132 · Updated 7 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last year
- ☆150 · Updated last year
- This is the official repository for Inheritune. ☆113 · Updated 7 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆93 · Updated 3 months ago