felipemaiapolo / tinyBenchmarks
Evaluating LLMs with fewer examples
☆167 · Updated last year
Alternatives and similar repositories for tinyBenchmarks
Users interested in tinyBenchmarks are comparing it to the repositories listed below.
- Code accompanying "How I learned to start worrying about prompt formatting". ☆110 · Updated 5 months ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆244 · Updated last year
- ☆124 · Updated 8 months ago
- ☆129 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆252 · Updated last year
- ☆75 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆218 · Updated last week
- ☆81 · Updated this week
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆144 · Updated last year
- ☆88 · Updated this week
- ☆103 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated 11 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- ☆100 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆137 · Updated last month
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆254 · Updated 6 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆219 · Updated 5 months ago
- ☆197 · Updated 6 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆117 · Updated 2 years ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆94 · Updated 5 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- Repository for the paper Stream of Search: Learning to Search in Language ☆151 · Updated 9 months ago
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 8 months ago