carlini / yet-another-applied-llm-benchmark
A benchmark to evaluate language models on questions I've previously asked them to solve.
☆1,018 · Updated last month
Alternatives and similar repositories for yet-another-applied-llm-benchmark
Users interested in yet-another-applied-llm-benchmark are comparing it to the libraries listed below.
- System 2 Reasoning Link Collection ☆838 · Updated 3 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆801 · Updated last week
- ☆447 · Updated last year
- A library for making RepE control vectors ☆610 · Updated 5 months ago
- Automatically evaluate your LLMs in Google Colab ☆641 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,411 · Updated 2 weeks ago
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆714 · Updated last year
- ☆864 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,926 · Updated last week
- ☆900 · Updated 9 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆881 · Updated last month
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,387 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,629 · Updated this week
- ☆415 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆500 · Updated last year
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆447 · Updated 8 months ago
- 🤖 A PyTorch library of curated Transformer models and their composable components ☆891 · Updated last year
- Scale LLM Engine public repository ☆803 · Updated last week
- procedural reasoning datasets ☆841 · Updated last week
- ☆541 · Updated 9 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆1,897 · Updated 10 months ago
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆1,434 · Updated 5 months ago
- Inspect: A framework for large language model evaluations ☆1,035 · Updated this week
- LLM Analytics ☆668 · Updated 8 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,757 · Updated last week
- Inference code for Persimmon-8B ☆415 · Updated last year
- ☆1,025 · Updated 6 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated last month
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,497 · Updated last year
- Official inference library for Mistral models ☆742 · Updated this week