huggingface / evaluation-guidebook
Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard and designing lighteval!
⭐ 1,299 · Updated 3 months ago
Alternatives and similar repositories for evaluation-guidebook:
Users who are interested in evaluation-guidebook are comparing it to the libraries listed below.
- A reading list on LLM-based Synthetic Data Generation 🔥 ⭐ 1,255 · Updated 2 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ⭐ 2,671 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ⭐ 1,482 · Updated this week
- Textbook on reinforcement learning from human feedback ⭐ 855 · Updated this week
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ⭐ 1,400 · Updated last month
- Synthetic data curation for post-training and structured data extraction ⭐ 1,290 · Updated this week
- Curated list of datasets and tools for post-training. ⭐ 3,002 · Updated 3 months ago
- ⭐ 643 · Updated this week
- System 2 Reasoning Link Collection ⭐ 828 · Updated last month
- Minimalistic large language model 3D-parallelism training ⭐ 1,836 · Updated this week
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. ⭐ 1,417 · Updated last month
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ⭐ 2,366 · Updated this week
- Build datasets using natural language ⭐ 465 · Updated 2 months ago
- Bringing BERT into modernity via both architecture changes and scaling ⭐ 1,342 · Updated last month
- Best practices for distilling large language models. ⭐ 528 · Updated last year
- Verifiers for LLM Reinforcement Learning ⭐ 881 · Updated last month
- TextGrad: Automatic "Differentiation" via Text, using large language models to backpropagate textual gradients. ⭐ 2,491 · Updated last month
- A library for advanced large language model reasoning ⭐ 2,113 · Updated 3 weeks ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ⭐ 1,464 · Updated 2 months ago
- ⭐ 1,172 · Updated 2 months ago
- Optimizing inference proxy for LLMs ⭐ 2,201 · Updated last week
- Evaluate your LLM's response with Prometheus and GPT-4 🎯 ⭐ 930 · Updated last week
- ⭐ 1,656 · Updated this week
- AllenAI's post-training codebase ⭐ 2,939 · Updated this week
- Recipes to scale inference-time compute of open models ⭐ 1,066 · Updated 2 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ⭐ 281 · Updated this week
- Implementing the 4 agentic patterns from scratch ⭐ 1,259 · Updated last month
- Use late-interaction multi-modal models such as ColPali in just a few lines of code. ⭐ 776 · Updated 3 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ⭐ 1,346 · Updated last month
- Automatically evaluate your LLMs in Google Colab ⭐ 620 · Updated 11 months ago