mlabonne / llm-autoeval
Automatically evaluate your LLMs in Google Colab
☆641 · Updated last year
Alternatives and similar repositories for llm-autoeval
Users interested in llm-autoeval are comparing it to the libraries listed below.
- Evaluate your LLM's response with Prometheus and GPT4 ☆952 · Updated last month
- ☆520 · Updated 7 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated 11 months ago
- awesome synthetic (text) datasets ☆282 · Updated 7 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆482 · Updated 9 months ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,490 · Updated 4 months ago
- Benchmark Large Language Models Reliably On Your Data ☆329 · Updated this week
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆698 · Updated last year
- Official repository for ORPO ☆455 · Updated last year
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆322 · Updated 7 months ago
- Best practices for distilling large language models. ☆553 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆303 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,629 · Updated this week
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆187 · Updated last year
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆426 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. ☆778 · Updated 3 months ago
- Generate textbook-quality synthetic LLM pretraining data ☆500 · Updated last year
- Automatic evals for LLMs ☆429 · Updated 2 weeks ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,757 · Updated last week
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- ☆447 · Updated last year
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆602 · Updated last month
- A bagel, with everything. ☆321 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆714 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,497 · Updated last year
- Build datasets using natural language ☆492 · Updated last month
- This project showcases an LLMOps pipeline that fine-tunes a small LLM to serve as a fallback during outages of the main service LLM. ☆306 · Updated 2 months ago
- Automated Evaluation of RAG Systems ☆609 · Updated 2 months ago
- Let's build better datasets, together! ☆259 · Updated 6 months ago