mlabonne / llm-autoeval
Automatically evaluate your LLMs in Google Colab
⭐620 · Updated 11 months ago
Alternatives and similar repositories for llm-autoeval:
Users interested in llm-autoeval are comparing it to the libraries listed below.
- ⭐515 · Updated 5 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ⭐930 · Updated last week
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ⭐472 · Updated 8 months ago
- ⭐444 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization ⭐273 · Updated 9 months ago
- awesome synthetic (text) datasets ⭐278 · Updated 6 months ago
- A bagel, with everything. ⭐320 · Updated last year
- An Open Source Toolkit For LLM Distillation ⭐586 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ⭐1,482 · Updated this week
- Best practices for distilling large language models. ⭐528 · Updated last year
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ⭐437 · Updated 7 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ⭐237 · Updated 11 months ago
- Toolkit for attaching, training, saving and loading of new heads for transformer models ⭐276 · Updated 2 months ago
- This project showcases an LLMOps pipeline that fine-tunes a small LLM to prepare for outages of the service LLM. ⭐304 · Updated last month
- Official repository for ORPO ⭐450 · Updated 11 months ago
- ⭐529 · Updated 8 months ago
- Generate textbook-quality synthetic LLM pretraining data ⭐498 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ⭐301 · Updated last year
- ⭐863 · Updated 7 months ago
- Fine-tune mistral-7B on 3090s, a100s, h100s ⭐711 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ⭐235 · Updated 2 months ago
- Easily embed, cluster and semantically label text datasets ⭐530 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ⭐254 · Updated 9 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ⭐215 · Updated 6 months ago
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ⭐186 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ⭐238 · Updated 5 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ⭐231 · Updated 6 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ⭐2,671 · Updated last week
- Tutorial for building LLM router ⭐198 · Updated 9 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ⭐320 · Updated 5 months ago