mlabonne / llm-autoeval
Automatically evaluate your LLMs in Google Colab
☆683 · Updated last year
Alternatives and similar repositories for llm-autoeval
Users interested in llm-autoeval are comparing it to the libraries listed below.
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆502 · Updated last year
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,038 · Updated 9 months ago
- Best practices for distilling large language models. ☆602 · Updated 2 years ago
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆294 · Updated 10 months ago
- Awesome synthetic (text) datasets ☆321 · Updated 3 weeks ago
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated 6 months ago
- ☆695 · Updated 9 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆333 · Updated last year
- Easily embed, cluster and semantically label text datasets ☆592 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Updated 2 years ago
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,088 · Updated 11 months ago
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆649 · Updated 3 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆425 · Updated last month
- Build datasets using natural language ☆559 · Updated 4 months ago
- ☆445 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆401 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆846 · Updated last month
- ☆471 · Updated 2 years ago
- ☆561 · Updated last year
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆446 · Updated last year
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. ☆818 · Updated 6 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆737 · Updated last year
- Official repository for ORPO ☆469 · Updated last year
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆129 · Updated 2 years ago
- A compact LLM pretrained in 9 days using high-quality data ☆340 · Updated 9 months ago
- A bagel, with everything. ☆326 · Updated last year
- Fine-Tuning Embedding for RAG with Synthetic Data ☆524 · Updated 2 years ago
- Our own implementation of "Layer Selective Rank Reduction" ☆240 · Updated last year
- Let's build better datasets, together! ☆269 · Updated last year