modal-labs / llm-finetuning
Guide for fine-tuning Llama/Mistral/CodeLlama models and more
☆641 · Updated 2 months ago
Alternatives and similar repositories for llm-finetuning
Users interested in llm-finetuning are comparing it to the libraries listed below.
- Automatically evaluate your LLMs in Google Colab ☆677 · Updated last year
- ☆469 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data ☆508 · Updated 2 years ago
- ☆198 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆722 · Updated 2 years ago
- A tool for evaluating LLMs ☆428 · Updated last year
- Fine-Tuning Embedding for RAG with Synthetic Data ☆521 · Updated 2 years ago
- ☆474 · Updated last year
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆334 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆189 · Updated 2 years ago
- ☆446 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆398 · Updated last year
- Scale LLM Engine public repository ☆818 · Updated last week
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,036 · Updated 8 months ago
- Customizable implementation of the self-instruct paper. ☆1,051 · Updated last year
- In-Context Learning for eXtreme Multi-Label Classification (XMC) using only a handful of examples. ☆444 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆733 · Updated last year
- Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. ☆865 · Updated last year
- ☆583 · Updated last year
- Best practices for distilling large language models. ☆595 · Updated last year
- ☆416 · Updated 2 years ago
- ☆170 · Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆129 · Updated 2 years ago
- ☆187 · Updated 2 years ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,022 · Updated 8 months ago
- Tune any FALCON in 4-bit ☆465 · Updated 2 years ago
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 · Updated 5 months ago
- A library for easily merging multiple LLM experts, and efficiently train the merged LLM. ☆500 · Updated last year
- Fine-tuning LLMs using QLoRA ☆266 · Updated last year
- batched loras ☆347 · Updated 2 years ago