georgian-io / LLM-Finetuning-Toolkit
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
⭐825 · Updated 4 months ago
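For context on what "fine-tuning" means here, the sketch below shows a minimal LoRA setup with Hugging Face transformers and peft — the kind of step a toolkit like this automates behind its configuration. This is a hedged, illustrative example, not LLM-Finetuning-Toolkit's own API; the model name and hyperparameters are placeholder assumptions.

```python
# Hedged sketch (not the toolkit's actual interface): a bare-bones LoRA
# fine-tuning setup of the kind such toolkits wrap. Model name and
# hyperparameters below are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of weights is trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

In this setting, "ablating" usually means sweeping such hyperparameters (adapter rank, target modules, learning rate) across runs and comparing the resulting adapters, and "unit-testing" means running automated quality checks on the fine-tuned model's outputs.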
Alternatives and similar repositories for LLM-Finetuning-Toolkit:
Users who are interested in LLM-Finetuning-Toolkit are comparing it to the libraries listed below.
- An LLM-powered advanced RAG pipeline built from scratch ⭐827 · Updated last year
- A comprehensive guide to building RAG-based LLM applications for production. ⭐1,772 · Updated 7 months ago
- Evaluate your LLM's response with Prometheus and GPT4 ⭐877 · Updated last month
- LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). Extracts signals from prompts & responses, ensuring sa… ⭐882 · Updated 3 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ⭐314 · Updated 3 months ago
- Fine-Tuning Embedding for RAG with Synthetic Data ⭐487 · Updated last year
- Best practices for distilling large language models. ⭐491 · Updated last year
- Efficient Retrieval Augmentation and Generation Framework ⭐1,472 · Updated last month
- A lightweight library for generating synthetic instruction tuning datasets for your data without GPT. ⭐743 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ⭐2,377 · Updated last week
- Automatically evaluate your LLMs in Google Colab ⭐593 · Updated 9 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ⭐2,517 · Updated this week
- LLM Analytics ⭐643 · Updated 4 months ago
- A tool for evaluating LLMs ⭐403 · Updated 9 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ⭐497 · Updated 8 months ago
- ⭐446 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ⭐706 · Updated last year
- List of papers on hallucination detection in LLMs. ⭐785 · Updated last week
- Automated Evaluation of RAG Systems ⭐554 · Updated 4 months ago
- Generate textbook-quality synthetic LLM pretraining data ⭐498 · Updated last year
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. ⭐973 · Updated last month
- ⭐815 · Updated 5 months ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ⭐1,314 · Updated last week
- Tune any FALCON in 4-bit ⭐466 · Updated last year
- A joint community effort to create one central leaderboard for LLMs. ⭐292 · Updated 6 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ⭐445 · Updated 6 months ago
- Evaluation and Tracking for LLM Experiments ⭐2,352 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ⭐1,431 · Updated 3 weeks ago
- ⭐1,492 · Updated last week
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ⭐568 · Updated 6 months ago