Pleias / Various-Finetuning
A set of scripts for finetuning LLMs.
☆37 · Updated last year
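The repository's own description is just "scripts to finetune LLMs", so as a rough illustration of what such a script typically looks like, here is a minimal sketch assuming the Hugging Face `transformers`/`peft`/`datasets` stack. The model name (`EleutherAI/pythia-160m`), the dataset (`wikitext`), and all hyperparameters below are placeholders for illustration, not taken from the repo itself.

```python
# Minimal LoRA finetuning sketch (hypothetical; not the repo's actual code).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/pythia-160m"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a small
# fraction of the parameters is trained.
peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

# Placeholder corpus; tokenize into fixed-length causal-LM inputs.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```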
Alternatives and similar repositories for Various-Finetuning:
Users who are interested in Various-Finetuning are comparing it to the libraries listed below.
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆49 · Updated 9 months ago
- ☆129 · Updated 8 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 3 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 6 months ago
- ☆115 · Updated 3 weeks ago
- ☆43 · Updated 2 months ago
- ☆48 · Updated 5 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆61 · Updated last year
- ☆66 · Updated 11 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Source code for "How to Correctly do Semantic Backpropagation on Language-based Agentic Systems" 🤖 ☆67 · Updated 4 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆39 · Updated 3 months ago
- ☆49 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- ☆87 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆48 · Updated last year
- Simple examples using Argilla tools to build AI ☆52 · Updated 5 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆76 · Updated 6 months ago
- An introduction to LLM sampling ☆77 · Updated 4 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- Experimental code for StructuredRAG: JSON Response Formatting with Large Language Models ☆105 · Updated 3 weeks ago
- Train your own SOTA deductive reasoning model ☆91 · Updated last month
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated last month
- ☆24 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆231 · Updated 6 months ago
- A fast, local, and secure approach to training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆22 · Updated last month
- ☆31 · Updated last month