kyegomez / Finetuning-Suite
Finetune any model on HF in less than 30 seconds
☆ 55 · Updated this week
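Finetuning-Suite's own API is not documented on this listing page, so purely as a point of reference, below is a minimal sketch of what fine-tuning a Hugging Face model typically looks like with the `transformers` Trainer API. The model name, dataset, and hyperparameters are placeholder assumptions, not values taken from the repository.

```python
# Minimal Hugging Face fine-tuning sketch (illustrative only; not Finetuning-Suite's API).
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"               # placeholder model
dataset = load_dataset("imdb", split="train[:1%]")   # tiny slice, just to show the flow

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch examples directly.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=8, num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()
```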
Alternatives and similar repositories for Finetuning-Suite
Users interested in Finetuning-Suite are comparing it to the libraries listed below.
- The Next Generation Multi-Modality Superintelligence · ☆ 69 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. · ☆ 35 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models · ☆ 69 · Updated 2 years ago
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations… · ☆ 11 · Updated last year
- Using multiple LLMs for ensemble forecasting · ☆ 16 · Updated last year
- HuggingChat-like UI in Gradio · ☆ 70 · Updated 2 years ago
- ☆ 53 · Updated last year
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ · ☆ 63 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours · ☆ 65 · Updated last year
- Data preparation code for CrystalCoder 7B LLM · ☆ 45 · Updated last year
- A Data Source for Reasoning Embodied Agents · ☆ 19 · Updated 2 years ago
- ☆ 35 · Updated 2 years ago
- ☆ 63 · Updated last year
- My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities", they haven't rel… · ☆ 11 · Updated last year
- ☆ 26 · Updated 2 years ago
- An EXA-Scale repository of Multi-Modality AI resources, from papers and models to foundational libraries! · ☆ 39 · Updated last year
- ☆ 55 · Updated 11 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models · ☆ 22 · Updated 10 months ago
- Mixing Language Models with Self-Verification and Meta-Verification · ☆ 109 · Updated 10 months ago
- Score LLM pretraining data with classifiers · ☆ 54 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading · ☆ 38 · Updated last year
- ☆ 50 · Updated last year
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast · ☆ 149 · Updated last year
- Modified beam search with periodic restart · ☆ 12 · Updated last year
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ · ☆ 87 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks · ☆ 31 · Updated last year
- Experimental sampler to make LLMs more creative · ☆ 31 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (a minimal LoRA setup sketch follows this list) · ☆ 103 · Updated 5 months ago
- ☆ 54 · Updated last month
- ☆ 21 · Updated last year
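Several of the entries above (notably the PEFT LoRA finetuning repository) rely on parameter-efficient fine-tuning. As a rough, non-authoritative illustration of what a LoRA setup usually looks like with the Hugging Face `peft` library, here is a minimal sketch; the base model name and LoRA hyperparameters are placeholder assumptions, not the configuration of any repository listed here.

```python
# Minimal LoRA setup sketch with the Hugging Face peft library
# (illustrative assumptions only; not the configuration of any listed repository).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "tiiuae/falcon-7b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Wrap the base model with low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=8,                                 # rank of the update matrices
    lora_alpha=16,                       # scaling factor for the adapter updates
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```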