Repo for fine-tuning causal LLMs
☆460, updated Mar 27, 2024
Alternatives and similar repositories for Finetune_LLMs
Users interested in Finetune_LLMs are comparing it to the libraries listed below.
- ☆64, updated Oct 1, 2021
- ☆27, updated Aug 10, 2021
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… (☆434, updated Jun 14, 2023)
- Notebook for running GPT neo models based on GPT3 (☆61, updated Aug 10, 2021)
- ☆15, updated Mar 12, 2022
- Fine-tuning GPT-J-6B on colab or equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) (☆73, updated Jun 18, 2022)
- ☆34, updated Aug 10, 2021
- A gym environment to train chatbots. (☆21, updated May 19, 2022)
- ☆131, updated Jun 9, 2022
- Simple UI for LLM Model Finetuning (☆2,055, updated Dec 21, 2023)
- Model parallel transformers in JAX and Haiku (☆6,366, updated Jan 21, 2023)
- ☆457, updated Oct 15, 2023
- A GPT-J API to use with python3 to generate text, blogs, code, and more (☆204, updated Nov 12, 2022)
- ☆50, updated Jan 4, 2023
- A repository to run gpt-j-6b on low vram machines (4.2 gb minimum vram for 2000 token context, 3.5 gb for 1000 token context). Model load… (☆113, updated Dec 23, 2021)
- Simple Annotated implementation of GPT-NeoX in PyTorch (☆110, updated Aug 11, 2022)
- 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0. (☆56, updated Jan 20, 2022)
- ☆27, updated May 11, 2023
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries (☆7,416, updated Apr 13, 2026)
- ☆536, updated Dec 1, 2023
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… (☆6,082, updated Jul 1, 2025)
- Instruct-tune LLaMA on consumer hardware (☆18,945, updated Jul 29, 2024)
- Prompt tuning toolkit for GPT-2 and GPT-Neo (☆90, updated Sep 27, 2021)
- Inference code for facebook LLaMA models with Wrapyfi support (☆128, updated Mar 16, 2023)
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… (☆2,519, updated Aug 13, 2024)
- Train transformer language models with reinforcement learning. (☆18,054, updated this week)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (☆20,929, updated Apr 10, 2026; see the LoRA sketch after this list)
- Heuristic Imperatives Assessment Framework - Assessing Ethical Alignment in AI: A Framework for Measuring Adherence to Heuristic Imperati… (☆21, updated Apr 25, 2023)
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters (☆5,925, updated Mar 14, 2024)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,870, updated Jun 10, 2024)
- Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models. (☆544, updated Jan 10, 2026)
- StableLM: Stability AI Language Models (☆15,727, updated Apr 8, 2024)
- Go ahead and axolotl questions (☆11,688, updated this week)
- Code and documentation to train Stanford's Alpaca models, and generate the data. (☆30,260, updated Jul 17, 2024)
- Large Language Model Text Generation Inference (☆10,841, updated Mar 21, 2026)
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset (☆7,535, updated Jul 16, 2023)
- The data processing pipeline for the Koala chatbot language model (☆118, updated Apr 6, 2023)
- ☆17, updated Mar 24, 2023
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,743, updated Jan 8, 2024)
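
Several of the repositories above (the GPT-J LoRA notebook, 🤗 PEFT, QLoRA) center on parameter-efficient fine-tuning of causal LLMs. As a rough point of reference, here is a minimal sketch of LoRA fine-tuning with Hugging Face Transformers and PEFT; the base model name, target modules, and hyperparameters are illustrative assumptions, not taken from any repository listed here.

```python
# Minimal LoRA fine-tuning sketch with Transformers + PEFT.
# The model name, target modules, and hyperparameters below are assumptions
# chosen for illustration, not settings from any repo in the list above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-neo-2.7B"   # assumed base model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```

After wrapping, the model trains like any other Transformers causal LM, for example with `transformers.Trainer` or a plain PyTorch loop.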