Repo for fine-tuning Causal LLMs
☆458 · Mar 27, 2024 · Updated last year
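The "causal" in causal LLMs refers to the left-to-right, next-token objective these repositories fine-tune for. As a minimal sketch of that objective (not code from this repo; the toy scoring function is a hypothetical stand-in for a real model's softmax output):

```python
import math

def causal_lm_loss(token_ids, prob_of_next):
    """Average negative log-likelihood of each token given its prefix.

    token_ids    : list of ints, the training sequence
    prob_of_next : function (prefix, next_id) -> probability; a toy
                   stand-in for a real model's predicted distribution
    """
    losses = []
    for t in range(1, len(token_ids)):  # predict token t from tokens < t only
        p = prob_of_next(token_ids[:t], token_ids[t])
        losses.append(-math.log(p))
    return sum(losses) / len(losses)

# Toy "model" that assigns probability 0.25 to any next token:
loss = causal_lm_loss([1, 2, 3, 1], lambda prefix, nxt: 0.25)  # ln(4) ≈ 1.386
```

The listed fine-tuning frameworks (DeepSpeed, LoRA, QLoRA, etc.) all optimize this same shifted next-token loss; they differ in which parameters are updated and how memory is managed.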
Alternatives and similar repositories for Finetune_LLMs
Users interested in Finetune_LLMs are comparing it to the libraries listed below.
- ☆64 · Oct 1, 2021 · Updated 4 years ago
- ☆27 · Aug 10, 2021 · Updated 4 years ago
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… · ☆434 · Jun 14, 2023 · Updated 2 years ago
- Notebook for running GPT-Neo models based on GPT-3 · ☆62 · Aug 10, 2021 · Updated 4 years ago
- ☆34 · Aug 10, 2021 · Updated 4 years ago
- ☆15 · Mar 12, 2022 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0 · ☆56 · Jan 20, 2022 · Updated 4 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch · ☆110 · Aug 11, 2022 · Updated 3 years ago
- ☆50 · Jan 4, 2023 · Updated 3 years ago
- A GPT-J API to use with Python 3 to generate text, blogs, code, and more · ☆203 · Nov 12, 2022 · Updated 3 years ago
- Simple UI for LLM model finetuning · ☆2,062 · Dec 21, 2023 · Updated 2 years ago
- ☆131 · Jun 9, 2022 · Updated 3 years ago
- Model-parallel transformers in JAX and Haiku · ☆6,363 · Jan 21, 2023 · Updated 3 years ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo · ☆90 · Sep 27, 2021 · Updated 4 years ago
- An implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries · ☆7,395 · Feb 3, 2026 · Updated last month
- ☆535 · Dec 1, 2023 · Updated 2 years ago
- ☆457 · Oct 15, 2023 · Updated 2 years ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Ad… · ☆6,082 · Jul 1, 2025 · Updated 8 months ago
- Instruct-tune LLaMA on consumer hardware · ☆18,972 · Jul 29, 2024 · Updated last year
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for 2000-token context, 3.5 GB for 1000-token context). Model load… · ☆113 · Dec 23, 2021 · Updated 4 years ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… · ☆2,517 · Aug 13, 2024 · Updated last year
- Inference code for Facebook's LLaMA models with Wrapyfi support · ☆129 · Mar 16, 2023 · Updated 2 years ago
- Chatbot with personality using DialoGPT from Hugging Face (Rick bot) · ☆15 · Oct 6, 2021 · Updated 4 years ago
- Train transformer language models with reinforcement learning · ☆17,523 · Updated this week
- Happy Transformer makes it easy to fine-tune and perform inference with NLP transformer models · ☆542 · Jan 10, 2026 · Updated last month
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning · ☆20,717 · Updated this week
- Go ahead and axolotl questions · ☆11,395 · Updated this week
- Codebase for fine-tuning Llama 2 70B to generate math test questions and answers · ☆11 · Aug 30, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆10,843 · Jun 10, 2024 · Updated last year
- Large Language Model Text Generation Inference · ☆10,788 · Jan 8, 2026 · Updated last month
- ☆17 · Mar 24, 2023 · Updated 2 years ago
- ☆27 · May 11, 2023 · Updated 2 years ago
- Heuristic Imperatives Assessment Framework: Assessing Ethical Alignment in AI: A Framework for Measuring Adherence to Heuristic Imperati… · ☆21 · Apr 25, 2023 · Updated 2 years ago
- Code and documentation to train Stanford's Alpaca models and generate the data · ☆30,267 · Jul 17, 2024 · Updated last year
- StableLM: Stability AI Language Models · ☆15,756 · Apr 8, 2024 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour and 1.2M parameters · ☆5,933 · Mar 14, 2024 · Updated last year
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset · ☆7,533 · Jul 16, 2023 · Updated 2 years ago
- Coded with and corrected by Google Anti-Gravity · ☆13 · Nov 23, 2025 · Updated 3 months ago
- ☆22 · Apr 6, 2023 · Updated 2 years ago