Repo for fine-tuning causal LLMs
☆460 · Updated Mar 27, 2024
Alternatives and similar repositories for Finetune_LLMs
Users interested in Finetune_LLMs are comparing it to the libraries listed below.
- ☆64 · Updated Oct 1, 2021
- ☆27 · Updated Aug 10, 2021
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… ☆434 · Updated Jun 14, 2023
- Notebook for running GPT-Neo models based on GPT-3 ☆61 · Updated Aug 10, 2021
- Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA) ☆74 · Updated Jun 18, 2022
- ☆34 · Updated Aug 10, 2021
- A gym environment to train chatbots ☆21 · Updated May 19, 2022
- ☆131 · Updated Jun 9, 2022
- Simple UI for LLM model finetuning ☆2,061 · Updated Dec 21, 2023
- Model-parallel transformers in JAX and Haiku ☆6,365 · Updated Jan 21, 2023
- ☆457 · Updated Oct 15, 2023
- A GPT-J API to use with Python 3 to generate text, blogs, code, and more ☆203 · Updated Nov 12, 2022
- ☆50 · Updated Jan 4, 2023
- A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Model load… ☆113 · Updated Dec 23, 2021
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated Aug 11, 2022
- 🤗 Transformers: state-of-the-art natural language processing for PyTorch and TensorFlow 2.0 ☆56 · Updated Jan 20, 2022
- ☆27 · Updated May 11, 2023
- An implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,400 · Updated last month
- ☆535 · Updated Dec 1, 2023
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA, and LLaMA-Ad… ☆6,083 · Updated Jul 1, 2025
- Instruct-tune LLaMA on consumer hardware ☆18,961 · Updated Jul 29, 2024
- Prompt-tuning toolkit for GPT-2 and GPT-Neo ☆90 · Updated Sep 27, 2021
- Inference code for Facebook LLaMA models with Wrapyfi support ☆129 · Updated Mar 16, 2023
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆2,521 · Updated Aug 13, 2024
- Train transformer language models with reinforcement learning ☆17,781 · Updated this week
- 🤗 PEFT: state-of-the-art parameter-efficient fine-tuning ☆20,841 · Updated last week
- Heuristic Imperatives Assessment Framework: assessing ethical alignment in AI, a framework for measuring adherence to heuristic imperati… ☆21 · Updated Apr 25, 2023
- [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour and 1.2M parameters ☆5,932 · Updated Mar 14, 2024
- QLoRA: efficient finetuning of quantized LLMs ☆10,858 · Updated Jun 10, 2024
- Go ahead and axolotl questions ☆11,508 · Updated this week
- Happy Transformer makes it easy to fine-tune and perform inference with NLP transformer models ☆542 · Updated Jan 10, 2026
- StableLM: Stability AI language models ☆15,749 · Updated Apr 8, 2024
- Code and documentation to train Stanford's Alpaca models and generate the data ☆30,256 · Updated Jul 17, 2024
- Large language model text-generation inference ☆10,812 · Updated Jan 8, 2026
- Tune LLaMA-7B on the Alpaca dataset using PEFT / LoRA, based on @zphang's https://github.com/zphang/minimal-llama scripts ☆25 · Updated Mar 15, 2023
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,537 · Updated Jul 16, 2023
- The data-processing pipeline for the Koala chatbot language model ☆118 · Updated Apr 6, 2023
- ☆10 · Updated Apr 3, 2024
- ☆17 · Updated Mar 24, 2023