QLoRA with Enhanced Multi GPU Support
☆38 · Aug 8, 2023 · Updated 2 years ago
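For context on what these repositories build on: QLoRA combines two ingredients, 4-bit quantization of the frozen base weights and a trainable low-rank (LoRA) update on top. Below is a minimal, dependency-free sketch of that idea. It is conceptual only: it uses 16 uniform quantization levels instead of the real NF4 code, and a rank-1 update instead of rank-r; it is not the bitsandbytes/peft implementation.

```python
# Conceptual sketch of QLoRA's two ingredients (NOT the bitsandbytes/peft
# implementation): 4-bit quantization of frozen base weights, plus a
# trainable low-rank (LoRA) correction applied at forward time.

def quantize_4bit(weights):
    """Map each weight to the nearest of 16 evenly spaced levels in [-1, 1].
    (Real NF4 uses levels informed by the normal distribution of weights.)"""
    absmax = max(abs(w) for w in weights) or 1.0
    levels = [-1.0 + 2.0 * i / 15 for i in range(16)]
    codes = []
    for w in weights:
        x = w / absmax  # scale into [-1, 1]
        codes.append(min(range(16), key=lambda i: abs(levels[i] - x)))
    return codes, absmax  # 4-bit codes plus one per-block scale

def dequantize_4bit(codes, absmax):
    levels = [-1.0 + 2.0 * i / 15 for i in range(16)]
    return [levels[c] * absmax for c in codes]

def lora_forward(w_dequant, a, b, x):
    """y = (W + B·A) x for one output row: frozen dequantized base weights
    plus a rank-1 LoRA correction (b: scalar for this row, a: vector)."""
    base = sum(wi * xi for wi, xi in zip(w_dequant, x))
    delta = b * sum(ai * xi for ai, xi in zip(a, x))
    return base + delta

w = [0.31, -0.72, 0.05, 0.44]          # frozen base weights
codes, scale = quantize_4bit(w)         # stored in 4 bits each
w_hat = dequantize_4bit(codes, scale)   # dequantized on the fly
a, b = [0.1, 0.0, -0.1, 0.2], 0.5       # trainable LoRA factors
x = [1.0, 2.0, -1.0, 0.5]               # input activations
print(round(lora_forward(w_hat, a, b, x), 4))
```

Only `a` and `b` would receive gradients during training; the 4-bit codes stay frozen, which is what keeps memory low enough for single- or few-GPU finetuning.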
Alternatives and similar repositories for qlora-multi-gpu
Users interested in qlora-multi-gpu are comparing it to the libraries listed below.
- ☆13 · Aug 23, 2024 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆101 · May 30, 2023 · Updated 2 years ago
- ☆22 · Aug 27, 2023 · Updated 2 years ago
- Official PyTorch implementation of QA-LoRA ☆145 · Mar 13, 2024 · Updated 2 years ago
- Datasets and code from our paper, where we use machine learning to predict whether ChatGPT will refuse a given prompt. ☆38 · Sep 23, 2023 · Updated 2 years ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Aug 25, 2023 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆83 · Sep 10, 2023 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Mar 2, 2024 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit ☆63 · Jun 21, 2023 · Updated 2 years ago
- Code for cleaning benchmark data out of your training data to help combat data snooping. ☆28 · Apr 21, 2023 · Updated 2 years ago
- Build modern UIs in Jupyter with Python ☆12 · Dec 28, 2022 · Updated 3 years ago
- CUDA extensions for PyTorch ☆12 · Dec 2, 2025 · Updated 3 months ago
- A chat implementation for FastHTML ☆12 · Sep 14, 2025 · Updated 6 months ago
- A library for squeakily cleaning and filtering language datasets. ☆50 · Jul 10, 2023 · Updated 2 years ago
- ☆18 · Apr 3, 2023 · Updated 2 years ago
- Helpers and such for working with Lambda Cloud ☆52 · Nov 7, 2023 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Sep 22, 2025 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆206 · Aug 10, 2024 · Updated last year
- ☆13 · Feb 18, 2024 · Updated 2 years ago
- A pipeline that uses API calls to agnostically convert unstructured data into structured training data ☆32 · Sep 22, 2024 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆181 · May 2, 2024 · Updated last year
- Official documentation for the DSPy library ☆21 · Updated this week
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Oct 19, 2023 · Updated 2 years ago
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platform, such as AWS Lambda. By Prithivi Da; PRs welcome. ☆23 · Mar 4, 2024 · Updated 2 years ago
- Webpage of "Portrait4D-v2: Pseudo Multi-View Data Creates Better 4D Head Synthesizer" ☆11 · Jul 2, 2024 · Updated last year
- A miniature AI training framework for PyTorch ☆43 · Feb 1, 2025 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆76 · Oct 10, 2023 · Updated 2 years ago
- ☆63 · Sep 23, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Apr 10, 2024 · Updated last year
- ☆21 · Mar 3, 2025 · Updated last year
- ☆74 · Sep 5, 2023 · Updated 2 years ago
- Easily create LLM automation/agent workflows ☆60 · Feb 13, 2024 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- ☆10 · Nov 19, 2023 · Updated 2 years ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · May 18, 2023 · Updated 2 years ago
- Limit the number of requests to your FastAPI app. ☆10 · Jul 4, 2023 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆233 · Oct 31, 2024 · Updated last year
- ModuleFormer is an MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆226 · Sep 18, 2025 · Updated 6 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes, …) ☆145 · Oct 17, 2023 · Updated 2 years ago