johnsmith0031 / alpaca_lora_4bit
☆534 · Updated last year
Alternatives and similar repositories for alpaca_lora_4bit
Users interested in alpaca_lora_4bit are comparing it to the libraries listed below.
- Tune any FALCON in 4-bit ☆466 · Updated last year
- ☆457 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,043 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆721 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆821 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated ☆1,554 · Updated 2 years ago
- Quantized inference code for LLaMA models ☆1,049 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU. ☆309 · Updated last year
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆689 · Updated 9 months ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend ☆408 · Updated 2 years ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- Repo for fine-tuning Causal LLMs ☆456 · Updated last year
- Official repository for LongChat and LongEval ☆518 · Updated last year
- ☆406 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆412 · Updated last year
- LLM that combines the principles of wizardLM and vicunaLM ☆715 · Updated 2 years ago
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆713 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- ☆412 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆3,050 · Updated 10 months ago
- Fast Inference Solutions for BLOOM ☆564 · Updated 7 months ago
- Falcon LLM ggml framework with CPU and GPU support ☆245 · Updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆171 · Updated last month
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- LOMO: LOw-Memory Optimization ☆986 · Updated 11 months ago
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆51 · Updated last year