clcarwin / alpaca-weight
Train LLaMA with LoRA on a single RTX 4090, then merge the LoRA weights into the base model so it works like Stanford Alpaca.
☆51 · Updated last year
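The "merge weights of LoRA" step in the description can be sketched in plain NumPy. This is a minimal illustration of the underlying math (W' = W + (α/r)·B·A), not this repository's code; all names and shapes here are made up:

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA adapter into a frozen weight matrix.

    LoRA learns a low-rank update delta = B @ A (B: d_out x r, A: r x d_in),
    scaled by alpha / r. Merging adds that update to W once, so inference
    afterwards needs no adapter at all.
    """
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))  # B is initialized to zero, so a fresh adapter is a no-op

# Merging an untrained adapter leaves W unchanged
assert np.allclose(merge_lora(W, A, B, alpha=16, r=r), W)
```

In practice, libraries such as Hugging Face PEFT perform this fold per target layer; the merged model is then saved as an ordinary checkpoint, which is why the result "works as Stanford Alpaca" without any LoRA-aware runtime.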
Alternatives and similar repositories for alpaca-weight
Users interested in alpaca-weight are comparing it to the repositories listed below.
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- ☆73 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- minichatgpt - To Train ChatGPT In 5 Minutes ☆167 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆74 · Updated last year
- Merge Transformers language models using gradient parameters ☆208 · Updated 9 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated 10 months ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- Tune MPTs ☆84 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆111 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆130 · Updated last year
- Open Instruction Generalist: an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- ☆124 · Updated last year
- Inference code for Facebook's LLaMA models with Wrapyfi support ☆130 · Updated 2 years ago
- ☆535 · Updated last year
- Patch for MPT-7B that allows using and training a LoRA ☆58 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with a LLaMA implementation ☆71 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- ☆458 · Updated last year
- A dataset featuring diverse dialogues between two ChatGPT (gpt-3.5-turbo) instances with system messages written by GPT-4. Covering vario… ☆166 · Updated 2 years ago
- ☆82 · Updated last year
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆226 · Updated last year
- ☆42 · Updated 2 years ago