clcarwin / alpaca-weight
Train LLaMA with LoRA on a single RTX 4090, then merge the LoRA weights into the base model so it works like Stanford Alpaca.
☆50 · Updated last year
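As a rough illustration of the merge step described above, here is a minimal sketch using the Hugging Face transformers and peft libraries; it is not this repository's actual script, and all model and adapter paths are placeholders.

```python
# Minimal LoRA-merge sketch (assumes transformers + peft are installed;
# all paths are placeholders, not files shipped with this repo).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")   # base LLaMA weights
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")   # attach the trained LoRA
merged = model.merge_and_unload()   # fold the low-rank deltas into the base weights
merged.save_pretrained("path/to/alpaca-merged")  # standalone Alpaca-style checkpoint
```

After merging, the checkpoint loads like any plain LLaMA model, with no peft dependency needed at inference time.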
Related projects
Alternatives and complementary repositories for alpaca-weight
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 4 months ago
- Instruct-tune LLaMA on consumer hardware with ShareGPT data ☆121 · Updated last year
- ☆81 · Updated 6 months ago
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆53 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ☆70 · Updated last year
- Example of Alpaca-LoRA with LlamaIndex ☆31 · Updated last year
- ChatGPT-like web UI for RWKVstic ☆100 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆145 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆111 · Updated last year
- ☆72 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆124 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for LLaMA models ☆36 · Updated last year
- Langport is a language model inference service ☆93 · Updated 2 months ago
- minichatgpt - to train ChatGPT in 5 minutes ☆167 · Updated last year
- Official repository for LongChat and LongEval ☆512 · Updated 6 months ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 · Updated last year
- Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools ☆136 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆97 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best… ☆407 · Updated last year
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆307 · Updated 9 months ago
- 4-bit quantization of LLaMA using GPTQ ☆130 · Updated last year
- A finetuning pipeline for instruct-tuning Raven 14B using 4-bit QLoRA and the Ditty finetuning library ☆28 · Updated 5 months ago
- A pipeline-parallel training script for LLMs ☆83 · Updated this week
- 📖 Notebooks related to RWKV ☆59 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss (a SLERP sketch follows this list) ☆112 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… (adapter switching is also sketched below) ☆144 · Updated 9 months ago
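The spherical-merge entry above refers to SLERP-style weight merging. The sketch below is a generic SLERP over two weight tensors, assuming plain PyTorch; it is not that repository's implementation, and `t` is simply the interpolation fraction.

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)       # unit directions
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))  # angle between them
    so = torch.sin(omega)
    if so.abs() < eps:
        out = (1.0 - t) * a + t * b   # nearly colinear: fall back to plain lerp
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w_a.shape).to(w_a.dtype)
```

Interpolating along the hypersphere rather than a straight line is what such tools mean by preserving features: it keeps the interpolated weights at a comparable norm instead of averaging them toward zero.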
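The last entry describes multi-adapter serving. Here is a minimal sketch of loading two LoRA adapters onto one base model and switching between them, assuming the Hugging Face peft adapter API; the adapter names and paths are hypothetical.

```python
# Minimal multi-LoRA sketch (assumes transformers + peft; paths are placeholders).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")
model = PeftModel.from_pretrained(base, "path/to/lora-code", adapter_name="code")
model.load_adapter("path/to/lora-chat", adapter_name="chat")  # second adapter, same base

model.set_adapter("chat")  # route generation through the "chat" LoRA
# ... generate ...
model.set_adapter("code")  # switch to the "code" LoRA without reloading the base model
```

Because the base weights stay resident and only the small low-rank adapters differ, switching adapters per request is far cheaper than keeping one fully merged model per task.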