clcarwin / alpaca-weight
Train LLaMA with LoRA on a single RTX 4090, then merge the LoRA weights into the base model so it works like Stanford Alpaca.
☆50 · Updated last year
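
The trick the repo packages — training LoRA adapters on one GPU and then folding them back into the base weights — looks roughly like the sketch below. This is a minimal illustration assuming the Hugging Face `transformers` and `peft` libraries, not clcarwin's actual script; all paths are placeholders.

```python
# Minimal sketch (not the repo's actual script): fine-tuned LoRA weights are
# folded into the base LLaMA weights so the result loads as a plain
# Alpaca-style checkpoint. Assumes the Hugging Face `transformers` and
# `peft` libraries; all paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/llama-base")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# LoRA stores low-rank updates A, B per layer; merging computes
# W' = W + scaling * (B @ A) and removes the adapter wrappers.
merged = model.merge_and_unload()

merged.save_pretrained("path/to/merged-model")
AutoTokenizer.from_pretrained("path/to/llama-base").save_pretrained("path/to/merged-model")
```

Once merged, the checkpoint no longer depends on `peft` at load time, which is what makes it usable as a drop-in Alpaca-style model.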
Alternatives and similar repositories for alpaca-weight:
Users interested in alpaca-weight are comparing it to the repositories listed below.
- 4-bit quantization of SantaCoder using GPTQ ☆53 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 7 months ago
- minichatgpt - To Train ChatGPT In 5 Minutes ☆167 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- Langport is a language model inference service ☆94 · Updated 4 months ago
- ☆456 · Updated last year
- Instruct-tune LLaMA on consumer hardware with ShareGPT data ☆122 · Updated last year
- 4-bit quantization of LLaMA using GPTQ (see the GPTQ sketch after this list) ☆131 · Updated last year
- Official repository for LongChat and LongEval ☆519 · Updated 8 months ago
- A dataset featuring diverse dialogues between two ChatGPT (gpt-3.5-turbo) instances with system messages written by GPT-4. Covering vario… ☆165 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆203 · Updated 5 months ago
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated last year
- Inference code for Facebook's LLaMA models with Wrapyfi support ☆130 · Updated last year
- ☆81 · Updated 8 months ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 · Updated last year
- Inference script for Meta's LLaMA models using Hugging Face wrapper ☆111 · Updated last year
- Experiments on speculative sampling with LLaMA models (see the speculative-decoding sketch after this list) ☆123 · Updated last year
- A finetuning pipeline for instruct-tuning Raven 14bn using QLoRA 4-bit and the Ditty finetuning library ☆28 · Updated 7 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆410 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs (see the QLoRA sketch after this list) ☆77 · Updated 9 months ago
- ☆122 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆117 · Updated last year
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated last year
- Visual Studio Code extension for WizardCoder ☆145 · Updated last year
- ☆536 · Updated last year
- ☆266 · Updated last year
- Python bindings for llama.cpp (basic usage sketch after this list) ☆198 · Updated last year
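
Several entries above (the SantaCoder and LLaMA repos) apply GPTQ post-training 4-bit quantization. Those repos ship their own standalone scripts, but the same idea can be sketched with the AutoGPTQ library; paths and the calibration text below are placeholders.

```python
# Hedged sketch of 4-bit GPTQ quantization using the AutoGPTQ library,
# not the linked repos' own scripts. Paths and calibration text are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_path = "path/to/llama-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

# 4-bit weights, quantized in groups of 128 columns.
config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(model_path, config)

# GPTQ needs a small calibration set to minimize per-layer quantization error;
# real runs use a few hundred samples rather than one toy sentence.
examples = [tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")]
model.quantize(examples)
model.save_quantized("path/to/llama-gptq-4bit")
```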
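The speculative-sampling entry tests the draft-then-verify idea: a small model proposes several tokens and the large model checks them in a single forward pass. Below is a simplified greedy variant as a sketch, not the linked repo's code; it assumes Hugging Face-style causal LMs (call outputs expose `.logits`) and batch size 1.

```python
# Simplified greedy speculative decoding: a small draft model proposes k
# tokens, the target model verifies them in one forward pass, and the
# longest prefix matching the target's own greedy choice is kept.
import torch

@torch.no_grad()
def speculative_step(draft, target, input_ids, k=4):
    n = input_ids.shape[1]

    # 1. Draft model proposes k tokens autoregressively (greedy).
    ids = input_ids
    for _ in range(k):
        logits = draft(ids).logits[:, -1, :]
        ids = torch.cat([ids, logits.argmax(-1, keepdim=True)], dim=-1)

    # 2. Target scores the whole proposal at once. Logits at position i
    #    predict token i + 1, so positions n-1 ... n+k-1 cover the k
    #    proposed tokens plus one "bonus" token.
    choice = target(ids).logits[:, n - 1:, :].argmax(-1)   # shape [1, k + 1]
    proposed = ids[:, n:]                                  # shape [1, k]

    # 3. Keep the longest prefix where draft and target agree, then append
    #    the target's next token (the bonus token if everything matched).
    matches = (proposed == choice[:, :-1])[0].long()
    accepted = int(matches.cumprod(0).sum())
    return torch.cat([input_ids, proposed[:, :accepted],
                      choice[:, accepted:accepted + 1]], dim=-1)
```

Each call advances decoding by 1 to k + 1 tokens while charging the target model only one forward pass, which is where the speedup comes from.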
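The QLoRA entries combine a frozen base model quantized to 4-bit NF4 with trainable LoRA adapters on top. A hedged configuration sketch, assuming the `transformers`, `bitsandbytes`, and `peft` libraries; the model path and LoRA hyperparameters are placeholders.

```python
# QLoRA setup sketch: load the frozen base model in 4-bit NF4 via
# bitsandbytes, then attach trainable LoRA adapters. Model path and
# hyperparameters are placeholders, not values from the linked repos.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 from the QLoRA paper
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained("path/to/llama-base",
                                             quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```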
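Finally, the llama.cpp bindings entry wraps CPU/GPU inference behind a small Python API. Basic usage looks like the snippet below, assuming the `llama-cpp-python` package; the model path is a placeholder, and the expected file format (GGML vs. GGUF) depends on the llama.cpp version the bindings were built against.

```python
# Basic usage of Python bindings for llama.cpp, assuming the
# `llama-cpp-python` package; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="path/to/model.gguf", n_ctx=2048)
out = llm("### Instruction:\nName three planets.\n\n### Response:\n",
          max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```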