clcarwin / alpaca-weight
Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights into the base model to work like Stanford Alpaca.
☆51 · Updated 2 years ago
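The repository's approach, training low-rank adapter matrices and then folding them back into the base weights, reduces to the update W' = W + (alpha / r) * B A. A minimal pure-Python sketch of that merge step, with toy shapes and a hypothetical helper name (real merges operate on the model's actual linear layers, e.g. via PEFT):

```python
def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), computed with plain nested lists.

    W: base weight matrix (out x in)
    B: LoRA down-projection output (out x r)
    A: LoRA up-projection input (r x in)
    alpha, r: LoRA scaling hyperparameters.
    """
    scale = alpha / r
    rows, cols = len(W), len(W[0])
    merged = [row[:] for row in W]  # copy so the base weights are untouched
    for i in range(rows):
        for j in range(cols):
            # Low-rank update: (B @ A)[i][j], summed over the rank dimension.
            update = sum(B[i][k] * A[k][j] for k in range(r))
            merged[i][j] += scale * update
    return merged

# Toy example: a 2x2 identity base weight with a rank-1 LoRA update.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2 x r
A = [[0.5, 0.5]]     # r x 2
print(lora_merge(W, A, B, alpha=1.0, r=1))  # → [[1.5, 0.5], [1.0, 2.0]]
```

After merging, the adapter matrices can be discarded and the model runs at the base model's inference cost, which is what lets the merged checkpoint "work as Stanford Alpaca".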
Alternatives and similar repositories for alpaca-weight
Users interested in alpaca-weight are comparing it to the repositories listed below.
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware ☆74 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Instruct-tune LLaMA on consumer hardware with ShareGPT data ☆126 · Updated 2 years ago
- Langport is a language model inference service ☆93 · Updated 9 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated 2 years ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human-knowledge prompts ☆110 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Inference code for LLaMA 2 models ☆30 · Updated 11 months ago
- 4-bit quantization of LLaMA using GPTQ ☆129 · Updated 2 years ago
- ☆73 · Updated last year
- ChatGPT-like web UI for RWKVstic ☆100 · Updated 2 years ago
- ☆82 · Updated last year
- LoRA weights for Cerebras-GPT-2.7B finetuned on the Alpaca dataset with a shorter prompt ☆63 · Updated 2 years ago
- ☆458 · Updated last year
- Merge Transformers language models by use of gradient parameters ☆206 · Updated 10 months ago
- Inference code for Facebook's LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ☆71 · Updated 2 years ago
- Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools ☆138 · Updated 2 years ago
- Code and models for BERT on STILTs ☆53 · Updated 2 years ago
- The Paddle implementation of Meta's LLaMA ☆45 · Updated 2 years ago
- Official repository for LongChat and LongEval ☆521 · Updated last year
- Open Instruction Generalist, an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Fine-tune SantaCoder for code/text generation ☆192 · Updated 2 years ago