vihangd / alpaca-qlora
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
☆81 · Updated last year
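The tagline above refers to QLoRA-style tuning: the base model's weights stay frozen (and quantized), and only a small low-rank LoRA adapter is trained on top. A minimal NumPy sketch of the LoRA update itself, with all names and sizes illustrative rather than taken from the repo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight (stands in for a quantized 4-bit weight in real QLoRA).
d_out, d_in, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))

# LoRA adapter: only A and B would be trained; B starts at zero so the
# adapted model initially matches the base model exactly.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16.0
scale = alpha / r  # standard LoRA scaling factor

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- the rank-r update is kept separate
    # from W, so only r * (d_in + d_out) adapter parameters need gradients.
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B = 0, the adapter contributes nothing and the output equals W @ x.
assert np.allclose(lora_forward(x), W @ x)
```

This is only the low-rank-update half of the idea; the quantization half (storing `W` in 4-bit NF4 and dequantizing on the fly) is what the repos below typically layer on top via libraries such as bitsandbytes.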
Alternatives and similar repositories for alpaca-qlora:
Users interested in alpaca-qlora are comparing it to the repositories listed below.
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆117 · Updated last year
- Reverse Instructions to generate instruction tuning data with corpus examples ☆208 · Updated last year
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆99 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated 9 months ago
- ☆94 · Updated last year
- evol augment any dataset online ☆58 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 11 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- ☆104 · Updated last year
- Finetune mistral-7b-instruct for sentence embeddings ☆79 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆140 · Updated 5 months ago
- Official code for ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L… ☆43 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆113 · Updated 6 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆233 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆102 · Updated 7 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆73 · Updated 4 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆150 · Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆179 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Due to the license restrictions of LLaMA, we try to reimplement BLOOM-LoRA (much less restricted BLOOM license here https://huggingface.co/spaces/bigs… ☆184 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆205 · Updated 7 months ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 6 months ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆89 · Updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · Updated last year
- ☆172 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆195 · Updated last year