vihangd / alpaca-qlora
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
☆81 · Updated last year
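The core QLoRA idea behind this repo — keep the base weights frozen in 4-bit precision and train only small low-rank (LoRA) adapters on top — can be illustrated with a minimal, dependency-free sketch. This is a toy model of the technique, not the repo's actual code: real implementations use NF4 quantization and GPU kernels (e.g. bitsandbytes), and the rank, alpha, and weight values below are arbitrary.

```python
# Toy sketch of the QLoRA idea: a frozen, 4-bit-quantized base weight
# plus a trainable low-rank (LoRA) update. Illustrative only.

def absmax_quantize_4bit(w):
    """Symmetric 4-bit absmax quantization of a flat weight list (toy)."""
    scale = max(abs(x) for x in w) / 7.0  # signed int4 range: -7..7
    q = [round(x / scale) for x in w]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

def matvec(W, x):  # W is a list of rows
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

# Frozen base weight (2x2), stored quantized to 4 bits.
W = [[0.4, -0.2], [0.1, 0.7]]
flat = [v for row in W for v in row]
q, scale = absmax_quantize_4bit(flat)
deq = dequantize(q, scale)
W_q = [deq[0:2], deq[2:4]]

# Trainable LoRA adapters: delta_W = (alpha / r) * B @ A, rank r = 1.
r, alpha = 1, 2.0
A = [[0.1, 0.3]]          # r x in_features
B = [[0.05], [-0.02]]     # out_features x r

def lora_forward(x):
    base = matvec(W_q, x)   # frozen 4-bit path (dequantized on the fly)
    Ax = matvec(A, x)       # project down to the r-dim bottleneck
    delta = matvec(B, Ax)   # project back up to out_features
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

print(lora_forward([1.0, 1.0]))
```

During fine-tuning, only `A` and `B` receive gradients, which is why the approach fits on consumer hardware: optimizer state is kept only for the tiny adapter matrices, while the quantized base model stays read-only.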
Alternatives and similar repositories for alpaca-qlora
Users interested in alpaca-qlora are comparing it to the repositories listed below.
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. ☆135 · Updated last year
- Reverse Instructions to generate instruction-tuning data with corpus examples ☆214 · Updated last year
- Open Source WizardCoder Dataset ☆159 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆103 · Updated 2 years ago
- Scripts for fine-tuning Llama 2 via SFT and DPO. ☆203 · Updated last year
- A reimplementation of BLOOM-LoRA, motivated by LLaMA's license restrictions (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- ☆76 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- ☆95 · Updated 2 years ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆94 · Updated last year
- ☆270 · Updated 2 years ago
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆227 · Updated 2 years ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆245 · Updated last year
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆101 · Updated last year
- Evol-augment any dataset online ☆60 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆205 · Updated last year
- A repository to perform self-instruct with a model on the HF Hub ☆33 · Updated last year
- Scripts for generating synthetic fine-tuning data for reducing sycophancy. ☆113 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated 11 months ago
- Pre-training code for the Amber 7B LLM ☆167 · Updated last year
- ☆180 · Updated 2 years ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆309 · Updated 10 months ago
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆458 · Updated last year
- Code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated 2 years ago
- ☆311 · Updated last year
- ☆104 · Updated 2 years ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆78 · Updated last year
- A bagel, with everything. ☆323 · Updated last year