vihangd / alpaca-qlora
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
☆80 · Updated 11 months ago
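QLoRA keeps the base model's weights frozen in 4-bit precision and trains only small low-rank adapter matrices. The adapter arithmetic itself can be sketched in plain NumPy; the shapes, rank, and scaling factor below are illustrative, not taken from this repo:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 4, 8  # illustrative sizes; LoRA rank r << min(d_out, d_in)

# Frozen base weight (in QLoRA this is stored 4-bit and dequantized for each matmul).
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init so the delta starts at 0

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients during finetuning
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# With B zero-initialized, the adapted model initially matches the base model exactly.
assert np.allclose(y, W @ x)
```

Because only `A` and `B` (a few percent of the parameters) are trained, optimizer state stays small enough for consumer GPUs, which is the point of the repos listed below.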
Related projects
Alternatives and complementary repositories for alpaca-qlora
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆112 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆97 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆64 · Updated last month
- Unofficial implementation of AlpaGasus ☆84 · Updated last year
- ☆103 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆74 · Updated 10 months ago
- Due to restriction of LLaMA, we try to reimplement BLOOM-LoRA (much less restricted BLOOM license here https://huggingface.co/spaces/bigs… ☆184 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- Evol-augment any dataset online ☆55 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 7 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated 9 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated 2 months ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆102 · Updated 3 months ago
- Open Source WizardCoder Dataset ☆153 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Length (ICLR 2024) ☆199 · Updated 6 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆416 · Updated 11 months ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆86 · Updated last year
- Reverse Instructions to generate instruction tuning data with corpus examples ☆207 · Updated 8 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆186 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆214 · Updated last year
- A pipeline for LLM knowledge distillation ☆78 · Updated 3 months ago
- ☆73 · Updated 10 months ago
- ☆93 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆144 · Updated 9 months ago
- ☆94 · Updated last year
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆141 · Updated last year
- A bagel, with everything. ☆312 · Updated 7 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year