gaurangbharti1 / wealth-alpaca
Training Script and Dataset for Wealth Alpaca-LoRa
☆16Updated 2 years ago
Alternatives and similar repositories for wealth-alpaca
Users that are interested in wealth-alpaca are comparing it to the libraries listed below
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else.☆339Updated 2 years ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA☆81Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA☆104Updated 8 months ago
- Due to LLaMA's license restrictions, we reimplement BLOOM-LoRA (BLOOM's much less restrictive license here: https://huggingface.co/spaces/bigs…☆184Updated 2 years ago
- Code for fine-tuning the Platypus family of LLMs using LoRA☆630Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO.☆207Updated 2 years ago
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries.☆69Updated 2 years ago
- Fine-tuning LLMs using QLoRA☆269Updated last year
- ☆122Updated 2 years ago
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning☆310Updated last year
- What can I do with an LLM?☆156Updated 9 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG☆333Updated last year
- Fine-Tuning Embedding for RAG with Synthetic Data☆524Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining☆737Updated last year
- ☆34Updated 2 years ago
- 📚 Datasets and models for instruction-tuning☆238Updated 2 years ago
- A bagel, with everything.☆326Updated last year
- batched loras☆349Updated 2 years ago
- ☆86Updated 2 years ago
- ☆95Updated 2 years ago
- A minimum example of aligning language models with RLHF similar to ChatGPT☆225Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data☆509Updated 2 years ago
- Here is a Google Colab Notebook for fine-tuning Alpaca Lora (within 3 hours with a 40GB A100 GPU)☆38Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs☆73Updated 2 years ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation☆115Updated last year
- An open collection of methodologies to help with successful training of large language models.☆550Updated last year
- ☆218Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as…☆358Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…☆146Updated 2 years ago
- Small finetuned LLMs for a diverse set of useful tasks☆127Updated 2 years ago
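Most repositories above apply LoRA or QLoRA fine-tuning. The core idea they share is to freeze the base weight matrix W and train only a low-rank update, so the effective weight becomes W + (alpha / r) * B @ A, with B zero-initialized so training starts from the base model's behavior. The sketch below illustrates that arithmetic in plain Python with toy matrices; the class and helper names (`LoRALinear`, `matmul`) are illustrative and are not the PEFT library's API.

```python
# Illustrative sketch of the LoRA low-rank update (not the PEFT API).
# Effective weight: W' = W + (alpha / r) * B @ A, where A is r x d_in
# and B is d_out x r. Only A and B would be trained; W stays frozen.
import random

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    """Elementwise sum of two same-shaped matrices."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

class LoRALinear:
    def __init__(self, W, r=2, alpha=4):
        d_out, d_in = len(W), len(W[0])
        self.W = W                    # frozen base weight
        self.scale = alpha / r
        # A: small random init; B: zeros, so B @ A = 0 at the start
        self.A = [[random.gauss(0.0, 0.02) for _ in range(d_in)]
                  for _ in range(r)]
        self.B = [[0.0] * r for _ in range(d_out)]

    def weight(self):
        """Effective weight W + (alpha / r) * B @ A."""
        BA = matmul(self.B, self.A)
        return add(self.W, [[self.scale * v for v in row] for row in BA])

    def forward(self, x):
        """Apply the effective weight to a column vector x."""
        return [sum(w * xi for w, xi in zip(row, x))
                for row in self.weight()]
```

Because B starts at zero, `forward` initially reproduces the frozen layer exactly; the trainable parameter count is r * (d_in + d_out) per adapted matrix, which is why these repos can fine-tune on consumer hardware.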