gaurangbharti1 / wealth-alpaca
Training Script and Dataset for Wealth Alpaca-LoRa
☆16 · Updated 2 years ago
Alternatives and similar repositories for wealth-alpaca
Users interested in wealth-alpaca are comparing it to the libraries listed below.
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated last year
- Fine-tuning LLMs using QLoRA ☆266 · Updated last year
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries. ☆69 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 5 months ago
- ☆123 · Updated 2 years ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆356 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- ☆78 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆205 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated 2 years ago
- ☆88 · Updated last year
- Fine-tune and quantize Llama-2-like models to generate Python code using QLoRA, Axolotl, etc. ☆64 · Updated last year
- A bagel, with everything. ☆324 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆724 · Updated last year
- Experiments with inference on llama ☆103 · Updated last year
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆329 · Updated last year
- Tune any FALCON in 4-bit ☆464 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- 📚 Datasets and models for instruction-tuning ☆237 · Updated 2 years ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning ☆308 · Updated last year
- Fine-Tuning Embedding for RAG with Synthetic Data ☆514 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Small finetuned LLMs for a diverse set of useful tasks ☆126 · Updated 2 years ago
- ☆415 · Updated 2 years ago
- Due to the restrictions of LLaMA, we try to reimplement BLOOM-LoRA (much less restricted BLOOM license here: https://huggingface.co/spaces/bigs… ☆184 · Updated 2 years ago
- Repository for organizing datasets and papers used in Open LLM. ☆101 · Updated 2 years ago