mikeybellissimo / LoRA-MPT
A repo for finetuning MPT using LoRA. It is currently configured to work with the Alpaca dataset from Stanford but can easily be adapted to use another.
☆ 18 · Updated last year
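MPT ships custom modeling code with a fused `Wqkv` attention projection rather than the separate q/k/v layers LoRA configs usually target, so wiring up the adapter is the main repo-specific step. Below is a minimal sketch of attaching a LoRA adapter to MPT-7B with Hugging Face PEFT; it is not this repo's actual training script, and the checkpoint name, hyperparameters, and `Wqkv` target module are assumptions based on MosaicML's public MPT implementation:

```python
# Minimal sketch (assumptions noted above): attach a LoRA adapter to MPT-7B
# using Hugging Face PEFT, leaving the base weights frozen.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-7b"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MPT uses custom modeling code
)

lora_config = LoraConfig(
    r=8,                      # low-rank dimension (assumed hyperparameter)
    lora_alpha=32,            # LoRA scaling factor (assumed)
    target_modules=["Wqkv"],  # MPT's fused query/key/value projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```

From there, finetuning on Alpaca amounts to formatting each instruction/response pair into a prompt string and running a standard causal-LM training loop over the adapted model; swapping in another dataset, as the description suggests, mostly changes that formatting step.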
Related projects
Alternatives and complementary repositories for LoRA-MPT
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆ 101 · Updated 3 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆ 71 · Updated 9 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆ 69 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆ 112 · Updated last year
- Evol-augment any dataset online ☆ 55 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆ 77 · Updated 7 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆ 115 · Updated 10 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends its context limit ☆ 63 · Updated last year
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆ 80 · Updated 11 months ago
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆ 51 · Updated this week
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆ 53 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆ 58 · Updated last year
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions" ☆ 63 · Updated last year
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆ 109 · Updated last year
- Tune MPTs ☆ 84 · Updated last year
- Codebase accompanying the "Summary of a Haystack" paper ☆ 72 · Updated 2 months ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆ 141 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆ 107 · Updated last year
- Lightweight demos for finetuning LLMs, powered by 🤗 Transformers and open-source datasets ☆ 64 · Updated last month
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆ 111 · Updated 2 months ago