TianyiPeng / Colab_for_Alpaca_Lora
A Google Colab notebook for fine-tuning Alpaca-LoRA (about 3 hours on a 40 GB A100 GPU).
☆38 · Updated 2 years ago
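The notebook itself is not reproduced here, but for orientation, below is a minimal sketch of the kind of parameter-efficient fine-tuning it performs, assuming the Hugging Face transformers + peft stack. The base checkpoint name and the LoRA hyperparameters are illustrative assumptions, not the notebook's exact settings.

```python
# Minimal LoRA fine-tuning setup sketch (assumed transformers + peft stack).
# The checkpoint name and hyperparameters are placeholders, not the notebook's values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "huggyllama/llama-7b"  # assumption: any LLaMA-style causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # half precision to fit a single A100
    device_map="auto",
)

# LoRA injects small trainable rank-decomposition matrices into the attention
# projections, so only a tiny fraction of the parameters is updated.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Training then typically proceeds with a standard transformers training loop over the Alpaca instruction data; the repositories below cover variations of the same recipe (QLoRA, other base models, multi-adapter loading, and so on).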
Alternatives and similar repositories for Colab_for_Alpaca_Lora
Users interested in Colab_for_Alpaca_Lora are comparing it to the libraries listed below
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 8 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- ☆84 · Updated 2 years ago
- An all-new language model that processes ultra-long sequences of 100,000+ tokens, ultra-fast ☆150 · Updated last year
- Problem solving by engaging multiple AI agents in conversation with each other and the user ☆236 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated 2 years ago
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆228 · Updated 2 years ago
- Local LLM ReAct Agent with Guidance ☆159 · Updated 2 years ago
- Weekly visualization report of open LLM model performance based on 4 metrics ☆86 · Updated 2 years ago
- This repository implements the Chain-of-Verification paper by Meta AI ☆191 · Updated 2 years ago
- A generative agent implementation for LLaMA-based models, derived from LangChain's implementation ☆178 · Updated 2 years ago
- Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools ☆144 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- ☆278 · Updated 2 years ago
- Implementation of Google's SELF-DISCOVER ☆301 · Updated last year
- My implementation of "Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models" ☆99 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- Official code for the ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models" ☆45 · Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes, …) ☆146 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach ☆169 · Updated 2 years ago
- Merge Transformers language models using gradient parameters ☆213 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆157 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Updated 2 years ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆81 · Updated 2 years ago
- [NeurIPS '23 Spotlight] Thought Cloning: Learning to Think while Acting by Imitating Human Thinking ☆267 · Updated last year