TianyiPeng / Colab_for_Alpaca_Lora
Here is a Google Colab notebook for fine-tuning Alpaca-LoRA (in under 3 hours on a 40 GB A100 GPU).
☆38 · Updated last year
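For context, notebooks like this one typically pair Hugging Face transformers with the peft library to attach low-rank adapters to a frozen base model. Below is a minimal sketch of that setup; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the notebook's exact configuration.

```python
# Minimal LoRA fine-tuning setup with transformers + peft.
# All specific values below (checkpoint, rank, target modules) are
# illustrative placeholders, not taken from the notebook itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "huggyllama/llama-7b"  # assumption: substitute the checkpoint the notebook uses

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # half precision keeps a 7B model within a single A100
    device_map="auto",
)

# Wrap the frozen base model with trainable low-rank adapters; only these
# small matrices are updated, which is what makes single-GPU finetuning cheap.
lora_config = LoraConfig(
    r=8,                                  # illustrative adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA attention layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

From here, training proceeds as ordinary supervised fine-tuning (e.g. a transformers Trainer over Alpaca-style instruction-response pairs), and only the adapter weights need to be saved afterwards.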
Alternatives and similar repositories for Colab_for_Alpaca_Lora:
Users interested in Colab_for_Alpaca_Lora are comparing it to the libraries listed below.
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆101 · Updated 6 months ago
- ☆33 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated 4 months ago
- Camel-Coder: Collaborative task completion with multiple agents. Role-based prompts, intervention mechanism, and thoughtful suggestions ☆33 · Updated last year
- ☆84 · Updated last year
- ☆37 · Updated last year
- Reinforcement Learning with Heuristic Imperatives - Finetuning LLMs for Post-Conventional Moral Intuition ☆64 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆100 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 9 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆66 · Updated 3 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆45 · Updated last year
- ☆74 · Updated last year
- Official code for the ACL 2023 (short, findings) paper "Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with L…" ☆42 · Updated last year
- Spherical-merge PyTorch/HF-format language models with minimal feature loss. ☆115 · Updated last year
- Based on the Tree of Thoughts paper ☆46 · Updated last year
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- ☆134 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- HuggingChat-like UI in Gradio ☆69 · Updated last year
- ☆74 · Updated last year
- ☆94 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆98 · Updated 4 months ago
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 8 months ago
- ☆51 · Updated 6 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆108 · Updated last year
- ☆57 · Updated last year
- Track the progress of LLM context utilisation ☆53 · Updated 6 months ago