rasbt / dora-from-scratch
LoRA and DoRA from Scratch Implementations
☆199 · Updated last year
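For context on what the repo implements: LoRA adds a trainable low-rank update B·A to a frozen pretrained weight, and DoRA additionally decomposes the adapted weight into a trainable magnitude and a normalized direction. Below is a minimal PyTorch sketch of both ideas, not the repo's actual code; the class names, rank `r`, and `alpha` scaling are illustrative assumptions, and the column-wise norm follows the DoRA paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = W0 x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus scaled low-rank correction (x @ A^T @ B^T)
        return self.base(x) + self.scaling * F.linear(F.linear(x, self.A), self.B)

class DoRALinear(LoRALinear):
    """DoRA sketch: reparameterize the adapted weight as magnitude * unit direction,
    W' = m * (W0 + scaling * B A) / ||W0 + scaling * B A||_col, with m trainable."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__(base, r, alpha)
        # trainable magnitude vector, initialized to the column-wise norm of W0
        self.m = nn.Parameter(base.weight.norm(p=2, dim=0, keepdim=True).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.base.weight + self.scaling * (self.B @ self.A)  # adapted weight, shape (out, in)
        w = self.m * (w / w.norm(p=2, dim=0, keepdim=True))      # renormalize direction, apply magnitude
        return F.linear(x, w, self.base.bias)

# usage: wrap an existing layer; only A, B (and m for DoRA) receive gradients
layer = DoRALinear(nn.Linear(128, 64))
out = layer(torch.randn(4, 128))  # shape (4, 64)
```

Because B is initialized to zeros, both adapters start out computing exactly the base layer's function, so fine-tuning begins from the pretrained model's behavior.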
Alternatives and similar repositories for dora-from-scratch:
Users interested in dora-from-scratch are comparing it to the repositories listed below.
- Implementation of DoRA · ☆291 · Updated 9 months ago
- ☆213 · Updated 9 months ago
- An extension of the nanoGPT repository for training small MoE models. · ☆109 · Updated 3 weeks ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. · ☆195 · Updated 8 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind · ☆174 · Updated 6 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 · ☆279 · Updated last month
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning · ☆354 · Updated 7 months ago
- From-scratch implementation of a vision language model in pure PyTorch · ☆207 · Updated 10 months ago
- ☆195 · Updated 3 months ago
- Official PyTorch implementation of QA-LoRA · ☆129 · Updated last year
- Prune transformer layers · ☆68 · Updated 10 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" · ☆287 · Updated 10 months ago
- Set of scripts to fine-tune LLMs · ☆37 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation · ☆331 · Updated 4 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. · ☆90 · Updated 3 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models · ☆227 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters · ☆253 · Updated 8 months ago
- Minimal GRPO implementation from scratch · ☆65 · Updated 2 weeks ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM · ☆54 · Updated 11 months ago
- Official repository for Inheritune. · ☆111 · Updated last month
- ☆220 · Updated 9 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free · ☆230 · Updated 5 months ago
- ☆158 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ☆509 · Updated 5 months ago
- Implementation of the Llama architecture with RLHF + Q-learning · ☆163 · Updated 2 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ☆123 · Updated 11 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆155 · Updated 9 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch · ☆99 · Updated last year
- Code repository for Black Mamba · ☆243 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆150 · Updated 3 months ago