rasbt / dora-from-scratch
LoRA and DoRA from Scratch Implementations
☆195 · Updated 10 months ago
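For context, the sketch below shows the core idea this repo implements, following the weight decomposition described in the DoRA paper (arXiv:2402.09353): the adapted weight is re-expressed as a trainable magnitude vector times a unit-norm direction, with a standard LoRA update folded into the direction. This is a minimal illustration only; the class names, initialization, and hyperparameters are assumptions, not necessarily the repo's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALayer(nn.Module):
    """Low-rank update x -> alpha * (x @ A @ B); B is zero-initialized
    so the wrapped layer starts out identical to the pretrained one."""
    def __init__(self, in_dim, out_dim, rank, alpha):
        super().__init__()
        self.A = nn.Parameter(torch.randn(in_dim, rank) / rank**0.5)
        self.B = nn.Parameter(torch.zeros(rank, out_dim))
        self.alpha = alpha

    def forward(self, x):
        return self.alpha * (x @ self.A @ self.B)

class LinearWithDoRA(nn.Module):
    """DoRA: decompose (W + alpha * (A @ B)^T) into magnitude * direction,
    training the magnitude vector m jointly with the low-rank factors."""
    def __init__(self, linear, rank, alpha):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():  # pretrained weights stay frozen
            p.requires_grad_(False)
        self.lora = LoRALayer(linear.in_features, linear.out_features, rank, alpha)
        # m starts as the column-wise L2 norms of the pretrained weight
        self.m = nn.Parameter(linear.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x):
        lora_update = self.lora.A @ self.lora.B              # (in_dim, out_dim)
        combined = self.linear.weight + self.lora.alpha * lora_update.T
        direction = combined / combined.norm(p=2, dim=0, keepdim=True)
        return F.linear(x, self.m * direction, self.linear.bias)

# Hypothetical usage: wrap a pretrained layer; only m, A, and B receive gradients.
layer = LinearWithDoRA(nn.Linear(768, 768), rank=8, alpha=16)
out = layer(torch.randn(2, 768))
```

Plain LoRA corresponds to dropping the normalization and magnitude vector and applying W + alpha * (A @ B)^T directly; DoRA's separate training of magnitude and direction is what distinguishes the two methods.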
Alternatives and similar repositories for dora-from-scratch:
Users interested in dora-from-scratch are comparing it to the libraries listed below.
- Implementation of DoRA ☆288 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆190 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆249 · Updated 6 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆383 · Updated last month
- Implementation of the Llama architecture with RLHF + Q-learning ☆157 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆261 · Updated 3 weeks ago
- From-scratch implementation of a vision language model in pure PyTorch ☆192 · Updated 8 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text & video processing ability ☆89 · Updated last month
- An Open Source Toolkit For LLM Distillation ☆442 · Updated 3 weeks ago
- Efficient LLM Inference over Long Sequences ☆349 · Updated last month
- Code used for the "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆87 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆255 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆387 · Updated 9 months ago
- A comprehensive deep dive into the world of tokens ☆215 · Updated 7 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆101 · Updated 4 months ago
- Annotated version of the Mamba paper ☆470 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated last month
- Prune transformer layers ☆67 · Updated 8 months ago
- A compact LLM pretrained in 9 days using high-quality data ☆282 · Updated 2 months ago
- Official PyTorch implementation of QA-LoRA ☆122 · Updated 10 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆225 · Updated 2 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 9 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆140 · Updated 10 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆150 · Updated 3 weeks ago
- Set of scripts to finetune LLMs ☆36 · Updated 10 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆173 · Updated 4 months ago