rasbt / dora-from-scratch
LoRA and DoRA from Scratch Implementations
☆ 214 (updated last year)
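For context, below is a minimal sketch of the LoRA and DoRA update rules this repository implements from scratch, in plain PyTorch. It is an illustrative sketch of the published DoRA formulation (adapted weight decomposed into a learnable magnitude and a unit-norm direction), not the repository's code verbatim; the class names and the `rank` and `alpha` values are placeholder choices.

```python
# Illustrative sketch of LoRA and DoRA, not the repository's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALayer(nn.Module):
    """LoRA: learn a low-rank update x @ (A @ B), scaled by alpha."""
    def __init__(self, in_dim, out_dim, rank, alpha):
        super().__init__()
        self.A = nn.Parameter(torch.randn(in_dim, rank) / rank ** 0.5)
        self.B = nn.Parameter(torch.zeros(rank, out_dim))  # zero init: no update at start
        self.alpha = alpha

    def forward(self, x):
        return self.alpha * (x @ self.A @ self.B)

class LinearWithDoRA(nn.Module):
    """DoRA: decompose the adapted weight W + alpha * (A @ B)^T into a
    learnable column-wise magnitude m and a unit-norm direction."""
    def __init__(self, linear, rank=4, alpha=8):
        super().__init__()
        self.linear = linear  # frozen pretrained nn.Linear
        self.lora = LoRALayer(linear.in_features, linear.out_features, rank, alpha)
        # Magnitude vector, initialized to the column norms of the pretrained weight
        self.m = nn.Parameter(linear.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x):
        lora_update = self.lora.alpha * (self.lora.A @ self.lora.B).T  # (out, in)
        adapted = self.linear.weight + lora_update
        direction = adapted / adapted.norm(p=2, dim=0, keepdim=True)  # unit-norm columns
        return F.linear(x, self.m * direction, self.linear.bias)

# Usage: wrap a pretrained layer, freeze the base weights, train only m, A, B.
layer = nn.Linear(16, 32)
dora = LinearWithDoRA(layer, rank=4, alpha=8)
for p in dora.linear.parameters():
    p.requires_grad = False
out = dora(torch.randn(2, 16))  # shape: (2, 32)
```

Plain LoRA trains only A and B; DoRA additionally learns the magnitude vector m, which is what the weight decomposition adds over LoRA.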
Alternatives and similar repositories for dora-from-scratch
Users interested in dora-from-scratch are comparing it to the repositories listed below.
- Implementation of DoRA (☆ 306, updated last year)
- An extension of the nanoGPT repository for training small MoE models (☆ 219, updated 9 months ago)
- ☆ 229 (updated last year)
- A minimal GRPO implementation from scratch (☆ 100, updated 9 months ago)
- A set of scripts and notebooks on LLM fine-tuning and dataset creation (☆ 112, updated last year)
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" (☆ 186, updated last month)
- From-scratch implementation of a vision-language model in pure PyTorch (☆ 254, updated last year)
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing (☆ 97, updated last year)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients (☆ 202, updated last year)
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (☆ 349, updated 7 months ago)
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… (☆ 74, updated 2 years ago)
- ☆ 204 (updated last year)
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind (☆ 179, updated last year)
- LLM Workshop by Sourab Mangrulkar (☆ 398, updated last year)
- ☆ 225 (updated last month)
- Set of scripts to fine-tune LLMs (☆ 38, updated last year)
- A compact LLM pretrained in 9 days using high-quality data (☆ 337, updated 8 months ago)
- Distributed training (multi-node) of a Transformer model (☆ 90, updated last year)
- A repository of awesome resources for Hugging Face tooling (☆ 48, updated last year)
- Let's build better datasets, together! (☆ 267, updated last year)
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning (☆ 363, updated last year)
- ☆ 134 (updated 2 years ago)
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) (☆ 445, updated 7 months ago)
- Implementation of the Llama architecture with RLHF + Q-learning (☆ 168, updated 10 months ago)
- Prune transformer layers (☆ 74, updated last year)
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) (☆ 162, updated 8 months ago)
- Code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… (☆ 92, updated 2 years ago)
- A work in progress that tries to cover all the interesting or necessary pieces in the current development of LLMs and generative AI. Gra… (☆ 199, updated 2 years ago)
- ☆ 38 (updated last year)
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch (☆ 118, updated 2 years ago)