rasbt / dora-from-scratch
LoRA and DoRA from Scratch Implementations
☆211 · Updated last year
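For context on what the repository covers: LoRA adds a trainable low-rank update B·A to a frozen weight matrix, and DoRA additionally decomposes the merged weight into a unit-norm direction and a learned per-column magnitude vector (W = m · V/‖V‖). The sketch below illustrates both ideas in PyTorch; the class and parameter names are illustrative, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALayer(nn.Module):
    """Trainable low-rank update: delta_W = (alpha / rank) * (B @ A)."""
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.scaling = alpha / rank
        # A gets a small random init and B starts at zero, so the update
        # is zero at the start of finetuning and the base model is unchanged.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def delta(self):
        return self.scaling * (self.B @ self.A)  # (out_features, in_features)


class DoRALinear(nn.Module):
    """DoRA on top of a frozen nn.Linear: the LoRA-updated weight is
    split into a unit-norm direction and a learned magnitude vector m."""
    def __init__(self, linear: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.linear = linear  # pretrained layer, weights kept frozen
        self.lora = LoRALayer(linear.in_features, linear.out_features, rank, alpha)
        # m is initialized to the column-wise norm of the pretrained weight,
        # so the wrapped layer starts out equivalent to the original one.
        self.m = nn.Parameter(linear.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x):
        v = self.linear.weight + self.lora.delta()  # merged weight, (out, in)
        v = v / v.norm(p=2, dim=0, keepdim=True)    # normalize each column
        return F.linear(x, self.m * v, self.linear.bias)


# Usage: wrap a layer, freeze the base weights, train only m, A, and B.
base = nn.Linear(768, 768)
dora = DoRALinear(base, rank=8, alpha=16.0)
for p in dora.linear.parameters():
    p.requires_grad = False
y = dora(torch.randn(2, 768))  # shape (2, 768)
```

Because B is zero and m equals the pretrained column norms at initialization, the wrapped layer reproduces the original layer exactly before any training step.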
Alternatives and similar repositories for dora-from-scratch
Users interested in dora-from-scratch are comparing it to the libraries listed below.
- Implementation of DoRA ☆307 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆210 · Updated 8 months ago
- ☆230 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆167 · Updated 9 months ago
- Minimal GRPO implementation from scratch ☆99 · Updated 8 months ago
- A work in progress. Trying to write about all interesting or necessary pieces in the current development of LLMs and generative AI. Gra… ☆198 · Updated 2 years ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆346 · Updated 6 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- Code for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆91 · Updated 2 years ago
- From-scratch implementation of a vision language model in pure PyTorch ☆248 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆179 · Updated 7 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆96 · Updated 10 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆111 · Updated last year
- An implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆72 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- ☆202 · Updated 11 months ago
- ☆225 · Updated 3 weeks ago
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆290 · Updated 8 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 7 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆302 · Updated last week
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated 2 years ago
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆143 · Updated last year
- An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting and much muc… ☆191 · Updated 3 weeks ago
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆440 · Updated 6 months ago
- Distributed (multi-node) training of a Transformer model ☆86 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆159 · Updated last year