sunildkumar / lora_from_scratch
Implements Low-Rank Adaptation (LoRA) finetuning from scratch
☆80 · Updated 2 years ago
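For orientation, below is a minimal sketch of the core LoRA idea in PyTorch. The class name `LoRALinear`, the rank `r`, and the `alpha` scaling shown here are illustrative assumptions, not code taken from this repository.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: W' = W + (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weight and bias
        self.scale = alpha / r
        # A maps activations down to rank r, B maps back up; B starts at zero so the
        # wrapped layer initially behaves exactly like the frozen base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale


# Example: adapt a single 768-dim projection; only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(2, 10, 768))  # (batch, seq, hidden) -> same shape
```

In a full finetuning run, each targeted `nn.Linear` in the pretrained model would be wrapped this way and only the A/B parameters passed to the optimizer.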
Alternatives and similar repositories for lora_from_scratch
Users who are interested in lora_from_scratch are comparing it to the libraries listed below.
- Implementation of the Llama architecture with RLHF + Q-learning ☆167 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆102 · Updated 9 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks ☆120 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆211 · Updated last year
- ☆81 · Updated last year
- ☆49 · Updated last year
- ☆88 · Updated last year
- Code repository for Black Mamba ☆256 · Updated last year
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- ☆95 · Updated 2 years ago
- minimal GRPO implementation from scratch ☆98 · Updated 6 months ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆230 · Updated last year
- This repository's goal is to precompile all past presentations of the Hugging Face reading group ☆48 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆288 · Updated 7 months ago
- Prune transformer layers ☆69 · Updated last year
- ☆91 · Updated last year
- This is the code that went into our practical dive on using Mamba for information extraction ☆55 · Updated last year
- Set of scripts to finetune LLMs ☆38 · Updated last year
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ☆92 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆164 · Updated 3 months ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- σ-GPT: A New Approach to Autoregressive Models ☆68 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated last year
- An open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 7 months ago
- An extension of the nanoGPT repository for training small MoE models ☆196 · Updated 7 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated last year
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated last week
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago