TheSeriousProgrammer / SimpleBitNet
Simple Adaptation of BitNet
☆32, updated last year
Alternatives and similar repositories for SimpleBitNet
Users interested in SimpleBitNet are comparing it to the libraries listed below.
- ☆69, updated last year
- Set of scripts to finetune LLMs (☆37, updated last year)
- LoRA and DoRA from-scratch implementations (☆203, updated last year)
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (☆153, updated 7 months ago)
- Training small GPT-2-style models using Kolmogorov–Arnold networks (☆117, updated last year)
- Fast, modern, memory-efficient, and low-precision PyTorch optimizers (☆93, updated 10 months ago)
- ☆80, updated last year
- A repository for log-time feedforward networks (☆222, updated last year)
- The code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… (☆91, updated last year)
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (☆100, updated 5 months ago)
- The AdEMAMix Optimizer: Better, Faster, Older (☆183, updated 8 months ago)
- ☆130, updated 9 months ago
- Notebooks for fine-tuning PaliGemma (☆107, updated last month)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients (☆199, updated 10 months ago)
- Playground for Transformers (☆51, updated last year)
- Code repository for BlackMamba (☆246, updated last year)
- Miscellaneous utility functions, decorators, and modules for PyTorch and Accelerate to help speed up implementation of new… (☆120, updated 10 months ago)
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ (☆103, updated 2 years ago)
- Pretraining and finetuning for visual instruction following with Mixture of Experts (☆15, updated last year)
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public (☆83, updated 3 months ago)
- The code behind our practical dive into using Mamba for information extraction (☆53, updated last year)
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs (☆41, updated last year)
- Collection of autoregressive model implementations (☆85, updated last month)
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… (☆289, updated last year)
- Implementation of a Light Recurrent Unit in PyTorch (☆47, updated 7 months ago)
- Memory-efficient CUDA kernels for training ConvNets with PyTorch (☆41, updated 3 months ago)
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning (☆356, updated 9 months ago)
- Prune transformer layers (☆69, updated last year)
- Implementation of Mamba in Rust (☆85, updated last year)
- ☆29, updated last month