TheSeriousProgrammer / SimpleBitNet
Simple Adaptation of BitNet
☆31 · Updated 11 months ago
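BitNet replaces a transformer's full-precision linear layers with layers whose weights are constrained to the ternary values {-1, 0, +1} (BitNet b1.58, "The Era of 1-bit LLMs"). Below is a minimal PyTorch sketch of that idea, assuming absmean weight quantization with a straight-through estimator; the `BitLinear` name and the omission of the paper's activation quantization and normalization are illustrative simplifications, not SimpleBitNet's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Hypothetical sketch of a BitNet b1.58-style linear layer."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Absmean scale from the b1.58 paper: gamma = mean(|W|).
        gamma = self.weight.abs().mean().clamp(min=1e-8)
        # Ternarize: round(W / gamma), clip to {-1, 0, +1}, then rescale.
        w_q = (self.weight / gamma).round().clamp(-1, 1) * gamma
        # Straight-through estimator: the forward pass sees the quantized
        # weights, the backward pass flows through the latent fp weights.
        w = self.weight + (w_q - self.weight).detach()
        return F.linear(x, w, self.bias)

# Usage: a drop-in replacement for nn.Linear.
layer = BitLinear(64, 32)
out = layer(torch.randn(4, 64))
print(out.shape)  # torch.Size([4, 32])
```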
Alternatives and similar repositories for SimpleBitNet:
Users interested in SimpleBitNet are comparing it to the libraries listed below.
- LoRA and DoRA from Scratch Implementations ☆198 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆230 · Updated 4 months ago
- Collection of autoregressive model implementations ☆83 · Updated last month
- Notebooks for fine-tuning PaliGemma ☆97 · Updated 3 months ago
- Google TPU optimizations for transformers models ☆104 · Updated 2 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆196 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆42 · Updated 10 months ago
- Minimal GRPO implementation from scratch ☆62 · Updated 2 weeks ago
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆120 · Updated 8 months ago
- Tune MPTs ☆84 · Updated last year
- ☆158 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆198 · Updated 10 months ago
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆87 · Updated 8 months ago
- Set of scripts to finetune LLMs ☆37 · Updated 11 months ago
- Pretraining and finetuning for visual instruction following with Mixture of Experts ☆12 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older ☆179 · Updated 6 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆163 · Updated last month
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆98 · Updated 3 months ago
- Prune transformer layers ☆68 · Updated 9 months ago
- ☆131 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 5 months ago
- ☆120 · Updated 4 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated 11 months ago
- From-scratch implementation of a vision language model in pure PyTorch ☆205 · Updated 10 months ago
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets ☆156 · Updated 11 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆99 · Updated last year
- Code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆87 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆277 · Updated last month