TheSeriousProgrammer / SimpleBitNet
Simple Adaptation of BitNet
☆32 · Updated last year
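
SimpleBitNet adapts the BitNet idea of replacing standard linear layers with 1-bit (or ternary, 1.58-bit) quantized layers. The repository defines its own layers; purely as illustration, below is a minimal PyTorch sketch of a BitNet-b1.58-style layer with a straight-through estimator. The class name `BitLinear` and the per-tensor scaling choice are assumptions for this sketch, not this repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BitLinear(nn.Linear):
    """Sketch of a BitNet-b1.58-style linear layer (illustrative, not the repo's code).

    Weights are quantized to {-1, 0, +1} with a per-tensor scale; a
    straight-through estimator keeps gradients flowing to the
    full-precision shadow weights during training.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Per-tensor scale: mean absolute value of the weights.
        scale = w.abs().mean().clamp(min=1e-5)
        # Round-and-clip to ternary values, then rescale.
        w_q = (w / scale).round().clamp(-1, 1) * scale
        # Straight-through estimator: quantized weights in the forward pass,
        # identity gradient to the full-precision weights in the backward pass.
        w_q = w + (w_q - w).detach()
        return F.linear(x, w_q, self.bias)


# Example: drop-in replacement for nn.Linear.
layer = BitLinear(128, 64)
out = layer(torch.randn(2, 128))  # shape: (2, 64)
```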
Alternatives and similar repositories for SimpleBitNet
Users interested in SimpleBitNet are comparing it to the libraries listed below.
- ☆69 · Updated last year
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆120 · Updated last year
- A repository for log-time feedforward networks ☆222 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆206 · Updated last year
- Code repository for Black Mamba ☆250 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 10 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆153 · Updated 9 months ago
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence). ☆120 · Updated 9 months ago
- The repository for the code of the UltraFastBERT paper ☆516 · Updated last year
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆173 · Updated 3 months ago
- Annotated version of the Mamba paper ☆486 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 8 months ago
- Miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆124 · Updated 11 months ago
- Set of scripts to fine-tune LLMs ☆37 · Updated last year
- Notebooks for fine-tuning PaliGemma ☆111 · Updated 3 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆450 · Updated 4 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆112 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆50 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆290 · Updated last year
- Kolmogorov-Arnold Networks (KAN) using Chebyshev polynomials instead of B-splines. ☆381 · Updated last year
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆282 · Updated 4 months ago
- ☆304 · Updated last year
- ☆133 · Updated last year
- Build high-performance AI models with modular building blocks ☆533 · Updated this week
- Variations of Kolmogorov-Arnold Networks ☆115 · Updated last year
- A JAX-based library for building transformers, including implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more. ☆290 · Updated 10 months ago