kyegomez / BitNet
Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch
☆1,870 · Updated 2 weeks ago
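For context, the repository implements the paper's 1-bit linear layers. Below is a minimal, hedged sketch of that idea: weights are binarized to {-1, +1} with a per-tensor scale, and a straight-through estimator keeps gradients flowing to the latent full-precision weights. The class name, scaling choice, and estimator details here are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinearSketch(nn.Module):
    """Illustrative 1-bit linear layer (not the kyegomez/BitNet API)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Latent full-precision weights that receive the gradients.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        w_centered = w - w.mean()
        scale = w_centered.abs().mean()          # per-tensor scaling factor
        w_bin = torch.sign(w_centered) * scale   # weights quantized to {-scale, +scale}
        # Straight-through estimator: forward uses the binarized weights,
        # backward treats the binarization as identity.
        w_ste = w + (w_bin - w).detach()
        return F.linear(x, w_ste)

# usage sketch
layer = BitLinearSketch(512, 512)
y = layer(torch.randn(2, 16, 512))
```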
Alternatives and similar repositories for BitNet
Users interested in BitNet are comparing it to the libraries listed below
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆305 · Updated last year
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,356 · Updated 9 months ago
- Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,286 · Updated 3 weeks ago
- ☆997 · Updated 6 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,591 · Updated 10 months ago
- Training LLMs with QLoRA + FSDP ☆1,526 · Updated 9 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆686 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,168 · Updated 10 months ago
- llama3.np is a pure NumPy implementation for the Llama 3 model. ☆988 · Updated 4 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,291 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆868 · Updated 2 weeks ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆907 · Updated 4 months ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,061 · Updated last week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,229 · Updated last month
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆928 · Updated last year
- Run Mixtral-8x7B models in Colab or consumer desktops ☆2,315 · Updated last year
- A simple, performant and scalable Jax LLM! ☆1,885 · Updated this week
- A pytorch quantization backend for optimum ☆984 · Updated this week
- A PyTorch native platform for training generative AI models ☆4,311 · Updated this week
- Implementation for MatMul-free LM. ☆3,032 · Updated last month
- The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling ☆721 · Updated 9 months ago
- TinyChatEngine: On-Device LLM Inference Library ☆887 · Updated last year
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,395 · Updated this week
- An Extensible Deep Learning Library ☆2,233 · Updated this week
- [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆842 · Updated 3 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,171 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,235 · Updated 3 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,606 · Updated last year
- VPTQ, A Flexible and Extreme low-bit quantization algorithm ☆652 · Updated 4 months ago