Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch
☆1,894 · Feb 6, 2026 · Updated 3 weeks ago
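For context on what these repositories implement: BitNet's core building block is a linear layer whose weights are binarized to ±1 in the forward pass, while gradients flow to latent full-precision weights via a straight-through estimator. A minimal sketch, assuming the paper's BitLinear formulation (zero-centered sign binarization with a mean-absolute scale β); the class and variable names here are illustrative, not this repository's API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Sketch of a BitNet-style 1-bit linear layer (illustrative, not the repo's API).

    Forward pass uses sign-binarized weights scaled by beta; the
    straight-through estimator lets gradients reach the latent
    full-precision weights.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Zero-center the weights, then binarize to {-1, +1}.
        # (torch.sign yields 0 for exact zeros; negligible for random floats.)
        alpha = w.mean()
        w_bin = torch.sign(w - alpha)
        # beta rescales the binary weights to match the original magnitude.
        beta = (w - alpha).abs().mean()
        # Straight-through estimator: forward value is w_bin * beta,
        # but the gradient is taken as if w_q were w itself.
        w_q = w + (w_bin * beta - w).detach()
        return F.linear(x, w_q, self.bias)

layer = BitLinear(16, 8, bias=False)
y = layer(torch.randn(2, 16))
print(y.shape)  # torch.Size([2, 8])
```

Many of the listed projects (BitNet-Transformers, BiLLM, PB-LLM, the 1-bit inference frameworks) revolve around variations of this weight-binarization idea, differing mainly in where quantization is applied (training vs. post-training) and how activations are handled.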
Alternatives and similar repositories for BitNet
Users that are interested in BitNet are comparing it to the libraries listed below
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆312 · Mar 17, 2024 · Updated last year
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,315 · Aug 8, 2025 · Updated 6 months ago
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆98 · Mar 1, 2024 · Updated last year
- Official inference framework for 1-bit LLMs ☆28,640 · Feb 3, 2026 · Updated 3 weeks ago
- Implementation for MatMul-free LM. ☆3,057 · Dec 2, 2025 · Updated 2 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆913 · Dec 18, 2025 · Updated 2 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Oct 15, 2024 · Updated last year
- ☆70 · Mar 1, 2024 · Updated 2 years ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,677 · Oct 28, 2024 · Updated last year
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,399 · Nov 29, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,814 · Jan 26, 2026 · Updated last month
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · May 3, 2024 · Updated last year
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆228 · Jan 11, 2025 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆7,997 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,033 · Jan 23, 2026 · Updated last month
- PyTorch native quantization and sparsity for training and inference ☆2,696 · Feb 22, 2026 · Updated last week
- Mamba SSM architecture ☆17,257 · Feb 18, 2026 · Updated last week
- Training LLMs with QLoRA + FSDP ☆1,537 · Nov 9, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,375 · Feb 21, 2026 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆751 · Aug 6, 2025 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆22,361 · Updated this week
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆20 · Mar 2, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,838 · Jun 10, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,441 · Jul 17, 2025 · Updated 7 months ago
- PyTorch native post-training library ☆5,689 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,182 · Feb 22, 2026 · Updated last week
- High-speed Large Language Model Serving for Local Deployment ☆8,729 · Jan 24, 2026 · Updated last month
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,708 · Jun 25, 2024 · Updated last year
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,184 · Aug 22, 2025 · Updated 6 months ago
- ☆580 · Oct 29, 2024 · Updated last year
- Tensor library for machine learning ☆14,152 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,444 · Dec 9, 2025 · Updated 2 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆279 · Nov 3, 2023 · Updated 2 years ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆327 · Nov 26, 2025 · Updated 3 months ago
- Efficient Triton Kernels for LLM Training ☆6,162 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,329 · Jul 1, 2024 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆156 · Nov 20, 2023 · Updated 2 years ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,235 · May 8, 2024 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Jul 23, 2024 · Updated last year