Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch
☆1,900 · Updated Feb 6, 2026
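For context on what these repositories implement, below is a minimal, hedged sketch of the paper's BitLinear idea (sign-binarized weights scaled by their mean absolute value, absmax-quantized activations, and a straight-through estimator for gradients). The class name and quantization details are illustrative assumptions, not code taken from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BitLinear(nn.Linear):
    """Illustrative sketch of the BitLinear layer from the BitNet paper.

    Weights are binarized to {-1, +1} around their mean and rescaled by
    their mean absolute value; activations get 8-bit absmax quantization.
    A straight-through estimator carries gradients past the sign/rounding
    steps. This is an assumption-laden approximation, not this repo's code.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Weight binarization: sign of zero-centered weights, scaled by beta.
        beta = w.abs().mean()
        w_bin = torch.sign(w - w.mean()) * beta
        w_q = w + (w_bin - w).detach()  # straight-through estimator

        # 8-bit absmax activation quantization (per-tensor for brevity).
        Qb = 127.0
        gamma = x.abs().max().clamp(min=1e-5)
        x_quant = torch.clamp((x * Qb / gamma).round(), -Qb, Qb) * (gamma / Qb)
        x_q = x + (x_quant - x).detach()  # straight-through estimator

        return F.linear(x_q, w_q, self.bias)
```

In the paper, BitLinear replaces nn.Linear inside the Transformer blocks; the full version also applies a LayerNorm to the activations before quantization, which is omitted in this sketch.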
Alternatives and similar repositories for BitNet
Users who are interested in BitNet are comparing it to the libraries listed below.
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆313 · Updated Mar 17, 2024
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆98 · Updated Mar 1, 2024
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,315 · Updated Feb 26, 2026
- ☆70 · Updated Mar 1, 2024
- Official inference framework for 1-bit LLMs ☆35,906 · Updated Mar 10, 2026
- Experimental BitNet Implementation ☆74 · Updated Nov 27, 2025
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated Oct 15, 2024
- Implementation for MatMul-free LM. ☆3,060 · Updated Dec 2, 2025
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Updated Feb 26, 2026
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆228 · Updated Jan 11, 2025
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,406 · Updated Nov 29, 2024
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch☆20Mar 2, 2024Updated 2 years ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,684 · Updated Oct 28, 2024
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · Updated Aug 6, 2025
- Tools for merging pretrained large language models. ☆6,867 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,046 · Updated Jan 23, 2026
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,911 · Updated May 3, 2024
- Mamba SSM architecture ☆17,524 · Updated this week
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,419 · Updated Mar 5, 2026
- BitLinear implementation ☆35 · Updated Jan 1, 2026
- PyTorch native quantization and sparsity for training and inference ☆2,730 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,540 · Updated Nov 9, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,463 · Updated Jul 17, 2025
- ☆580 · Updated Oct 29, 2024
- PB-LLM: Partially Binarized Large Language Models ☆156 · Updated Nov 20, 2023
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆713 · Updated Aug 13, 2024
- ☆19 · Updated Apr 29, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,850 · Updated Jun 10, 2024
- Low-bit LLM inference on CPU/NPU with lookup table ☆932 · Updated Jun 5, 2025
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆330 · Updated Nov 26, 2025
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,187 · Updated Aug 22, 2025
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,266 · Updated Mar 27, 2024
- Collection of autoregressive model implementations ☆85 · Updated Feb 23, 2026
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · Updated Nov 3, 2023
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,228 · Updated Mar 6, 2026
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆134 · Updated May 16, 2024