Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch
☆1,920 · Apr 27, 2026 · Updated this week
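For context on what these repositories implement: the core of BitNet is a drop-in replacement for `nn.Linear` whose weights are binarized to {-1, +1}. Below is a minimal sketch, not the repository's actual code: it assumes sign-based binarization around the weight mean, rescaling by the mean absolute value, and a straight-through estimator for gradients, while omitting details such as activation quantization and normalization placement from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Sketch of a 1-bit linear layer in the spirit of BitNet.

    Weights are binarized to +/-1 around their mean and rescaled by the
    mean absolute value; a straight-through estimator keeps the layer
    trainable. Simplified relative to the paper (no activation
    quantization, no LayerNorm).
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        alpha = w.mean()              # centering offset
        beta = w.abs().mean()         # scaling factor
        # Binarize: sign of the zero-centered weights, scaled by beta.
        w_bin = torch.sign(w - alpha) * beta
        # Straight-through estimator: forward uses w_bin,
        # backward flows through the full-precision w.
        w_q = w + (w_bin - w).detach()
        return F.linear(x, w_q, self.bias)

layer = BitLinear(16, 8)
out = layer(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 8])
```

The straight-through trick is what makes quantization-aware training possible: the quantized weights are used in the forward pass, but gradients update the latent full-precision copy.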
Alternatives and similar repositories for BitNet
Users interested in BitNet are comparing it to the libraries listed below.
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆315 · Mar 17, 2024 · Updated 2 years ago
- 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆98 · Mar 1, 2024 · Updated 2 years ago
- Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,318 · Feb 26, 2026 · Updated 2 months ago
- ☆70 · Mar 1, 2024 · Updated 2 years ago
- Experimental BitNet Implementation ☆74 · Nov 27, 2025 · Updated 5 months ago
- Official inference framework for 1-bit LLMs ☆38,495 · Mar 10, 2026 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Oct 15, 2024 · Updated last year
- Implementation for MatMul-free LM. ☆3,056 · Dec 2, 2025 · Updated 4 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆931 · Feb 26, 2026 · Updated 2 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆229 · Jan 11, 2025 · Updated last year
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,418 · Nov 29, 2024 · Updated last year
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch ☆20 · Mar 2, 2024 · Updated 2 years ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,689 · Oct 28, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆7,023 · Mar 15, 2026 · Updated last month
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆762 · Aug 6, 2025 · Updated 8 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,107 · Jan 23, 2026 · Updated 3 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆8,168 · Apr 20, 2026 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,947 · May 3, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,492 · Updated this week
- Mamba SSM architecture ☆18,118 · Updated this week
- BitLinear implementation ☆35 · Jan 1, 2026 · Updated 3 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,796 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,541 · Nov 9, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,512 · Jul 17, 2025 · Updated 9 months ago
- PB-LLM: Partially Binarized Large Language Models ☆155 · Nov 20, 2023 · Updated 2 years ago
- ☆590 · Oct 29, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆23,563 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,892 · Jun 10, 2024 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆718 · Aug 13, 2024 · Updated last year
- ☆19 · Apr 29, 2024 · Updated 2 years ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆953 · Jun 5, 2025 · Updated 10 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆337 · Apr 10, 2026 · Updated 2 weeks ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,198 · Aug 22, 2025 · Updated 8 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,293 · Mar 27, 2024 · Updated 2 years ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Apr 11, 2024 · Updated 2 years ago
- Collection of autoregressive model implementation ☆85 · Feb 23, 2026 · Updated 2 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Nov 3, 2023 · Updated 2 years ago
- Efficient Triton Kernels for LLM Training ☆6,315 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,326 · Updated this week