An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
☆155 · Updated Oct 15, 2024
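The paper BitMat implements, "The Era of 1-bit LLMs" (BitNet b1.58), constrains weights to the ternary set {-1, 0, +1} via an absmean scaling rule: each weight is divided by the mean absolute value of the tensor, rounded, and clipped. A minimal NumPy sketch of that quantization rule follows; this is illustrative only, not BitMat's actual kernel, and the function name and `eps` constant are my own choices:

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-5):
    """Quantize a weight tensor to {-1, 0, +1} using the absmean rule
    from "The Era of 1-bit LLMs" (BitNet b1.58). Returns the ternary
    matrix and the per-tensor scale gamma (used to dequantize)."""
    gamma = np.mean(np.abs(w)) + eps           # per-tensor absmean scale
    w_ternary = np.clip(np.round(w / gamma), -1, 1)
    return w_ternary.astype(np.int8), gamma

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, gamma = absmean_ternary_quantize(w)
print(np.unique(q))   # every entry is -1, 0, or +1
print(gamma)          # scale for approximate dequantization: w ≈ gamma * q
```

Because the quantized weights take only three values, the matmul reduces to additions and subtractions of scaled activations, which is what makes dedicated kernels (Triton, BitBLAS, ASICs) attractive for this format.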
Alternatives and similar repositories for BitMat
Users interested in BitMat are comparing it to the libraries listed below.
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆175 · Updated Jun 20, 2024
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆758 · Updated Aug 6, 2025
- Official implementation of Half-Quadratic Quantization (HQQ) ☆925 · Updated Feb 26, 2026
- ☆202 · Updated Dec 5, 2024
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · Updated Mar 29, 2024
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,910 · Updated Mar 20, 2026
- Manage ML configuration with pydantic ☆16 · Updated Mar 18, 2026
- 1.58-bit LLaMa model ☆83 · Updated Apr 3, 2024
- ☆48 · Updated Aug 29, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,683 · Updated Oct 28, 2024
- An open-sourced PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · Updated Feb 3, 2025
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" ☆186 · Updated Apr 19, 2024
- 0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆315 · Updated Mar 17, 2024
- ACL 2023 ☆39 · Updated Jun 6, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆280 · Updated Nov 3, 2023
- PyTorch implementation of our paper accepted at NeurIPS 2022, "Learning Best Combination for Efficient N:M Sparsity" ☆22 · Updated Jan 13, 2023
- ☆68 · Updated Mar 21, 2025
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆278 · Updated Jul 16, 2025
- Experimental BitNet implementation ☆74 · Updated Nov 27, 2025
- Supercharge Hugging Face Transformers with model parallelism. ☆78 · Updated Jul 23, 2025
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆510 · Updated Aug 26, 2024
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆31 · Updated Mar 12, 2024
- GPTQ inference Triton kernel ☆321 · Updated May 18, 2023
- [ACL 2024] A novel QAT-with-self-distillation framework to enhance ultra-low-bit LLMs. ☆136 · Updated May 16, 2024
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated Sep 12, 2024
- ☆138 · Updated Aug 19, 2024
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · Updated Apr 24, 2024
- Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" (https://arxiv.org/pdf/2401.06118.p…) ☆1,314 · Updated Feb 26, 2026
- Accelerate your Hugging Face Transformers by 7.6-9x. Native to Hugging Face and PyTorch. ☆684 · Updated Aug 22, 2024
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆125 · Updated May 19, 2025
- An 8-/16-/32-/64-bit floating-point number family ☆16 · Updated Feb 4, 2022
- The heart of the Pulsar App: fast, secure, and shared inference with a modern UI ☆59 · Updated Dec 1, 2024
- Tools for merging pretrained large language models. ☆6,945 · Updated Mar 15, 2026
- Let's create synthetic textbooks together :) ☆76 · Updated Jan 29, 2024
- Using Fourier interpolation to merge large language models ☆11 · Updated Jan 6, 2026
- Train your own small BitNet model ☆78 · Updated Oct 20, 2024
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) ☆18 · Updated Jun 22, 2022
- ☆29 · Updated Oct 9, 2024
- Set of scripts to fine-tune LLMs ☆38 · Updated Mar 30, 2024