An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
☆155 · updated Oct 15, 2024
Alternatives and similar repositories for BitMat
Users that are interested in BitMat are comparing it to the libraries listed below
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · updated Jun 20, 2024
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · updated Feb 26, 2026
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · updated Aug 6, 2025
- ☆202 · updated Dec 5, 2024
- 1.58-bit LLaMa model ☆83 · updated Apr 3, 2024
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · updated Mar 29, 2024
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,900 · updated Feb 6, 2026
- Manage ML configuration with pydantic ☆16 · updated this week
- ☆48 · updated Aug 29, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,684 · updated Oct 28, 2024
- An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · updated Feb 3, 2025
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" ☆181 · updated Apr 19, 2024
- 0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆313 · updated Mar 17, 2024
- ACL 2023 ☆39 · updated Jun 6, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · updated Nov 3, 2023
- PyTorch implementation of the NeurIPS 2022 paper "Learning Best Combination for Efficient N:M Sparsity" ☆22 · updated Jan 13, 2023
- ☆68 · updated Mar 21, 2025
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆276 · updated Jul 16, 2025
- Experimental BitNet implementation ☆74 · updated Nov 27, 2025
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆507 · updated Aug 26, 2024
- Supercharge Hugging Face Transformers with model parallelism. ☆78 · updated Jul 23, 2025
- [ACL 2024] A novel QAT framework with self-distillation to enhance ultra-low-bit LLMs. ☆134 · updated May 16, 2024
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆31 · updated Mar 12, 2024
- GPTQ inference Triton kernel ☆321 · updated May 18, 2023
- ☆138 · updated Aug 19, 2024
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · updated Apr 24, 2024
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆118 · updated May 19, 2025
- Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" (https://arxiv.org/pdf/2401.06118.p…) ☆1,315 · updated Feb 26, 2026
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆684 · updated Aug 22, 2024
- An 8-/16-/32-/64-bit floating-point number family ☆16 · updated Feb 4, 2022
- Tools for merging pretrained large language models. ☆6,867 · updated Mar 15, 2026
- The heart of the Pulsar app: fast, secure, shared inference with a modern UI ☆59 · updated Dec 1, 2024
- Let's create synthetic textbooks together :) ☆76 · updated Jan 29, 2024
- Using Fourier interpolation to merge large language models ☆11 · updated Jan 6, 2026
- Train your own small BitNet model ☆78 · updated Oct 20, 2024
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) ☆18 · updated Jun 22, 2022
- A set of scripts to finetune LLMs ☆38 · updated Mar 30, 2024
- ☆29 · updated Oct 9, 2024
- An unsupervised model-merging algorithm for Transformer-based language models. ☆108 · updated Apr 29, 2024