An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
☆155 · updated Oct 15, 2024
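The paper referenced above ("The Era of 1-bit LLMs") quantizes weights to the ternary set {-1, 0, +1} using a per-tensor absmean scale. A minimal NumPy sketch of that quantization step (a plain re-derivation of the paper's formula, not BitMat's fused kernels; the function name is hypothetical):

```python
import numpy as np

def absmean_ternary_quantize(w, eps=1e-5):
    """Quantize a weight matrix to {-1, 0, +1} per BitNet b1.58's absmean scheme.

    scale = mean(|W|); W_q = clip(round(W / scale), -1, 1).
    Dequantize (approximately) as W_q * scale.
    """
    scale = max(np.abs(w).mean(), eps)        # per-tensor absmean scale
    w_q = np.clip(np.round(w / scale), -1, 1)  # ternary weights
    return w_q, scale

# Usage: quantize a toy weight matrix and reconstruct an approximation.
w = np.array([[0.4, -1.2, 0.05], [2.0, -0.3, 0.0]])
w_q, scale = absmean_ternary_quantize(w)
w_approx = w_q * scale
```

With ternary weights, the matrix multiply reduces to additions and subtractions, which is what efficient implementations like BitMat exploit with custom kernels.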
Alternatives and similar repositories for BitMat
Users interested in BitMat are comparing it to the libraries listed below.
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆177 · updated Jun 20, 2024)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆913 · updated Dec 18, 2025)
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specifically optimized for diffusion models. (☆23 · updated Mar 29, 2024)
- BitBLAS is a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment. (☆751 · updated Aug 6, 2025)
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch (☆1,894 · updated Feb 6, 2026)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (☆1,677 · updated Oct 28, 2024)
- Tiny ASIC implementation of the matrix-multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" (☆178 · updated Apr 19, 2024)
- ACL 2023 (☆39 · updated Jun 6, 2023)
- 0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… (☆312 · updated Mar 17, 2024)
- Supercharge Hugging Face Transformers with model parallelism. (☆78 · updated Jul 23, 2025)
- An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications. (☆14 · updated Feb 3, 2025)
- Rust implementation of Surya (☆65 · updated Mar 1, 2025)
- ☆203 · updated Dec 5, 2024
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. (☆32 · updated Jun 5, 2025)
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… (☆31 · updated Mar 12, 2024)
- Track and collaborate on ML & AI experiments. (☆44 · updated Mar 10, 2025)
- PyTorch implementation of the NeurIPS 2022 paper "Learning Best Combination for Efficient N:M Sparsity" (☆22 · updated Jan 13, 2023)
- Lightweight toolkit for training and fine-tuning 1.58-bit language models (☆112 · updated May 19, 2025)
- ☆48 · updated Aug 29, 2024
- PyTorch-centric eager-mode debugger (☆48 · updated Dec 16, 2024)
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" (☆397 · updated Feb 24, 2024)
- Efficient vector database for hundreds of millions of embeddings. (☆212 · updated May 17, 2024)
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) (☆18 · updated Jun 22, 2022)
- BitLinear implementation (☆35 · updated Jan 1, 2026)
- An unsupervised model-merging algorithm for Transformers-based language models. (☆108 · updated Apr 29, 2024)
- Official repository for ORPO (☆471 · updated May 31, 2024)
- Let's create synthetic textbooks together :) (☆76 · updated Jan 29, 2024)
- Benchmarking Mobile Device Control Agents across Diverse Configurations (ICLR 2024 Workshop GenAI4DM spotlight presentation; CoLLAs 2025) (☆35 · updated Jul 21, 2025)
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). (☆277 · updated Jul 16, 2025)
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. (☆507 · updated Aug 26, 2024)
- ☆67 · updated Mar 21, 2025
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" (☆26 · updated Feb 9, 2026)
- Experimental BitNet implementation (☆74 · updated Nov 27, 2025)
- Adaptation of titans-pytorch to Llama models on HF (☆26 · updated Mar 6, 2025)
- Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" (https://arxiv.org/pdf/2401.06118.p…) (☆1,315 · updated Aug 8, 2025)
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models (☆286 · updated Mar 15, 2025)
- Tools for merging pretrained large language models. (☆6,814 · updated Jan 26, 2026)
- Set of scripts to fine-tune LLMs (☆38 · updated Mar 30, 2024)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". (☆279 · updated Nov 3, 2023)