astramind-ai / BitMat
An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
☆154 · Updated last month
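For context, the referenced paper (BitNet b1.58) quantizes weights to the ternary set {-1, 0, +1} with an absmean scale. Below is a minimal sketch of that quantization step, assuming plain PyTorch; it is not BitMat's actual Triton kernels, and `absmean_ternary_quantize` is an illustrative name.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Illustrative absmean quantization from "The Era of 1-bit LLMs"
    (not BitMat's kernel): W~ = RoundClip(W / (gamma + eps), -1, 1)."""
    # Per-tensor scale gamma = mean(|W|); eps keeps it away from zero.
    scale = w.abs().mean() + eps
    # Round to the nearest integer, then clip into the ternary set {-1, 0, +1}.
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = absmean_ternary_quantize(w)
print(w_q)          # ternary matrix, entries in {-1., 0., 1.}
print(w_q * scale)  # dequantized approximation of w
```

Each ternary weight carries log2(3) ≈ 1.58 bits, which is what allows the packed storage and multiplication-free matrix products the paper highlights.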
Related projects
Alternatives and complementary repositories for BitMat
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆173 · Updated 4 months ago
- PyTorch implementation of models from the Zamba2 series. ☆158 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆181 · Updated last month
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆196 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated 2 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆262 · Updated last year
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆226 · Updated last month
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs. ☆74 · Updated last month
- PB-LLM: Partially Binarized Large Language Models ☆148 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆187 · Updated this week
- Inference of Mamba models in pure C ☆178 · Updated 8 months ago
- 1.58-bit LLaMa model ☆79 · Updated 7 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆318 · Updated 3 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆132 · Updated 7 months ago
- For releasing code related to compression methods for transformers, accompanying our publications ☆373 · Updated last month
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆113 · Updated 3 weeks ago
- Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for t…" ☆248 · Updated this week
- Low-Rank adapter extraction for fine-tuned transformer models ☆162 · Updated 6 months ago
- A pipeline for LLM knowledge distillation ☆78 · Updated 3 months ago
- Token Omission Via Attention ☆121 · Updated last month
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆350 · Updated 8 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆232 · Updated 5 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆238 · Updated 4 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆112 · Updated last year